this post was submitted on 23 Nov 2023
88 points (98.9% liked)
Fediverse
you are viewing a single comment's thread
view the rest of the comments
Maybe? I don't know. Is that even a relevant distinction in a decentralized system, where the application logic can live on either side of the network?
Because they are constrained by the "client-server" paradigm. If you spend some time working with decentralized apps that assume the data is available to any node on the network, all your "protocol" really needs to do is provide the primitives to query, pull, and push data around. I wrote a bit about this in an old blog post.
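A rough sketch of what I mean by those primitives, with completely made-up names (nothing here comes from a real spec):

```typescript
// Hypothetical sketch of the three primitives described above.
type ContentId = string; // e.g. a hash of the content

interface DataStore {
  // find content ids matching some criteria (author, type, time range, ...)
  query(filter: Record<string, unknown>): Promise<ContentId[]>;
  // fetch the bytes behind an id from whichever node happens to have them
  pull(id: ContentId): Promise<Uint8Array>;
  // publish new content to the network and get back its id
  push(data: Uint8Array): Promise<ContentId>;
}
```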
I think it's still relevant. I mean... It's turtles all the way down, but applications on equivalent layers need to share a common API.
I don't think it's reasonable to ask volunteer instance hosts to pay for bandwidth and storage for networks they don't want to host, so mirroring all ActivityPub content on all servers doesn't seem workable, especially if any of the networks takes off in popularity. Imagine if every single fediverse instance of any type needed to be Twitter-scale just because some Mastodon instance took off.
I think it's correct for servers of a specific network/type to only subscribe to messages of the type they care about, as a purely practical matter. It'd be nice if there was a fediverse standard used to announce capabilities, along with standards for common capabilities and restrictions, but there is none that I'm aware of.
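For illustration, a capability announcement could be as simple as something like this; it's entirely hypothetical, since (as I said) no such standard exists:

```typescript
// Purely hypothetical shape for a capability announcement document.
interface CapabilityAnnouncement {
  software: string;                       // e.g. "lemmy", "mastodon"
  acceptedObjectTypes: string[];          // e.g. ["Note", "Page"]
  capabilities: string[];                 // e.g. ["comments", "votes"]
  restrictions?: Record<string, unknown>; // e.g. attachment size limits
}

const example: CapabilityAnnouncement = {
  software: "lemmy",
  acceptedObjectTypes: ["Page", "Note"],
  capabilities: ["comments", "votes"],
  restrictions: { maxAttachmentBytes: 10_000_000 },
};
```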
In my dream world, servers are only relays. They don't store anything, unless a server wants to keep a copy for one of its clients, like POP3.
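Something like this toy sketch, where the relay forwards immediately and only buffers for clients that are offline (all names and types made up for illustration):

```typescript
// Toy sketch of "servers as relays" with optional POP3-style mailboxes.
type Message = { id: string; to: string; body: string };

class Relay {
  private online = new Map<string, (m: Message) => void>(); // connected clients
  private mailbox = new Map<string, Message[]>();           // held for offline clients

  deliver(m: Message) {
    const push = this.online.get(m.to);
    if (push) {
      push(m); // relay immediately, store nothing
    } else {
      // keep a copy only until the addressed client comes back for it
      this.mailbox.set(m.to, [...(this.mailbox.get(m.to) ?? []), m]);
    }
  }

  connect(client: string, push: (m: Message) => void): Message[] {
    this.online.set(client, push);
    const held = this.mailbox.get(client) ?? [];
    this.mailbox.delete(client); // like POP3: hand the mail over, then forget it
    return held;
  }
}
```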
For the same reason that ISPs don't eliminate the need for servers and server-side storage, moving all your storage to the edge is usually a bad idea. You're basically describing a serverless P2P social network, and with it come all of the pitfalls of strictly-P2P apps: searching becomes prohibitively expensive, and if your client goes offline (e.g. you get on an airplane or your phone runs out of battery), reliably catching up can be problematic. How would this work for PeerTube, for example? Would every client that cared about PeerTube need to keep a copy of every PeerTube video on every PeerTube server, just in case you wanted to search it? My phone would fill up instantly. Would my phone just save an address to look up the video from the original author's personal device? Not only does that sound like a security nightmare, but also RIP to the author's data usage caps if they published from their mobile device.
I think that servers are needed. IDK if we need servers to partially mirror each other like Mastodon does, but I think that hosting the content on the servers themselves is the right practical move. And given that we're more or less boxed into a federated client-server architecture, I think we're getting it about as good as we're going to get, until we choose some standards body to govern how to expose capabilities.
I do think that the right approach is to have a discoverable API where clients can find out what capabilities a certain piece of content has, and what those capabilities mean. Just like how JavaScript feature detection is far better than user-agent detection, servers can integrate with any social network that supports some minimum set of capabilities, and clients can present all capabilities to the user (while ignoring unsupported ones) regardless of the originating social network. But we're not there yet; we need that standard first, and major players need to agree on it.
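As a sketch, the client side could be as simple as this (the `capabilities` field and its values are hypothetical):

```typescript
// Illustrative only: a client asking "what can I do with this object?"
// instead of "which server software produced it?" (the user-agent approach).
interface FederatedObject {
  id: string;
  capabilities: string[]; // hypothetical field, e.g. ["reply", "vote", "edit"]
}

function renderActions(obj: FederatedObject, supported: Set<string>): string[] {
  // show only what both the object and this client understand,
  // and silently ignore the rest, whatever network the object came from
  return obj.capabilities.filter((c) => supported.has(c));
}

renderActions(
  { id: "https://example.social/post/1", capabilities: ["reply", "vote", "boost"] },
  new Set(["reply", "vote"]),
); // -> ["reply", "vote"]
```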
No, that sounds exactly like Nostr, which is a lot more practical and cheaper to run than a Mastodon server, and actually scales quite well.
No. You just need to move the application state to the edge. Storage itself can still be in content-addressable data servers, like IPFS, magnet links or plain-old (S)FTP servers.
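A minimal sketch of the content-addressing part (plain SHA-256 here just to show the shape; real systems like IPFS use multihash CIDs):

```typescript
import { createHash } from "node:crypto";

// Blobs are stored under their hash, and application messages carry only
// that hash, not the data itself.
function contentAddress(data: Uint8Array): string {
  return "sha256-" + createHash("sha256").update(data).digest("hex");
}

const blob = new TextEncoder().encode("...some attachment bytes...");
const address = contentAddress(blob);
// Any node that holds the blob (an IPFS gateway, a seeder, a plain old FTP
// server) can serve it; the address stays valid wherever the bytes live.
```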
When someone posts a picture on Mastodon, the picture itself is not replicated, just a link to it. Now, imagine that your "smart client" version of Mastodon (or PeerTube, or Lemmy) wants to post a picture. How would it work?
If by "servers" you mean "nodes in the network that are more stable and have stronger uptime/performance guarantees", I agree 100%. If by "servers" you mean "centralized nodes responsible for application logic", then I'd say you can easily be proven wrong by actual examples of distributed apps.
Looking at Nostr, I generally like the architecture, although it's very similar in broad strokes.
I like the simplification and separation of responsibilities. I don't like using self-signing as an identification mechanism for a social network.
But crucially, it seems to have the same problem we're discussing here: different social networks based on that protocol have different message schemas and capabilities, making them incompatible.
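To make that concrete, here is roughly how it shows up in Nostr terms: the event envelope is shared, but what `content` means depends entirely on the `kind` (the values below are made up, and the kind numbers follow common Nostr conventions):

```typescript
// Nostr events share one envelope (id, pubkey, created_at, kind, tags, content, sig).
type NostrEvent = {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
};

// A plain microblogging note (kind 1): content is just text.
const note: Partial<NostrEvent> = { kind: 1, content: "hello fediverse" };

// A long-form article (kind 30023 by convention): content is markdown,
// with metadata pushed into tags.
const article: Partial<NostrEvent> = {
  kind: 30023,
  tags: [["title", "On protocols"], ["published_at", "1700000000"]],
  content: "# On protocols\n...",
};

// A client that only understands kind 1 has no idea what to do with the
// article: same protocol and relays, different schema and capabilities.
```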