It kind of makes me think of how odd it would have been if many of the old forums had named themselves things like bookclub.phpbulletin.com or metalheads.vbulletin.net.
There’s nothing wrong with doing that, obviously, but it’s struck me as another interesting quirk of fediverse instances/sites. Generally as soon as you visit them you can tell by the site interface or an icon somewhere what software they’re using.
For the same reason that ISPs don’t solve the need for servers and server-side storage, moving all your storage to the edge is usually a bad idea. You’re basically describing a serverless P2P social network, and with it come all of the pitfalls of strictly P2P apps: searching becomes prohibitively expensive, and if your client goes offline (e.g. you’re on an airplane or your phone runs out of battery), reliably catching up can be problematic. How would this work for PeerTube, for example? Would every client that cared about PeerTube need to keep a copy of every video from every PeerTube server, just in case you wanted to search it? My phone would fill up instantly. Would my phone just save an address to look up the video from the original author’s personal device? Not only does that sound like a security nightmare, but also RIP to the author’s data caps if they published from their mobile device.
I think that servers are needed. IDK if we need servers to partially mirror each other like Mastodon does, but I think that hosting the content on the servers themselves is the right practical move. And given that we’re more or less boxed into a federated server-client architecture, I think we’re getting it about as good as we’re going to get, until we choose some standards body to govern how to expose capabilities.
I do think that the right approach is to have a discoverable API where clients can find out what capabilities a certain piece of content has, and what those capabilities mean. Just like how JavaScript feature detection is far better than user-agent detection, servers could integrate with any social network that supports some minimum set of capabilities, and clients could present all capabilities to the user (while ignoring unsupported ones) regardless of the originating social network. But we’re not there yet; we need that standard first, and major players need to agree on it.
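To make the feature-detection analogy concrete, here’s a rough sketch of what a client might do. No such standard exists, so the endpoint, the capability strings, and the shapes below are all made up for illustration:

```typescript
// Hypothetical sketch: every name here (endpoint, capability strings, shapes)
// is invented, since no agreed capability standard exists yet.
type Capability = "reply" | "boost" | "edit" | "poll";

interface ContentDescriptor {
  id: string;
  capabilities: Capability[]; // what this piece of content claims to support
}

// Feature detection, not user-agent detection: ask the content what it can do
// instead of guessing based on which network it came from.
async function renderActions(server: string, contentId: string) {
  const res = await fetch(`${server}/api/content/${contentId}/capabilities`);
  const desc: ContentDescriptor = await res.json();

  const supportedByThisClient: Capability[] = ["reply", "boost"];
  // Show only the intersection; silently ignore capabilities we don't know.
  return desc.capabilities.filter((c) => supportedByThisClient.includes(c));
}
```

The point is that the client never needs to know whether the content came from a Mastodon-like, Lemmy-like, or PeerTube-like network, only which capabilities it advertises.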
No, that sounds exactly like Nostr, which is a lot more practical and cheaper to run than a Mastodon server and actually scales quite well.
No. You just need to move the application state to the edge. Storage itself can still be in content-addressable data servers, like IPFS, magnet links or plain-old (S)FTP servers.
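For instance, a smart client could pin the media bytes to a content-addressable store and only pass around the reference. A rough sketch, assuming the ipfs-http-client JS package and a reachable IPFS node (the post shape is made up for illustration):

```typescript
// Sketch only: assumes the ipfs-http-client package and a running IPFS node;
// the "post" object shape is hypothetical.
import { create } from "ipfs-http-client";

// Application state (the post) lives at the edge; the picture's bytes live in
// a content-addressed store that anyone can mirror or re-pin.
async function postPicture(caption: string, picture: Uint8Array) {
  const ipfs = create({ url: "http://127.0.0.1:5001" }); // local or hosted node
  const { cid } = await ipfs.add(picture); // bytes -> content address

  // What travels through the social protocol is just metadata plus the
  // content address, not the picture itself.
  return {
    kind: "picture",
    caption,
    media: `ipfs://${cid.toString()}`,
    createdAt: Date.now(),
  };
}
```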
When someone posts a picture on Mastodon, the picture itself is not replicated, just a link to it. Now, imagine that your “smart client” version of Mastodon (or PeerTube, or Lemmy) wants to post a picture. How would it work?
If by “servers” you mean “nodes in the network that are more stable and have stronger uptime/performance guarantees”, I agree 100%. If by “servers” you mean “centralized nodes responsible for application logic”, then I’d say you can easily be proven wrong by actual examples of distributed apps.
Looking at Nostr, I generally like the architecture, although in broad strokes it’s very similar to the federated setup we already have.
I like the simplification and the separation of responsibilities. I don’t like using self-signed keys as the identity mechanism for a social network.
But crucially, it seems to have the same problem we’re discussing here: different social networks built on that protocol have different message schemas and capabilities, making them incompatible with each other.
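To spell out what I mean: in Nostr (per NIP-01) an event is just a self-signed blob whose `kind` number decides what it means, so two apps can both “speak Nostr” and still not understand each other’s content. Rough shape, from my reading of the spec (field comments are illustrative):

```typescript
// Shape of a Nostr event per NIP-01 (comments are illustrative).
interface NostrEvent {
  id: string;        // sha256 hash of the serialized event
  pubkey: string;    // the author's public key IS the identity
  created_at: number;
  kind: number;      // 1 = short text note; other apps define their own kinds
  tags: string[][];
  content: string;
  sig: string;       // Schnorr signature by that key; no account, no server login
}

// A microblogging client and, say, a long-form or marketplace client can both
// relay these events yet ignore each other's content entirely, because each
// only understands its own kind numbers and content schemas.
```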