this post was submitted on 12 Jun 2025
Fediverse
The only real option is to charge people.
Hosting isn't free. It costs money to run a website. That money needs to come from somewhere. If it doesn't come from advertisers, it must come from users.
There could be a variety of options for that, but I like the simple annual subscription: each and every user pays. Spread the cost out as much as possible. It's only fair.
Provided there's an "upper limit" on the scale we're talking about, I've often wondered: couldn't private users also host a sharded copy of a server instance to offset load and bandwidth? Like Folding@home, but for site support.
I realize this isn't exactly feasible today for most infra, but if we're trying to "solve" the problem, imagine if you were able to voluntarily give up, say, 100 GB of HDD space and have your PC host 2-3% of an instance's server load for a month. Or maybe just act as a CDN node for the media- and bandwidth-heavy parts to ease server load, while the server code runs on different machines.
This kind of distributed "load balancing" on private hardware may be a complete pipe dream today, but I think it might be the way federated services need to head. I can tell you that if we could get it to be as simple as volunteers spinning up a Docker container and dropping the generated WireGuard key and their IP into a "federate" form to hand the mini-node over to an instance, it would be a lot easier to support sites this way.
Speaking for myself, I have enough bandwidth and space that I could lend some compute and offset a small amount of traffic. But the full load of a popular instance would be more than my simple home setup is equipped for. If contributing hosting were as easy as contributing compute, it could have a chance to catch on.
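To put rough numbers on that "small amount of traffic" idea, here's a back-of-envelope sketch. Every figure is a made-up placeholder, not a measurement from any real instance:

```python
import math

# Hypothetical numbers: what a mid-sized instance might push per month,
# and what one volunteer's home connection might comfortably spare.
instance_monthly_tb = 20     # instance's total media traffic per month (TB)
volunteer_monthly_tb = 0.5   # spare upstream one volunteer could donate (TB)

# Ceiling division: you can't recruit a fraction of a volunteer.
nodes_needed = math.ceil(instance_monthly_tb / volunteer_monthly_tb)
print(f"{nodes_needed} volunteer nodes to cover {instance_monthly_tb} TB/month")
```

Under those assumptions it only takes a few dozen volunteers, which is the scale a mid-sized community could plausibly recruit.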
That's not really how it works. And if it were made to work that way, it would still be a relatively small group donating their own compute resources to subsidize everyone else, which is what we already have, and that isn't very scalable.
I responded above, but my point was exactly that it doesn't work that way today: as we rethink content delivery, we should also rethink hosting distribution. What I was describing is not a "well gee, we should just do this" type of suggestion, but an extremely high-level idea for server orchestration over a public-private swarm that may or may not ever be feasible, and definitely doesn't really exist today.
Imagine if it were somewhat akin to BitTorrent, only the user could voluntarily give the instance remote control for orchestration management. The orchestration server toggles each node's contents so that, let's say, 100% of them carry the most-accessed data (hot content, <100 GB), and the rest is sharded so they each carry 10% of the archived data, making each node require <1 TB total. And the node client is given X number of pinned CPUs that can be used for additional server compute tasks to offload various queries.
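The assignment logic in that scheme can be sketched in a few lines. This is purely illustrative (the node names, shard count, and `assign_shards` function are all hypothetical, not part of any existing Fediverse software): every node pins the full hot set, and the archive is split into ten shards dealt out round-robin so each node carries roughly a tenth of it.

```python
ARCHIVE_SHARDS = 10  # archive split so each node holds ~10% of it


def assign_shards(node_ids):
    """Map each volunteer node to the content it should carry.

    Every node replicates the hot set (<100 GB of most-accessed data);
    archive shards are dealt round-robin so the archive is spread
    evenly across the swarm. Sorting makes the plan deterministic.
    """
    plan = {}
    for i, node in enumerate(sorted(node_ids)):
        plan[node] = {
            "hot": True,                           # hot content on all nodes
            "archive_shard": i % ARCHIVE_SHARDS,   # ~10% of archived data each
        }
    return plan


plan = assign_shards(["node-a", "node-b", "node-c"])
print(plan["node-a"])
```

A real orchestrator would also have to handle nodes joining and leaving, and re-replicate shards that fall below some redundancy threshold, but the core bookkeeping is this simple.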
See, I'm fully aware this doesn't really exist in this form. But thinking of it like a Kubernetes cluster or an HA web client, it seems like it should be possible somehow to build this in a way where the client only needs to install and say yes to contribute. If we could cut it down to that level, then you could start serving the site like a P2P BitTorrent swarm, and these power-user clients could become nodes.