chiisana

joined 1 year ago
[–] chiisana@lemmy.chiisana.net 1 points 1 year ago

The economy is a huge driving factor in these decisions.

This is largely due to the unusually low interest rate environment we’ve grown used to over the past 20+ years. With low interest rates, everyone at every level is taking on more loans. The US Govt just uncapped the debt ceiling recently so it could borrow more; companies are used to getting “free” money from investors who take on cheap loans in hopes of a big payout; individuals are leveraging themselves further and further into mortgages because property values keep going up, and dammit, I’m working full time and I demand annual international vacations.

All this money pumped into the system creates more opportunities to earn more, which results in more spending (partially driven by the ever-growing pile of loans), which leads to a higher rate of inflation. And to tame that inflation, the only tool we have at our disposal is to dial up the interest rate.

A higher interest rate means less money floating around; on the corporate side, it means less free money from investors, because they’re no longer getting the cheap loans. As a result, companies have to try to extract more out of what they’ve already got to keep things afloat. More ads, fewer freebies/discounts, rising prices, etc. are just the beginning. Pretty soon, those with less-than-solid business models will be unable to keep their entire staff, even larger waves of layoffs will follow, and eventual closures after that.

There’s no pushing back on this; individual pushback or not, companies are more likely to answer their shareholders’ demand for profit than users’ demand for the way things were. More and more companies will follow suit and/or go under. What’s coming isn’t going to be pretty, sadly.

[–] chiisana@lemmy.chiisana.net 13 points 1 year ago

I am not a lawyer and definitely not anyone’s lawyer providing legal advice, but I’ve done a little bit of work around implementing GDPR compliance at my jobby job. My understanding is that you must inform users when you’re sending their data out to third-party processors, and they, too, must be GDPR compliant.

So if your instance is sending information that is covered under GDPR out to other instances, you must call out those instances as data processors, and ensure they’re compliant before you add them. When you add one, I think you’re also supposed to inform users that you’re adding a new data processor via some form of notice addressed to them. Furthermore, at the time of a deletion, you’d also need to inform your data processors of the request, so that their compliance workflow can be followed.

In my mind, strictly speaking, what Lemmy is doing could work if the “cluster” of GDPR-compliant instances doesn’t federate out to the broader non-GDPR-compliant instances. So: lots of manually maintaining the allowed-instances list, and each time you add a new instance, you’d need to inform your users… Once you receive a deletion request, you’d need to use the ban-with-purge option to purge everything on your instance, and pass that on to all federated instances. The key difficulty here is ensuring your federated instances honour your purge request, which is hard to verify.
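
If you go the allowlist route, older lemmy.hjson configs exposed it directly; something roughly like this (option names are from the 0.16/0.17 era as I recall, and newer versions manage the list from the admin UI, so treat it as a sketch):

```
federation: {
  enabled: true
  # only federate with instances you've vetted as GDPR-compliant processors
  allowed_instances: ["trusted.example", "another-trusted.example"]
}
```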

The end result is that you’d essentially be creating your own bubble of the fediverse, isolated from the rest of it… which is not an ideal outcome, but that’s what happens when you let regulators make the rules on things they don’t understand…

[–] chiisana@lemmy.chiisana.net 1 points 1 year ago

If there is a way to handle auth, then you can maybe put it behind an SSO platform (Keycloak, FusionAuth, Authelia, etc.) and slap a billing system on top (I’m not familiar with open source solutions here; I used to use commercial ones like Blesta and WHMCS) to activate/deactivate user accounts. You’d need to do a lot of the integration and heavy lifting yourself, though.
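
For the SSO half, a minimal docker compose sketch might look like this (credentials are placeholders; the billing system would then flip accounts on/off through Keycloak’s admin API):

```
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    command: start-dev   # dev mode; use "start" with proper TLS in production
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: change-me   # placeholder
    ports:
      - "8080:8080"
```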

[–] chiisana@lemmy.chiisana.net 3 points 1 year ago

Slap Cloudflare Tunnel in front of your web services and call it quits?
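
Something along these lines in compose, assuming you’ve already created the tunnel in the Zero Trust dashboard (the token is a placeholder):

```
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=eyJ...   # placeholder; issued when you create the tunnel
    restart: unless-stopped
```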

[–] chiisana@lemmy.chiisana.net 2 points 1 year ago

I believe the way it works is that the moment you interact with something, every instance with at least one user subscribed to the community you’re interacting with gets a ping with the activity associated with you. Since each message is signed, WebFinger is used to verify your user’s authenticity (preventing me from posting something offensive while pretending to be from your instance). That would then allow bad actors to quickly collect a list of instances to run bots on.
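
For the curious, the WebFinger lookup is just a GET against a well-known endpoint; for example (using my own handle):

```
curl 'https://lemmy.chiisana.net/.well-known/webfinger?resource=acct:chiisana@lemmy.chiisana.net'
```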

The payoff is minimal, but theoretically they’d be able to shill for things just like they already do on Reddit.

[–] chiisana@lemmy.chiisana.net 2 points 1 year ago

Yeah. I found the official compose… let’s say it leaves a lot to be desired… so we had slightly different approaches to similar problems. I don’t want their built-in nginx, so I override it with a simple alpine container that quits. But I also chose to use an override for exactly the reason you mentioned (in the event they add new config stuff to the compose file).
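
In case it helps anyone, my override trick looks roughly like this (I’m assuming the service is called “proxy” in the official compose; adjust the name to match):

```
# docker-compose.override.yml
services:
  proxy:
    image: alpine:latest
    entrypoint: ["true"]   # starts, exits 0 immediately, stays out of the way
    restart: "no"
```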

Glad to see I’m not the only one finding there’s room for improvement. Thanks for sharing your thoughts with the community!

[–] chiisana@lemmy.chiisana.net 3 points 1 year ago (2 children)

I recommend using the override feature in docker compose instead of editing the compose file directly. That way it will be easier to pull the updated file from GitHub and receive updates.
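
A minimal sketch of the idea: leave the upstream docker-compose.yml untouched and put local tweaks in docker-compose.override.yml, which docker compose merges in automatically (the environment tweak below is just a made-up example):

```
# docker-compose.override.yml
services:
  lemmy:
    environment:
      - RUST_LOG=warn   # hypothetical local customization; upstream file stays pristine
```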

It may be an interesting idea to incorporate some of my findings into your doc as well: https://lemmy.chiisana.net/post/264

[–] chiisana@lemmy.chiisana.net 1 points 1 year ago

Hm... What else... Did you modify the docker-compose.yml by chance? Are lemmy and pictrs sharing the same network? Other than that I'd be at a loss.
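
One quick way to check the network question (container names are assumptions; use whatever docker ps shows):

```
docker inspect -f '{{json .NetworkSettings.Networks}}' lemmy pictrs
```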

[–] chiisana@lemmy.chiisana.net 1 points 1 year ago (2 children)

Did you run the chown -R command for the volume bind mount? Is pictrs actually up and running? Check docker ps | grep pictrs to see. Also check the logs to see if there are any errors.
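
Roughly (the container name may differ, so grab the real one from docker ps; the 991 uid is what the official docs use for pictrs, if memory serves):

```
docker ps | grep pictrs                  # is the container up?
docker logs --tail 50 pictrs             # any startup errors?
sudo chown -R 991:991 volumes/pictrs     # fix ownership on the bind mount
```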

[–] chiisana@lemmy.chiisana.net 0 points 1 year ago (1 children)

If you're planning to go BSD, or to buy all the drives you're ever going to have in the cluster up front, then ZFS is great. Otherwise, be mindful of the hidden cost of ZFS: you can't just grow a RAIDZ vdev one drive at a time. Personally, for my home server, because I'm still gradually adding drives, I'm using mdraid in RAID6 with 8 × 8TB WD Reds/HGST Ultrastars, and I'm loving the room for activities.

Having said that, regardless of the solution you go with, since you've got only 4 drives, higher RAID levels (and equivalents thereof, such as RAIDZ2) might be out of reach, as you'd be "wasting" a lot of space for the extra peace of mind. If I were in your situation, I'd probably use RAID5 (despite the old "RAID 5 is dead in 2009" predictions, or did they keep chugging on after 2013?) for less important data (it sustains 1 drive failure), or RAID10 if I needed more performance (which, with luck of the draw, can potentially sustain 2 drive failures, depending on which drives fail).
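
For reference, building a 4-drive RAID5 with mdadm looks something like this (device names are placeholders), and growing it later one drive at a time is exactly why I like mdraid:

```
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
sudo mkfs.ext4 /dev/md0
# later, when a 5th drive arrives:
sudo mdadm --add /dev/md0 /dev/sdf
sudo mdadm --grow /dev/md0 --raid-devices=5
sudo resize2fs /dev/md0   # then grow the filesystem to match
```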

[–] chiisana@lemmy.chiisana.net 1 points 1 year ago

Hey that’s me!

If you’re excited, hop on, as I have! The protocol-related scaling issue will resolve itself over time; a few of us are throwing ideas out and hoping some will stick with the developers.

But just bear in mind that you’re not helping the network scale: the federation load on the big servers will not be alleviated by you (or me) having an extra instance.

[–] chiisana@lemmy.chiisana.net 0 points 1 year ago (1 children)

The “header is expired” issue is a big part of the current federation problem. And whether you know it or not, you’ve just made the matter worse. You’re not to blame, though. I’ve done it too, along with many other people self-hosting our own instances.

The way federation currently works, each write action must be federated outwards to every federated instance. A comment reply, such as this one, must be federated outwards by the hosting instance. An instance receiving a federation event must also discard messages that are older than 10 seconds.

Here lies the problem… popular instances like lemmy.world and lemmy.ml have thousands of users and thousands of federated servers. Yesterday, when I checked, lemmy.world had 3600 users per day and 2200+ federated servers. If there’s a really popular post on a very popular community, and 10% of the users comment on it? The lemmy.world server must send 360 × 2200 = 792,000 (700K+) outbound federation event messages. Each one of these is sent over HTTPS via TCP, so they can’t all be sent at the same time; the messages are put into a queue where the federation workers send them out. Each worker sends its message, and because HTTPS is over TCP, it is not fire-and-forget: the worker must wait for acknowledgement of the packets. If an instance owner gets bored because they’re not getting all the messages and shuts down? Now the worker needs to wait for that to error out, thereby delaying messages further down the queue. If it had to wait more than 10 seconds? Everyone further down the queue will just get expired messages, because the event is already outdated.

So now you’ve already created an instance and are adding to the load on the network, just like me. What can you do? Keep your server online in a fast data center. Use Cloudflare to reduce latency. That way, at least your server isn’t going to introduce too much latency for other servers down the queue. Hopefully the devs figure out something to make the process better. I’ve already put a more scalable notification-fleet architecture proposal on GitHub; let’s see if they implement that or change other requirements of the system.
