My company uses it for some of our legacy on-prem hosting, but a lot of that is being actively decommissioned.
Most of the world can't feasibly install non-Apple-approved apps on iOS without paying $100 a year, so something like that would probably never catch on in iOS land
It's an Android app patcher that removes ads and adds some other quality of life patches. Primarily for YouTube but it supports several other apps as well.
On YouTube it also adds things like integrated SponsorBlock, extrapolating dislikes, actual resolution buttons, and the option to disable shorts.
What you're talking about is "source-available," i.e. being able to read the source code but not having the licensing rights to redistribute it or make changes.
"Open-source" means that the right to modify and distribute changes is built into the code's license.
For example, Minecraft Java is source-available in that decompiling Java bytecode is trivial - enough so that tools exist which can easily generate a source code dump. However, actually distributing that source code dump is technically illegal and falls under piracy, so it isn't open source.
Edit: I didn't see your edit, this comment is kind of pointless, oh well
I mean, I don't see any reason why a Wayland compositor couldn't support it; it's pretty cursed either way, though.
There's a screenshot in one of the other comments in this thread (from owenfromcanda, I think the other screenshots are fake)
X11 already supports this lol
I think they're talking about the image
The code is open source. Nothing is obscured.
"Security-by-obscurity" is a phrase used for any measure that is useless once you know how it works. In this case it's hoping that a troll doesn't know about the specific hardcoded rules. None of the rules in PieFed actually work if you are at all aware of them.
Thanks for clarifying, I guess I misremembered the shadowbanning part. I think I was mixing together the fact that reputation isn't really transparent (users' reputation can change even from attempting to upload an image that gets flagged, and the vague error means they'll probably try multiple times without realizing they're being moderated) and the fact that communities can autoban any user whose global reputation is low enough.
I still think the security-by-obscurity approach to moderation is inherently flawed though, and I hate to imagine how the dev approaches actual account security if that's their approach to moderation.
Honestly I would consider [user-obscured] hardcoded ~~shadow~~banning just as bad.
Just because I'm a little closer to agreeing with the PieFed dev's opinions doesn't mean that I'd support ~~shadow~~banning someone because the trivially-evaded checks caught a false positive in the crossfire. PieFed's auto-moderation/social scoring is pretty much the textbook definition of security-by-obscurity. The second anyone knows how it works, it's useless. It will pretty much exclusively catch people who just wanted to post a harmless meme or something.
At least (for now) Dessalines isn't hardcoding his tankie beliefs into Lemmy's source code.
Edit: Blaze is right, it isn't shadowbanning, but the rest of my point still stands, added the [] part to clarify
There were a few (not exhaustive, since it's been a few months since I looked through the source code; some of this might have changed, and there are a few other checks that I'm forgetting):
- 4chan screenshots (specifically anything that OCR identified as having "Anonymous #(number)" in it) were banned. Honestly this one is fine as a toggle, but I think for a while it was just on by default in the code
- any community that had specific words in its name was blocked at the instance level. I think "meme" was there, a few swear words, and a few carryover reddit meme community names (196, I think nottheonion was also there, anything with "shitpost" in the name, etc.)
- There's a hidden karma/social credit score, based on a user's interactions and net total karma, that is kept from the user and gets impacted by any moderation actions, including some of the automated hardcoded ones (e.g. even trying to upload an image that gets flagged by the hardcoded checks). In some cases the user is not informed of these changes (the image upload will just appear as a generic image upload error)
- users with a low enough net score can be automoderated at both the community and instance level
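The hidden-score mechanic above could be sketched roughly like this (a hypothetical illustration, not the real implementation; the threshold value and names are made up):

```python
# Hypothetical sketch of a hidden reputation score that silently drops when an
# upload is flagged, while the user only sees a generic error. NOT real PieFed
# code - the threshold and field names are invented for illustration.

AUTOBAN_THRESHOLD = -10  # assumed cutoff, purely for the example

class User:
    def __init__(self, name: str):
        self.name = name
        self.hidden_reputation = 0  # never shown to the user

def try_upload(user: User, image_flagged: bool) -> str:
    if image_flagged:
        user.hidden_reputation -= 1          # silent penalty
        return "Error: image upload failed"  # generic message, explains nothing
    return "Upload successful"

def is_autobanned(user: User) -> bool:
    """Community/instance automoderation based on the hidden score."""
    return user.hidden_reputation <= AUTOBAN_THRESHOLD

u = User("example")
print(try_upload(u, image_flagged=True))  # generic error, no explanation
print(u.hidden_reputation)                # -1, and the user is never told why
```

Because the error is generic, a user whose image keeps getting flagged would plausibly retry and dig the hidden score deeper without ever knowing moderation was involved.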
Edit: the other thing is, a lot of this hardcoded moderation isn't documented anywhere outside of the code, likely because a lot of the measures would be useless if people knew how they worked
Edit 2: updated based on Blaze's reply from another comment, I misremembered the shadow banning, I was confusing it with the federation errors that occur when one user blocks another


As someone who has worked with a pretty large C# codebase and several smaller ones, I've found it to be one of the least efficient languages to program in. This maybe isn't a technical fault of the language, but the way Microsoft encourages structuring C# projects means that once you get past a certain size, even simple MRs will have 10-20 files changed. There is so much boilerplate caused by .NET that even things like Java Spring Boot just don't have (and even then, I'd consider Java to be a pretty bloated language in terms of boilerplate).
That's ignoring the fact that the ecosystem surrounding .NET is a lot more enterprise-y, meaning a good portion of libraries require paid licenses to use.