this post was submitted on 23 Jun 2023
1815 points (96.7% liked)

Lemmy

12572 readers
32 users here now

Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

founded 4 years ago

Please. Captcha by default. Email domain filters. Auto-block federation from servers that don't comply, by default. Urgent.

meme not so funny

And yes, to refute some comments: this post is being upvoted by bots. It took a single computer, not "thousands of dollars".

[–] xtremeownage@lemmyonline.com 102 points 1 year ago* (last edited 1 year ago) (4 children)

Sigh....

All of those ideas are bad.

  1. Captchas are already pretty weak at combating bots. It's why reCAPTCHA and others were invented. The people who run bots spend lots of money for their bots to.... bot. They have access to quite advanced modules for decoding captchas. They also pay kids in India and Africa pennies to just create accounts on websites.

I am not saying captchas are completely useless; they do block the lowest-hanging fruit, which currently means most of the script kiddies.

  2. Email domain filters.

Issue number one has already been covered below/above by others: you can use a single Gmail account to register a basically unlimited number of accounts.

Issue number two: spammers LOVE to use Office 365 for spamming. Most of the spam I find actually comes from *.onmicrosoft.com inboxes. It's quick for them to spin one up on a trial, and by the time the trial is over, they have moved on to another inbox.

  3. Auto-blocking federation for servers that don't follow the above two broken rules

This is how you destroy the platform. When you block legitimate users, they will think the platform is broken, because none of their comments go through and they can't see posts properly.

They don't know this is due to admins defederating servers. All they see is broken content.

At this time, your best option is for admin approvals, combined with keeping tabs on users.

If you notice an instance is harboring spammers (let's use my instance as an example; my contact information is right in the sidebar), and you see spam, WORK WITH US and we will help resolve the issue.

I review my reports. I review spam on my instance. None of us are going to be perfect.

There are very intelligent people who make lots of money creating "bots" and "spam". NOBODY is going to stop all of it.

The only way to resolve this, is to work together, to identify problems, and take action.

Nuking every server that doesn't have captcha enabled is just going to piss off the users and ruin this movement.

One possible thing that might help:

An easy way to list the registered users on a server. I noticed that actually doesn't appear to be easily accessible without hitting the REST APIs or querying the database directly.

[–] eyy@lemm.ee 22 points 1 year ago

Haven't you heard of the "Swiss cheese" model of security?

The best way to ensure your server is protected is to unplug it from the Internet and put it in an EMF-shielded Faraday cage.

There's always a tradeoff between security, usability and cost.

captchas can be defeated, but that doesn't mean they're useless - they increase the level of friction required to automate malicious activity. Maybe not a lot, but along with other measures, it may make it tricky enough to circumvent that it discourages a good percentage of bot spammers.

[–] sugar_in_your_tea@sh.itjust.works 10 points 1 year ago (6 children)

I disagree. I think the solution is moderation. Basically, have a set of tools that identify likely bots, and let human moderators make the call.

If you require admins to manually approve accounts, admins will either automate approvals or stop approving. That's just how people tend to operate imo. And the more steps you put between people wanting to sign up and actually getting an account, the fewer people you'll get to actually go through with it.

So I'm against applications. What we need is better moderation tools. My ideal would be a web of trust. Basically, you get more privileges the more trusted people that trust you. I think that should start from the admins, then to the mods, and then to regular users.

But lemmy isn't that sophisticated. Maybe it will be some day, IDK, but it's the direction I'd like to see things go.
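The web-of-trust idea above could be sketched like this; a hypothetical illustration (the function name, edge format, and the idea of "distance from an admin" as the trust level are all assumptions, not anything Lemmy implements):

```python
from collections import deque

def trust_levels(edges, admins):
    """Breadth-first search from the admins over 'who trusts whom' edges.

    Returns each reachable user's distance from an admin; a smaller
    number means more trusted (0 = admin, 1 = mod-like, and so on).
    """
    levels = {admin: 0 for admin in admins}
    queue = deque(admins)
    while queue:
        user = queue.popleft()
        for trusted in edges.get(user, []):
            if trusted not in levels:
                levels[trusted] = levels[user] + 1
                queue.append(trusted)
    return levels

edges = {"admin": ["mod1", "mod2"], "mod1": ["alice"], "alice": ["bob"]}
print(trust_levels(edges, ["admin"]))
# users never vouched for by anyone simply don't appear, i.e. no privileges
```

Privileges could then be gated on the level: say, downvoting only below level 3. The nice property is the one the comment describes: trust flows outward from admins, through mods, to regular users.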

[–] tetris11@lemmy.ml 8 points 1 year ago* (last edited 1 year ago) (1 children)

HackerNews does something similar where new users don't have the ability to down vote until they have earned enough upvotes from other users.

We could extend that, and literally not allow upvotes to properly register if the user is too new. The vote would still show on the comment/post, but the ranking of the comment/post will only be influenced by seasoned users. That way, users could scroll down a thread, see a very highly upvoted comment bang in the middle, and think for themselves "huh, probably bots".

Very hierarchical solution, heavily reliant on the mods not playing favourites or having their own agenda.
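The "votes display but don't rank" scheme could look roughly like this; purely illustrative, and the 30-day seasoning threshold is an assumption, not HackerNews's or Lemmy's actual rule:

```python
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE_DAYS = 30  # assumed threshold for a "seasoned" account

def vote_totals(votes, now):
    """Return (displayed_score, ranking_score) for a post.

    Every vote shows up in the displayed score, but only votes from
    accounts older than the threshold influence the ranking score.
    """
    displayed = 0
    ranking = 0
    for value, account_created in votes:
        displayed += value
        if now - account_created >= timedelta(days=MIN_ACCOUNT_AGE_DAYS):
            ranking += value
    return displayed, ranking

now = datetime(2023, 6, 23)
votes = [
    (+1, datetime(2023, 1, 1)),   # seasoned account
    (+1, datetime(2023, 6, 22)),  # day-old account, likely a bot
    (+1, datetime(2023, 6, 22)),
]
print(vote_totals(votes, now))  # (3, 1): all votes shown, one counts for rank
```

That produces exactly the tell the comment describes: a comment showing a high displayed score while sitting low in the thread, which a reader can interpret as "huh, probably bots".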

[–] dessalines@lemmy.ml 4 points 1 year ago (5 children)

This is all 100% correct. People have already written captcha-bypassing bots for lemmy, we know from experience.

The only way to stop bots, is the way that has worked for forums for years: registration applications. At lemmy.ml we historically have blocked any server that doesn't have them turned on, because of the likelihood of bot infiltration from them.

Registration applications have 100% stopped bots here.

[–] homesnatch@lemmy.one 3 points 1 year ago

Captcha is like locking your car... There are still ways to get in, but it blocks the casual efforts.

> I review my reports. I review spam on my instance. None of us are going to be perfect.

Do you review upvote bots? The spam account is easily replaceable; the coordinated army of upvote bots may be harder to track down.

[–] 2xsaiko@discuss.tchncs.de 87 points 1 year ago (4 children)

As someone with his own email domain, screw you for even thinking about suggesting domain filters.

[–] TWeaK@lemm.ee 37 points 1 year ago (1 children)

Blacklist domain filters are fine, it's whitelist domain filters that get small personal domains.

[–] Tywele@dataterm.digital 19 points 1 year ago (2 children)

And blacklist domain filters are pretty useless when you can create unlimited emails with johndoe+anything@gmail.com
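Gmail ignores dots and anything after a `+` in the local part, so a blacklist comparing raw strings misses every alias. A sketch of canonicalizing addresses before comparing them (illustrative only; other providers have their own aliasing rules):

```python
def canonical_email(address):
    """Collapse Gmail plus-aliases and dot-variants to one canonical form."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0]   # drop "+anything" suffix
        local = local.replace(".", "")   # Gmail ignores dots in the local part
    return f"{local}@{domain}"

print(canonical_email("John.Doe+lemmy123@gmail.com"))  # johndoe@gmail.com
```

A signup filter could then rate-limit or dedupe on the canonical form instead of the raw address.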

[–] TheFrenchGhosty@lemmy.pussthecat.org 14 points 1 year ago (1 children)

This. Domain whitelists are the worst thing you can do.

[–] MavTheHack@lemmy.fmhy.ml 9 points 1 year ago

I second this

[–] Aux@lemmy.world 35 points 1 year ago (7 children)

Lemmy is just getting started and way too many people are talking about defederation for any reason possible. What is even the point of a federated platform if everyone's trying to defederate? If you don't like federation so much, go use Facebook or something.

[–] Nerd02@forum.basedcount.com 12 points 1 year ago (23 children)

This. Defed is not the magic weapon that will solve all your problems. Captcha and email filters should be on by default though.

[–] Greenskye@lemmy.world 3 points 1 year ago

My understanding from the Beehaw defed is that more surgical moderation tools just don't exist right now (and likely won't for a while unless the two Lemmy devs get some major help). Admins really only have a single nuclear option to deal with other instances that aren't able to tackle the bot problem.

Personally I don't see defederating as a bad thing. People and instances are working through who they want to be in their social network. The well managed servers will eventually rise to the top with the bot infested and draconian ones eventually falling into irrelevance.

As a user this will result in some growing pains since Lemmy currently doesn't offer a way to migrate your account. Personally I already have 3 Lemmy accounts. A good app front end that minimizes the friction from account switching would greatly help these growing pains.

[–] draughtcyclist@programming.dev 33 points 1 year ago (1 children)

Everyone is talking about how these things won't work. And they're right, they won't work 100% of the time.

However, they work 80-90% of the time and help keep the numbers under control. Most importantly, they're available now. This keeps Lemmy from being a known easy target. It gives us some time to come up with a better solution.

This will take some time to sort out. Take care of the low hanging fruit first.

[–] InfiniteFlow@lemmy.world 4 points 1 year ago

Plus, if this becomes the "bot wild west" at such an early stage, the credibility hit will be a serious hindrance to future growth...

[–] jollyroger@lemmy.dbzer0.com 30 points 1 year ago* (last edited 1 year ago) (5 children)

The admin https://lemmy.dbzer0.com/u/db0 from the lemmy.dbzer0.com instance has possibly made a solution: a chain-of-trust system where instances whitelist each other and build larger whitelists to contain the spam/bot problem, instead of constantly blacklisting. Admins and mods may want to take a look at their blog post explaining it in more detail: https://dbzer0.com/blog/overseer-a-fediverse-chain-of-trust/

[–] star_boar@lemmy.ml 13 points 1 year ago (1 children)

db0 probably knows what they're talking about, but the idea that there would be an "Overseer Control Plane" managed by one single person sounds like a recipe for disaster

[–] jollyroger@lemmy.dbzer0.com 4 points 1 year ago* (last edited 1 year ago) (1 children)

I hear you. For what it's worth, it's mentioned at the end of the blog post: the project is open source, people can run their own Overseer API and create stricter or looser whitelists, and instances can be registered to multiple chains. Don't mistake my enthusiasm for self-run open social media platforms for trying to promote a single tool as the be-all and end-all solution. Under the Swiss cheese security model, this could be another tool in the toolbox to curb the annoyance to a point where spam and bots become less effective.

[–] prlang@lemmy.world 6 points 1 year ago (1 children)

Couldn't agree more. I gotta say though, I kinda find it funny that the pirate server is coming up with practical solutions for dealing with spam in the fediverse. I guess it shouldn't surprise me, though; y'all have been dealing with this distributed trust thing for a while now, eh?

[–] Ech@lemmy.world 13 points 1 year ago (2 children)

So defeating the point of Lemmy? Nah, that's a terrible "solution" that will only serve to empower big servers imposing on smaller or even personal ones.

[–] prlang@lemmy.world 8 points 1 year ago (2 children)

It's probably the opposite. I'd say that right now, the incentive for a larger server with an actual active user base is to move to a whitelist-only model, given the insane number of small servers with no activity but incredibly high account registrations happening right now. When the people controlling all of those bot accounts start flexing their muscle and flooding the fediverse with spam, it'll become clear that new and unproven servers have to be cut off. This post straight up proves that; it's the most upvoted Lemmy post I've ever seen.

If I'm right and the flood of spam cometh, then a chain of trust is literally the only way a smaller instance will ever get to integrate with the wider ecosystem. Reaching out to someone and having to register to be included isn't too much of an ask for me. Hell, most instances require an email for a user account, and some even do questionnaires.

[–] lukas@lemmy.haigner.me 9 points 1 year ago (1 children)

Neat, but I appreciate the email model of spam protection more than simple dumb whitelists. I won't list my domain on any whitelist, as whitelists discourage what Lemmy needs the most: people who run their own instances. At the end of the day, spammers will automate the process of listing themselves, and the person who runs their own instance has to go around doing everything manually.

[–] prlang@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (2 children)

The blog post dives into how it's hard for spammers to automate adding themselves onto the whitelist, because it's a chain of trust. You have to have an existing instance owner vouch for you, which they can revoke at any time. A spammer couldn't do things like run a "clean" instance and then whitelist off that, because presumably someone would try to contact the owner of the supposedly "clean" instance to get them to remove the spam. When they don't respond, or only partially address the issue, it's possible to pull rank and contact the person further up the chain of trust.

In short, it's real people talking to each other about spam issues, but in a way that scales, so that the owner of one instance doesn't need to personally trust and know every other instance owner. It should allow small single-user instances to get set up about as easily as any other instance. Everyone just has to know and talk to someone along the chain.

The real downside of the system is that people are human, and cliques are going to form that may defederate swathes of the fediverse from each other. I kinda think that's going to happen anyways though.

A chain of trust is the best proposal I've seen for addressing the scaling issues associated with the fediverse. I'm not associated with that guy at all, just saying I like his idea.

-- edit

On second thought, getting your instance added to the chain of trust is literally no more difficult than signing up for an instance with a questionnaire. It's basically that but at the instance level instead of the user level.
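The vouch-and-revoke mechanics described above could be sketched like this; the class and method names are invented for illustration, and this is not the actual Overseer codebase:

```python
class ChainOfTrust:
    """Instances stay trusted only while a path of vouches
    connects them back to the chain's root."""

    def __init__(self, root):
        self.root = root
        self.vouches = {}  # instance -> the instance that vouched for it

    def vouch(self, voucher, newcomer):
        # Only the root or an already-vouched instance may vouch for others.
        if voucher == self.root or voucher in self.vouches:
            self.vouches[newcomer] = voucher

    def revoke(self, instance):
        self.vouches.pop(instance, None)

    def is_trusted(self, instance):
        # Walk vouches back toward the root; any break means untrusted.
        while instance != self.root:
            if instance not in self.vouches:
                return False
            instance = self.vouches[instance]
        return True

chain = ChainOfTrust("dbzer0.com")
chain.vouch("dbzer0.com", "lemmy.world")
chain.vouch("lemmy.world", "small.instance")
print(chain.is_trusted("small.instance"))  # True
chain.revoke("lemmy.world")                # voucher pulled: subtree falls out
print(chain.is_trusted("small.instance"))  # False
```

Note the property the comment relies on: revoking one misbehaving voucher automatically drops everything downstream of it, which is what makes "pull rank up the chain" effective.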

[–] mlaga97@lemmy.mlaga97.space 7 points 1 year ago (1 children)

Obviously biased, but I'm really concerned this will lead to it becoming infeasible to self-host with working federation and result in further centralization of the network.

Mastodon has a ton more users, and I'm not aware of it having had to resort to IRC-style federation whitelists.

I'm wondering if this is just another instance of kbin/lemmy moderation tools being insufficient for the task and if that needs to be fixed before considering breaking federation for small/individual instances.

[–] Raiden11X@programming.dev 6 points 1 year ago (2 children)

He explained it already. It looks at the ratio of users to posts. If your "small" instance has 5000 users and 2 posts, it would probably assume a lot of those users are spam bots. If your instance has 2 users and 3 posts, it would assume your users are real. There's a ratio, and the admin of each server that utilizes it can control the level at which it assumes a server is overrun by spam accounts.
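That heuristic could look roughly like this; the threshold value and names are assumptions for illustration, not the Overseer project's actual code:

```python
SUSPICION_RATIO = 20  # assumed: flag instances with > 20 users per post

def looks_suspicious(users, posts, threshold=SUSPICION_RATIO):
    """Flag an instance whose user count vastly outpaces its activity."""
    activity = max(posts, 1)  # avoid division by zero on brand-new instances
    return users / activity > threshold

print(looks_suspicious(5000, 2))  # True: classic sign of bulk bot signups
print(looks_suspicious(2, 3))     # False: tiny but genuinely active
```

As the comment says, the admin consuming the whitelist would tune the threshold rather than accept a fixed one.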

[–] ulu_mulu@lemmy.world 5 points 1 year ago (1 children)

Who controls the Overseer Control?

[–] prlang@lemmy.world 3 points 1 year ago

It's been answered further down: yeah, it's that one bloke who made it, https://lemmy.dbzer0.com/u/db0 . The project's also open source, though, so anyone can run their own Overseer Control server with their own chain-of-trust whitelist. I suspect many whitelists will pop up as the fediverse evolves.

[–] vapeloki@lemmy.ml 30 points 1 year ago (2 children)

First of all: I'm posting this from my .ml alt, because I can't do it from my .world main. I only found that out because I was waiting for a response on a comment where I was sure the OP would respond. After searching, I found out that my comment and my DMs never federated to .ml.

So, that said: I'm all for defederating bad instances, and I'm all for separation where it makes sense. BUT:

  • If an instance is listed on join-lemmy, it should work the way a normal user would expect
  • We are not ready for this yet; we are missing features (more details below)
  • Even instances that officially require applications can be spam instances (admins can do whatever they want), so we would need protection against this anyway. Hell, one could implement spam bots that speak the federation protocol directly and wouldn't even need Lemmy for this...

Minimal features we need:

  • Show users that the community they are trying to interact with is on a server that has defederated the user's instance
  • Forbid sending DMs to servers that are not fully federated

Currently, all we do is make Lemmy look broken.

And before someone starts with "Then help!": I do, in my field of expertise. I'm a PostgreSQL professional, so I have built a setup to measure Lemmy's SQL performance and usage patterns, and I will contribute everything I can to make Lemmy better.

(I tried Rust, but I'm too much of a C++ guy to bring something useful to the table beyond database stuff, sry :( )

[–] Taxxor@lemm.ee 13 points 1 year ago* (last edited 1 year ago)

> Show users that the community they try to interact with is on a server that defederated the users instance

Not only that; also show users when comments in any community are made by users from an instance that your instance has defederated.

Because you (instance A) may very well only be able to see half of the comments in a thread of a community on instance B, because half of them were made by users of instance C, which instance A defederated.

Right now the comments just don't get copied to your instance at all, which also means follow-up comments aren't visible even if they are not from defederated instances.
Instead, I'd like everything to be copied and then flagged based on defederations. Not showing the original content, and instead showing a hint that a comment can't be seen because of defederation, would be enough.
At least that way we know that we're missing something.

Because simply not showing it also leads to confusion about why you see fewer comments than users on another instance.

And this goes both ways. The user from the other instance (who can still see your comment because his instance didn't defederate yours) should also be able to see, directly by looking at my post before commenting, that I'm from an instance that defederated his, maybe in the form of a symbol or a note next to my username, so that he knows it doesn't make sense to comment on my post.
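The difference between today's behaviour (drop the comment entirely) and the proposed one (keep it, but redact it) can be sketched as follows; the data model here is invented purely for illustration:

```python
BLOCKED_INSTANCES = {"spam.example"}

def render_thread(comments):
    """Keep every comment in the thread, but redact ones from
    defederated instances instead of silently dropping them."""
    rendered = []
    for author_instance, text in comments:
        if author_instance in BLOCKED_INSTANCES:
            rendered.append("[hidden: comment from a defederated instance]")
        else:
            rendered.append(text)
    return rendered

thread = [("lemmy.ml", "parent comment"),
          ("spam.example", "reply"),
          ("lemm.ee", "reply to the hidden reply")]
print(render_thread(thread))
```

Because the redacted entry still occupies its slot, the third comment keeps its parent and renders in context, which is exactly the "at least we know we're missing something" behaviour the comment asks for.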

[–] retiolus@lemmy.cat 3 points 1 year ago (1 children)

Interesting, I hadn't thought of that. I guess it's technically possible to post on a community without even having an account on any server...?

[–] vapeloki@lemmy.ml 4 points 1 year ago

In theory, yes. You only need an ActivityPub library and some lines of code.

[–] fubo@lemmy.world 14 points 1 year ago* (last edited 1 year ago) (7 children)

Look up the origins of IRC's EFNet, which was created specifically to exclude a server that allowed too-easy federation and thus became an abuse magnet.

[–] FrostBolt@kbin.social 4 points 1 year ago (2 children)

Now that’s a name I’ve not heard in a long time… a long time

[–] fubo@lemmy.world 7 points 1 year ago

Folks running new federated networks gotta learn this stuff!

https://en.wikipedia.org/wiki/EFnet

[–] llama@midwest.social 10 points 1 year ago (1 children)

Looking at you, oceanbreeze.earth; your instance is worth defending from bots

[–] tyfi@wirebase.org 10 points 1 year ago

Mine got blown up a day or two ago, before I had enabled captcha. About 100 accounts were created before I started getting rate-limited (or similar) by Google.

Better admin tools are definitely needed to handle the scale. We need a single pane of glass to see signups and other user details. Hopefully it's in the works.

[–] Phantom_Engineer@lemmy.ml 9 points 1 year ago (2 children)

Isn't this what all you lemmy-worlders got mad at Beehaw for doing? I don't think it's unreasonable to ask for a small statement from people as an anti-spam measure (a sort of advanced captcha), though of course the big problem there is reviewing all the applications in a timely manner. Still, I think there's room for more and less exclusive instances. The tools are there for instance owners to protect their instances however they choose.

[–] stupidmanager@insane.dev 7 points 1 year ago (2 children)

For larger instances, this makes sense. For us smaller instances, just add a custom application requirement that isn't about Reddit. Though I'll be adding captcha too if they keep at it (every hour, 2 bots apply).

I've seen bots trying to create accounts; it's the same boring message about needing a new home because of some "random reason about reddit". I'll borrow a quote from Mr Samuel Jackson: "I don't remember asking you a god damn thing about reddit"... and the application is denied.

[–] Clompsh@mander.xyz 5 points 1 year ago (2 children)

I mentioned Reddit in an application. I feel like that would come up in legitimate applications at the moment. Is it easy to tell the bots from actual applicants?

[–] stupidmanager@insane.dev 3 points 1 year ago

In my case, yes. I asked for the reason to be written in code (working or not). Since I intend to run a DevOps-focused instance, there's no excuse. Most humans would read the application, and I don't feel bad for denying based on this requirement.

Also helps that after 8 of those bot apps, the message is very similar. If there was a human in that mix, they can DM me and ask for reconsideration.

[–] boonhet@lemm.ee 5 points 1 year ago (1 children)

> Email domain filters

Okay, Gmail should definitely be blacklisted, because it's extremely easy to abuse. Microsoft email domains too. What domains should be allowed, then?

[–] Juliie@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

We need a distributed, decentralized, curated whitelist that new servers apply to, hopefully with a response within a week at most, after some kind of precisely defined anti-spam/bot audit. Also periodic re-checks of existing servers.

Like crypto has a transaction ledger to confirm transactions, some kind of not-a-bot confirmation ledger chain.

Weak side: if bot servers get on the whitelist somehow, they can poison it.

[–] OptimusPrimeRib@lemmy.world 3 points 1 year ago (1 children)

What's the purpose of these bots?

[–] ilovesatan@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (1 children)

Spam and vote manipulation are the two biggest concerns.
