this post was submitted on 09 Jul 2023
1336 points (96.8% liked)

Fediverse

17683 readers

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".


founded 4 years ago

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard, and most implementations do not currently have an effective way of filtering them out. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

top 50 comments
[–] PetrichorBias@lemmy.one 285 points 1 year ago* (last edited 1 year ago) (32 children)

This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

one main, one alt, one "professional" (linked publicly on my website), and five for bots (created optimistically but never properly run). I had all eight signed in on my third-party app and could easily manipulate votes on my own posts.

I feel like this is what happened when you'd see posts with hundreds or thousands of upvotes but only 20-ish comments.

There needs to be a better way to handle this, but I'm unsure whether it can truly be solved. Botnets are a problem across all social media (my undergrad thesis many years ago was on detecting botnets on Reddit using graph neural networks).

Fwiw, I have only one Lemmy account.

[–] impulse@lemmy.world 113 points 1 year ago (2 children)

I see what you mean, but there are also a large number of lurkers who will only vote and never comment.

I don't think it's implausible for a highly upvoted post to have only a small number of comments.

[–] SGforce@lemmy.ca 60 points 1 year ago

If it's a meme or shitpost there isn't anything to talk about

[–] PetrichorBias@lemmy.one 33 points 1 year ago (2 children)

Maybe you're right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who's active on the bot-detection subreddits can pitch in.

[–] RedCowboy@lemmy.world 19 points 1 year ago (1 children)

I agree completely. 3k upvotes on the front page with 12 comments just screams vote manipulation

load more comments (1 replies)
load more comments (1 replies)
[–] simple@lemmy.world 26 points 1 year ago (7 children)

Reddit had ways to automatically catch people trying to manipulate votes, though, at least the obvious ones. A friend of mine posted a reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know if something like that can be implemented in the Fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted or distrusted a user or instance is.

[–] cynar@lemmy.world 27 points 1 year ago (4 children)

An automated trust rating will be critical for Lemmy in the longer term. It's the same arms race that email has to fight. There should be a linked trust system of both instances and users. The instance 'vouches' for its users' trust scores. However, if other instances collectively disagree, then the trust score of the instance itself also takes a hit. Other instances can then use this information to judge how much to allow from users on that instance.
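
To make the idea above concrete, here's a minimal sketch in Python. All scores, instance names, and thresholds are made up for illustration; nothing here reflects how Lemmy actually works today.

```python
# Minimal sketch of a linked instance/user trust system, as described above.
# Scores, names, and thresholds are illustrative assumptions only.

instance_trust = {"lemmy.world": 0.9, "freshbots.example": 0.6}
user_trust = {("lemmy.world", "alice"): 0.8, ("freshbots.example", "shill00001"): 0.7}

def effective_trust(instance: str, user: str) -> float:
    """The instance 'vouches' for its users: a user's effective trust is
    scaled by how much other instances trust their home instance."""
    return user_trust.get((instance, user), 0.5) * instance_trust.get(instance, 0.5)

def report_disagreement(instance: str, penalty: float = 0.1) -> None:
    """If other instances collectively disagree with an instance's vouching,
    the instance's own trust score takes a hit too."""
    instance_trust[instance] = max(0.0, instance_trust.get(instance, 0.5) - penalty)

def accept_vote(instance: str, user: str, threshold: float = 0.3) -> bool:
    """A receiving instance can use the combined score to decide how much
    to allow from users on that instance."""
    return effective_trust(instance, user) >= threshold
```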

load more comments (4 replies)
[–] TWeaK@lemm.ee 15 points 1 year ago (1 children)
load more comments (1 replies)
load more comments (5 replies)
[–] BrianTheeBiscuiteer@lemmy.world 21 points 1 year ago (2 children)

Yes, I feel like this is a moot point. If you want it to be "one human, one vote" then you need to use some form of government login (like id.me, which I've never gotten to work). Otherwise people will make alts and inflate/deflate the "real" count. I'm less concerned about "accurate points" and more concerned about stability, participation, and making this platform as inclusive as possible.

[–] PetrichorBias@lemmy.one 16 points 1 year ago* (last edited 1 year ago) (2 children)

In my opinion, the biggest (and quite possibly most dangerous) problem is someone artificially pumping up their own ideas. For all the users who sort by active/hot, this would be quite problematic.

I'd love to see some social media research groups consider how to detect and potentially eliminate this issue on Lemmy, given that Lemmy is quite new and still malleable at this point (compared to other social media). For example, if they think some metric X would be a good idea to include in all metadata to increase the chances of detection, it may be possible to add it to the source code for posts / comments / activities.

I know a few professors and researchers who work on social media and associated technologies; I'll go talk to them when they're in their offices on Monday.

load more comments (2 replies)
load more comments (1 replies)
[–] Thorny_Thicket@sopuli.xyz 19 points 1 year ago (1 children)

I always had 3 or 4 reddit accounts in use at once: one for commenting, one for porn, one for discussing drugs, and one for pics that could be linked back to me (of my car, for example). I also made a new commenting account about once a year so that if someone recognized me they wouldn't be able to find every comment I've ever written.

On Lemmy I have just two for now (the other is for porn), but I'm probably going to make one or two more at some point.

load more comments (1 replies)
[–] InternetPirate@lemmy.fmhy.ml 19 points 1 year ago* (last edited 1 year ago) (1 children)

I feel like this is what happened when you'd see posts with hundreds or thousands of upvotes but only 20-ish comments.

Nah, it's the same here on Lemmy. It's because the algorithm only accounts for votes and not for user engagement.

load more comments (1 replies)
[–] AndrewZabar@beehaw.org 16 points 1 year ago

On Reddit there were literally bot armies through which thousands of votes could be cast instantly. It will become a problem if votes have any actual effect.

It's fine if they're only there as an indicator, but if votes are what determine popularity and prioritize visibility, it will become a total shitshow at some point, and rapidly. So yeah, better to have a defense system in place ASAP.

load more comments (26 replies)
[–] Boozilla@lemmy.world 102 points 1 year ago (7 children)

The lack of karma helps some. There's no point in trying to rack up the most points for your account(s), which is a good thing. Why waste time on the lamest internet game when you can engage in conversation with folks on Lemmy instead?

[–] Protoknuckles@lemmy.world 137 points 1 year ago (2 children)

It can still be used to artificially pump up an idea. Or used to bury one.

load more comments (2 replies)
[–] Steve@compuverse.uk 40 points 1 year ago

Maybe you move public perception of a product or a political goal, or push a narrative of some kind. Astroturfing, basically.

[–] muddybulldog@mylemmy.win 34 points 1 year ago* (last edited 1 year ago) (3 children)

The lack of karma is a bit of a fallacy. The default Lemmy UI doesn't display it, but the karma system appears to be fully built.

load more comments (3 replies)
[–] bassdrop321@feddit.de 28 points 1 year ago (5 children)

Corporations could use it to push their ads to the top

load more comments (5 replies)
[–] reallynotnick@lemmy.world 23 points 1 year ago (9 children)

Maybe I'm misunderstanding karma, but Memmy appears to show the total upvotes I've gotten for comments and posts. Isn't that basically karma?

load more comments (9 replies)
load more comments (2 replies)
[–] Wander@yiffit.net 93 points 1 year ago (3 children)

In case anyone's wondering, this is what we instance admins can see in the database. In this case it's an obvious example, but this can be used to detect patterns of vote manipulation.
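
To make that concrete, here's a rough sketch (in Python rather than SQL) of the kind of pattern an admin could look for in the vote rows they can see. The record layout (voter instance, whether the voting account has ever posted anything) is an assumption for illustration; real schemas will differ.

```python
# Sketch of a pattern check over the vote rows an admin can see.
# Each vote is a dict like {"post": ..., "instance": ..., "has_content": bool};
# this shape is hypothetical, not Lemmy's actual schema.
from collections import Counter

def suspicious_posts(votes, min_votes=50, max_share=0.8):
    """Flag posts where most upvoters come from a single instance and have
    never posted or commented themselves (vote-only throwaways)."""
    by_post = {}
    for v in votes:
        by_post.setdefault(v["post"], []).append(v)

    flagged = []
    for post, vs in by_post.items():
        if len(vs) < min_votes:
            continue
        top_instance, top_count = Counter(v["instance"] for v in vs).most_common(1)[0]
        vote_only_share = sum(1 for v in vs if not v["has_content"]) / len(vs)
        if top_count / len(vs) > max_share and vote_only_share > max_share:
            flagged.append((post, top_instance))
    return flagged
```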

[–] toish@yiffit.net 42 points 1 year ago (1 children)

“Shill” is a rather on-the-nose choice for a name to iterate with haha

[–] Evergreen5970@beehaw.org 23 points 1 year ago* (last edited 1 year ago)

I appreciate it, good for demonstration and just tickles my funny bone for some reason. I will be delighted if this user gets to 100,000 upvotes—one for every possible iteration of shill#####.

load more comments (2 replies)
[–] sparr@lemmy.world 71 points 1 year ago (14 children)

Web of trust is the solution. Show me vote totals that only count people I trust, plus the people they trust at 90%, the people those people trust at 81%, and so on (the 0.9 multiplier should be configurable if possible!)
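
A rough sketch of that decaying web of trust, using the 0.9 multiplier from the comment; the trust graph and the votes below are purely illustrative.

```python
# Decaying web-of-trust sketch: people I trust count fully, people they trust
# at 0.9, the next hop at 0.81, and so on. Graph and votes are illustrative.
from collections import deque

def trust_weights(me: str, trusts: dict, decay: float = 0.9, max_hops: int = 6) -> dict:
    """Breadth-first walk of who-trusts-whom; each extra hop multiplies the weight by `decay`."""
    weights, queue = {me: 1.0}, deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for friend in trusts.get(user, []):
            if friend not in weights:
                weights[friend] = decay ** hops  # direct trustees: 1.0, then 0.9, 0.81, ...
                queue.append((friend, hops + 1))
    return weights

def weighted_score(votes: dict, weights: dict) -> float:
    """votes: {user: +1 or -1}. Anyone outside my web of trust counts for nothing."""
    return sum(v * weights.get(user, 0.0) for user, v in votes.items())

# Example: I trust bob, bob trusts carol; a stranger's vote is ignored entirely.
trusts = {"me": ["bob"], "bob": ["carol"]}
print(weighted_score({"bob": 1, "carol": 1, "stranger": 1}, trust_weights("me", trusts)))
# -> 1.9 (bob at 1.0, carol at 0.9, stranger at 0.0)
```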

load more comments (13 replies)
[–] popemichael@lemmy.world 70 points 1 year ago (3 children)

You can buy 700 votes anonymously on reddit for really cheap

I don't see that it's a big deal, really. It's the same as it ever was.

[–] Valmond@lemmy.ml 47 points 1 year ago (2 children)

Over a hundred dollars for 700 upvotes O_o

I wouldn't exactly call that cheap 🤑

On the other hand, ten or twenty quick downvotes on an early answer could swing things I guess ...

[–] popemichael@lemmy.world 35 points 1 year ago (13 children)

For the companies who want a huge advantage over others, $100 is nothing in an advertising budget.

I have a small business and I do $1000 a week in advertising.

[–] OtakuAltair@lemmy.world 23 points 1 year ago* (last edited 1 year ago)

Yeah, 700 upvotes soon after a post is made could easily shoot it up to the top of even a popular sub for a few days (especially with the lack of mod tools right now), with others upvoting it purely because it already has a lot of upvotes.

load more comments (12 replies)
load more comments (1 replies)
load more comments (2 replies)
[–] czarrie@lemmy.world 60 points 1 year ago (5 children)

The nice thing about the federated universe is that, yes, you can bulk-create user accounts on your own instance - and that server can then be defederated by other servers when it becomes obvious that it's going to create problems.

It's not a perfect fix and, as this post demonstrated, it's only really effective after a problem has been identified. At least for vote manipulation coming from across servers, though, the software could act on it: if it detects that, say, 99% of new upvotes are coming from a server created yesterday with one post, it could at least flag it for a human to review.
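
Sketched out, that flag-for-review heuristic might look something like this. The thresholds and the instance metadata fields (first-seen date, post count) are assumptions for illustration, not a real Lemmy API.

```python
# Sketch of the "flag it for a human" heuristic above. Thresholds and the
# instance metadata fields are hypothetical.
from datetime import datetime, timedelta

def instances_needing_review(upvote_instances, instance_info,
                             share_threshold=0.9,
                             min_age=timedelta(days=7),
                             min_posts=10):
    """upvote_instances: one instance name per new upvote on a post.
    Flags instances that supply almost all of those upvotes while being
    both very young and nearly empty."""
    total = len(upvote_instances)
    flagged = []
    for instance in set(upvote_instances):
        share = upvote_instances.count(instance) / total
        info = instance_info.get(instance, {})
        age = datetime.utcnow() - info.get("first_seen", datetime.utcnow())
        if share >= share_threshold and age < min_age and info.get("posts", 0) < min_posts:
            flagged.append(instance)
    return flagged
```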

[–] two_wheel2@lemm.ee 22 points 1 year ago (1 children)

It actually seems like an interesting problem to solve. Instance runners have the SQL database with all the voting records, so finding manipulative instances seems a bit like a machine learning problem to me.

load more comments (1 replies)
load more comments (4 replies)
[–] Flashoflight@lemmy.world 47 points 1 year ago (1 children)

This is really important to call out. Also, though, the bots have gotten so good it would be hard to tell the difference. To be honest, I'm pretty sure reddit was teeming with them and it didn't really bother me. lol

load more comments (1 replies)
[–] 7heo@lemmy.ml 45 points 1 year ago* (last edited 1 year ago) (4 children)
load more comments (4 replies)
[–] fermuch@lemmy.ml 43 points 1 year ago (3 children)

Votes were just a number on reddit too... There was no magic behind them, and as Spez showed us multiple times, even reddit modified counts to make some posts tell a different story.

And remember: reddit used to have a horde of bots just to become popular.

Everything on the internet is or can be fake!

load more comments (3 replies)
[–] YoBuckStopsHere@lemmy.world 41 points 1 year ago (2 children)

Reddit admins manipulated vote counts all the time.

[–] authed@lemmy.ml 27 points 1 year ago (1 children)

Reddit also created fake users to post fake content... at least in reddit's early days.

load more comments (1 replies)
load more comments (1 replies)
[–] gthutbwdy@lemmy.sdf.org 30 points 1 year ago (2 children)

I think people often forget that federation is not a new thing; it was one of the first designs for internet communication services. Email, which predates the Internet, is also a federated network, and it's the most widely adopted mode of Internet communication of them all. It also had spam issues, and there were many proposed solutions for that.

The one I liked the most was hashcash, since it requires no trust. It was the first proof-of-work system and an inspiration for blockchains.
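
For anyone unfamiliar, here's roughly what hashcash-style proof of work looks like. This is a toy illustration, not the real hashcash spec; the difficulty and the stamp format are made up, and an instance requiring a stamp per vote or per sign-up is only a hypothetical application.

```python
# Toy hashcash-style proof of work: the sender burns CPU finding a nonce whose
# hash has N leading zero bits, and anyone can verify it instantly.
import hashlib
from itertools import count

def mint(resource: str, bits: int = 18) -> int:
    """Find a nonce so that sha256(resource:nonce) starts with `bits` zero bits."""
    for nonce in count():
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def verify(resource: str, nonce: int, bits: int = 18) -> bool:
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# Hypothetical use: require a valid stamp per vote or per new account.
stamp = mint("vote:post/12345:shill00001")
assert verify("vote:post/12345:shill00001", stamp)
```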

load more comments (2 replies)
[–] skullgiver@popplesburger.hilciferous.nl 30 points 1 year ago* (last edited 11 months ago) (8 children)

[This comment has been deleted by an automated system]

load more comments (8 replies)
[–] Andreas@feddit.dk 29 points 1 year ago (2 children)

Federated actions are never truly private, including votes. While it's inevitable that some people will abuse the vote viewing function to harass people who downvoted them, public votes are useful to identify bot swarms manipulating discussions.

load more comments (2 replies)
[–] Mikina@programming.dev 28 points 1 year ago (3 children)

This is something that will be hard to solve. You can't really effectively discern between a large instance with a lot of real users and an instance with a lot of fake users that it makes look like real users. Any kind of protection I can think of, for example one based on user activity, can simply be faked by the bot server.

The only solution I see is to just publish the vote% or vote counts per instance, since that's what the local server knows, and let us personally ban instances we don't recognize or care about, so their votes won't count in our feed.
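
That per-instance idea is easy to sketch. The vote data shape and the instance names below are invented for illustration; this is not how Lemmy stores or exposes votes.

```python
# Sketch of a per-instance vote breakdown plus a personal instance blocklist.
from collections import Counter

def per_instance_counts(votes):
    """votes: iterable of (instance, +1/-1). Returns the net score per instance."""
    counts = Counter()
    for instance, value in votes:
        counts[instance] += value
    return counts

def score_for_me(votes, blocked_instances):
    """The score shown in my feed, ignoring instances I've personally blocked."""
    return sum(v for instance, v in votes if instance not in blocked_instances)

votes = [("lemmy.world", 1), ("sopuli.xyz", 1)] + [("freshbots.example", 1)] * 98
print(per_instance_counts(votes))
# Counter({'freshbots.example': 98, 'lemmy.world': 1, 'sopuli.xyz': 1})
print(score_for_me(votes, {"freshbots.example"}))  # 2
```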

load more comments (3 replies)
[–] mintyfrog@lemmy.ml 27 points 1 year ago

PSA: internet votes are based on a biased sample of users of that site and bots

[–] deadsuperhero@lemmy.ml 26 points 1 year ago (1 children)

Honestly, thank you for demonstrating a clear limitation of how things currently work. Lemmy (and Kbin) probably should look into internal rate limiting on posts to avoid this.

I'm a bit naive on the subject, but perhaps there's a way to detect "over X votes from over X users from this instance" and basically invalidate them?

[–] jochem@lemmy.ml 19 points 1 year ago (2 children)

How do you differentiate between a small instance where 10 votes would already be suspicious vs a large instance such as lemmy.world, where 10 would be normal?

I don't think instances publish how many users they have and it's not reliable anyway, since you can easily fudge those numbers.

load more comments (2 replies)
[–] cypherpunks@lemmy.ml 18 points 1 year ago (1 children)
load more comments (1 replies)
[–] stevedidWHAT@lemmy.world 17 points 1 year ago (2 children)

You mean to tell me that copying the exact same system that Reddit was using and couldn’t keep bots out of is still vuln to bots? Wild

Until we find a smarter way or at least a different way to rank/filter content, we’re going to be stuck in this same boat.

Who’s to say I don’t create a community of real people who are devoted to manipulating votes? What’s the difference?

The issue at hand is the post ranking system/karma itself. But we’re prolly gonna be focusing on infosec going forward given what just happened

load more comments (2 replies)
[–] krnl386@lemmy.ca 16 points 1 year ago

Did anyone ever claim that the Fediverse is somehow a solution for the bot/fake vote or even brigading problem?

[–] hawkwind@lemmy.management 16 points 1 year ago

IMO, likes need to be handled with supreme prejudice by the Lemmy software, and a lot of thought needs to go into this. There are many cases where the software could reject a likely-fake like with near-zero chance of rejecting valid likes. Putting this policing on instance admins is a recipe for failure.

[–] Mesa@programming.dev 16 points 1 year ago* (last edited 1 year ago) (2 children)

I don't have experience with systems like this, but just as sort of a fusion of a lot of ideas I've read in this thread, could some sort of per-instance trust system work?

The more an instance interacts positively (posting, commenting, etc.) with a main instance 'A', the more that instance's reputation score gets bumped up on instance A. Then, use that score together with the ratio of votes from that instance to the total number of votes, in some function that determines the value of each vote cast.

This probably isn't coherent, but I just woke up, and I also have no idea what I'm talking about.

load more comments (2 replies)
[–] nekat_emanresu@lemmy.ml 15 points 1 year ago (5 children)

Upvotes aren't just a number; along with comments, they determine placement in the ranking algorithm. It's easy to censor an unwanted view by mass-downvoting it.
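
To illustrate why placement matters, here's a toy "hot" rank in the general style aggregators like Lemmy and Reddit use. The formula and constants are illustrative only, not Lemmy's actual ranking; the point is just that score feeds directly into visibility, so bought upvotes or mass downvotes move real posts.

```python
# Toy hot-rank: higher score and younger age rank higher. Constants are made up.
import math

def hot_rank(score: int, age_hours: float) -> float:
    return math.log(max(score, 1) + 2) / (age_hours + 2) ** 1.5

honest = hot_rank(score=40, age_hours=3)          # a normally-voted post
boosted = hot_rank(score=40 + 700, age_hours=3)   # same post plus 700 bought upvotes
buried = hot_rank(score=40 - 30, age_hours=3)     # same post mass-downvoted
print(honest, boosted, buried)  # the boosted copy ranks well above the buried one
```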

load more comments (5 replies)