this post was submitted on 28 Jul 2023
18 points (87.5% liked)

Lemmy

12524 readers

Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

founded 4 years ago

I'm sure this is a common topic, but the timeline moves pretty fast these days.

With bots looking more human than ever, I'm wondering what's going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots; verifying that someone is real would mean scrolling through all their comments and reading them carefully, one by one.

all 15 comments
[–] muddybulldog@mylemmy.win 7 points 1 year ago (2 children)

Somewhat of a loaded question, but if we need to scroll through their comment history meticulously to separate real from bot, does it really matter at that point?

Spam is spam, and we're all in agreement that we don't want bots junking up the communities with low-effort content. However, if they reach the point where it takes real effort to ferret them out, they must be successfully driving some sort of engagement.

I’m not positive that’s a bad thing.

[–] usrtrv@lemmy.ml 5 points 1 year ago (1 children)

I think we'll be in bad shape when you can't trust any opinions about products, media, politics, etc. Sure, shills currently exist, so everything you read already needs skepticism. But at some point bots will be able to flood the platform with very high-quality posts, and these will of course be lies to push a product or an ideology. The truth will be noise.

I do think this is inevitable, and the only real guard would be to move back to smaller social circles.

[–] muddybulldog@mylemmy.win 1 points 1 year ago (1 children)

I'm of the mind that the truth already is noise and has been for a long, long time. AI isn't introducing anything new; it's just enabling faster creation of agenda-driven content. Most people already can't identify the AI-generated content that's been spewing forth in years past. Most people aren't looking for quality content; they're looking for bias-affirming content. The overall quality is irrelevant.

[–] zer0@thelemmy.club -1 points 1 year ago (1 children)

The outcome is that people will ditch platforms like Lemmy and seek accurate information somewhere else.

[–] usernotfound@lemmy.ml 1 points 1 year ago

Where did you have in mind?

[–] HelloHotel@lemm.ee 1 points 1 year ago (1 children)

Things like ChatGPT are not designed to think using object relations like a human. They're designed to respond the way a human would (a speech cortex with no brain); they're made to figure out what a human would likely respond with, rather than give a well-thought-out answer.

Robert Miles can explain it better than I ever could:
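(To make the "predicting a response, not reasoning" point concrete: a toy next-word sampler, purely illustrative and nothing like ChatGPT's real architecture, already shows the basic idea. It only learns which word tends to follow which, then emits a plausible continuation; there is no reasoning step anywhere.)

```python
import random

# Tiny training "corpus" (hypothetical example data).
corpus = "the cat sat on the mat the cat ate".split()

# Count word -> possible-next-word pairs (a bigram table).
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def continue_text(word, length=4):
    """Extend `word` by repeatedly picking a statistically plausible next word."""
    out = [word]
    for _ in range(length):
        # No understanding here: just sample from what followed this word before.
        out.append(random.choice(table.get(out[-1], corpus)))
    return " ".join(out)
```

Real large language models replace the bigram table with a neural network trained on vastly more text, but the objective is the same flavor: produce likely-looking continuations.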

[–] PipedLinkBot@feddit.rocks 2 points 1 year ago (1 children)

Here is an alternative Piped link(s): https://piped.video/watch?v=w65p_IIp6JY

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] usernotfound@lemmy.ml 1 points 1 year ago

BOT! KILL IT!

[–] ezmack@lemmy.ml 1 points 1 year ago (1 children)

The horde aspect might make it easier. The ones on Twitter, at least, you can tell are basically just running the same script through a thesaurus. Twenty people leaving the same comment is a little more obvious than just one.

[–] usernotfound@lemmy.ml 1 points 1 year ago

That's why they're talking about the next generation.

With AI you can easily generate 100 different ways to say the same thing. And it's hard to distinguish a bot that's parroting someone else from a person who's repeating something they heard.