this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago

Yes, I do still use Twitter, and yes, I know it's X now. But more and more I see replies that are very obviously written by an LLM (and usually ChatGPT).

Like this thread I'm reading right now about how Finland closed all its borders (the post itself written by a human), but then the replies are like:

- It's important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved.

- That's a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved.

- That's an interesting development in Finland's immigration policies. It's important for countries to find a balance that takes into account economic, social, and security concerns.

etc. So yeah, very obviously an LLM. Very obviously ChatGPT, by the language alone.

So enlighten me: what are the people doing this hoping to achieve, other than me very swiftly clicking Block?

I see it more and more. Not that I care about X either way (for what it's worth, it has become a bot-infested platform), but this is using an LLM the 100% wrong way, for goals I can't imagine. It adds no context, no opinion, just noise.

I just can't find a scenario in which this is good or beneficial for anybody doing it (or reading it). But maybe it's just me.

Hmm??

FlishFlashman@alien.top · 1 point · 11 months ago

It's a lot cheaper than paying humans to spread propaganda.

Consider that the audience isn't you, it's people who lack discernment. It's like those scam emails. People with good judgement delete them.

The other audience is engagement algorithms.

lucid8@alien.top · 1 point · 11 months ago

Not only that, but the cost of additional fine-tuning is negligible for state actors. And current open-source LLM context lengths are just ideal for Twitter-length posts.

I also wouldn't be shocked if there are bot campaigns both for and against the issue at hand, run by the same groups, to confuse human onlookers and increase polarization.

NoidoDev@alien.top · 1 point · 11 months ago

Yes, state actors and companies were already able to do that. More important than the price going down for them is that this now allows many more groups and individuals to engage in it.