Yes, I do still use Twitter, and yes, I know it's X. But more and more I see these replies that are very obviously written by an LLM (and notably ChatGPT).
Like this thread I'm reading right now about how Finland closed all its borders (the thread itself is written by a human), but then the replies are like:
-It's important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved.
-That's a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved.
-That's an interesting development in Finland's immigration policies. It's important for countries to find a balance that takes into account economic, social, and security concerns.
etc... so yeah, very obviously LLM. Very obviously ChatGPT by the language too.
So enlighten me - what are the people doing this hoping to achieve, other than me very swiftly clicking Block?
I see it more and more. Not that I care about X either way (for what it's worth, it has become a bot-infested platform), but this is using LLMs the 100% wrong way - for goals I can't imagine. It adds no context, no opinion, just noise.
I just can't find a scenario in which this is good or beneficial for anybody doing it (or reading it). But maybe it's just me.
Hmm??
I plan to do the same thing. I've had this idea for quite some time, though not strictly just an LLM-based bot.
The reason? Political topics are often dominated by those who have the most resources, including the more ideological zealots. I have no intention of explaining things over and over again. Most people deserve to be handled by a bot if they want to engage in political discussions, because they're just a biobot themselves (NPC meme).
Also, countries and corporations have been able to hire PR agencies for quite some time. Then came training activists and paying people in India to post certain comments or vote in certain ways. A lot of politics has likely been dominated by that.
Obviously it's better to make this less obvious, but it also might not matter if certain people block such a bot, since the targets are people who are not yet convinced about certain issues. Just reading an argument they've never heard before might change their mind. The same might be true for simply not leaving them under the impression that everyone holds the same mainstream opinion.
This won't work, because you can argue with bots like ChatGPT and get them to agree with your political views. That doesn't work with real humans, though.