this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 10 months ago
Yes, I do still use Twitter, and yes, I know it's X. But more and more I see these replies that are incredibly obviously written by an LLM (and notoriously by ChatGPT).

Like this thread I'm reading right now about how Finland closed all its borders (the thread itself is written by a human), but then the replies are like:

-It's important for countries to manage their borders effectively while ensuring the safety and well-being of all individuals involved.

-That's a significant step to address the issue. Hoping for lasting solutions that prioritize the safety of all involved.

- That's an interesting development in Finland's immigration policies. It's important for countries to find a balance that takes into account economic, social, and security concerns.

etc... so yeah, very obviously an LLM. Very obviously ChatGPT, by the language too.

So enlighten me: what are the people doing this hoping to achieve, other than me very swiftly clicking Block on the user?

I see it more and more. Not that I care about X either way (for what it is worth, it can become a bot-infested platform), but this is using an LLM the 100% wrong way, for goals I can't imagine. It adds no context, no opinion, just noise.

I just can't find a scenario when this is good or beneficial for anybody doing it (or reading it). But maybe it's just me.

Hmm??

top 35 comments
[–] NoidoDev@alien.top 1 points 9 months ago (1 children)

I plan to do the same thing; I've had this idea for quite some time, though not strictly just an LLM-based bot.

The reason? Political topics are often dominated by those who have the most resources, including more ideological zealots. I have no intention of explaining things over and over again. Most people deserve to be handled by a bot if they want to engage in political discussions, because they're just a biobot themselves (NPC meme).

Also, countries and corporations have been able to hire PR agencies for quite some time, then moved on to training activists and paying people in India to post certain comments or vote in certain ways. A lot of politics has likely been dominated by that.

Obviously it's better to make it less obvious, but it also might not matter if certain people block such a bot, since the targets are people who are not as convinced about certain issues. Just reading an argument they've never heard before might change their mind. The same might be true for simply not being under the impression that everyone holds the same mainstream opinion.

[–] CheatCodesOfLife@alien.top 1 points 9 months ago

The reason? Political topics are often dominated by those who have the most resources, including more ideological zealots. I have no intention of explaining things over and over again. Most people deserve to be handled by a bot if they want to engage in political discussions, because they're just a biobot themselves (NPC meme).

This won't work, because you can convince bots like ChatGPT to agree with your political views. That doesn't work with real humans, though.

[–] alanshore222@alien.top 1 points 9 months ago

Because virtual assistants are hard to find. I put well over 200 hours into prompting to automate Instagram business accounts so I could fire 12 assistants.

[–] LocoLanguageModel@alien.top 1 points 9 months ago

Young and bored teenagers would get a nice chuckle seeing people unknowingly having convos with their bots online.

Imagine you hate political candidate A but love political candidate B. Imagine setting your bot up to trash A and promote B.

Even more entertaining would be to set up your bot to debate and waste people's time. Go to sleep and wake up to see your bot has been arguing with someone for 8 hours, wasting their time. That would be hilarious to a troll.

[–] AdamDhahabi@alien.top 1 points 9 months ago

Unethical practices: one-man shops attempting to artificially pump up the account's value, aiming for a sale later on.

[–] NelsonMinar@alien.top 1 points 9 months ago (2 children)

This sounds like political propaganda. Either someone astroturfing support for Finland's immigration policies. Or more likely, Russian efforts to stir up shit.

Twitter used to filter this stuff out but Musk stopped doing that.

[–] WaterdanceAC@alien.top 1 points 9 months ago

Yeah. My first thought was marketing, but in context it sounds more like fine tuning a disinfo bot based on engagement.

[–] BurnerAndTurn@alien.top 1 points 9 months ago (1 children)

Pretty hilarious, since his "goal" with buying Twitter was to get rid of bots. Now they're out in greater force than ever before. Wild stuff.

[–] FPham@alien.top 1 points 9 months ago

And he also made an LLM for the users to abuse... Wait until Twitter replies start sounding like The Hitchhiker's Guide to the Galaxy and quoting Vogon poetry, coz that's the "style" of Grok.

[–] Herr_Drosselmeyer@alien.top 1 points 9 months ago

Because a frightening amount of people still think Twitter matters.

[–] Disastrous_Elk_6375@alien.top 1 points 9 months ago (1 children)

Astroturfing just got orders of magnitude cheaper with the advent of LLMs. This, along with spam and advanced phishing, is one of the real and present dangers of this technology. It's a battle between content platforms and any bloke with an axe to grind, and it's probably a losing battle for the content platforms.

Genuine human to human interaction online is going to become rare and tedious. Can't even imagine what kind of captchas they'll have to come up with to fool the next generation of multimodal models.

[–] Key_Extension_6003@alien.top 1 points 9 months ago (3 children)

Almost like we should just go back to meaningful face to face communication.

[–] NoidoDev@alien.top 1 points 9 months ago

People will be more in bubbles with bots they agree with, bots which they might often know to be bots, and which are maybe framed as something likable, like an anime girl.

[–] uhuge@alien.top 1 points 9 months ago

over Skype, right?;)

[–] FPham@alien.top 1 points 9 months ago

Commu - what? Blasphemy....

[–] __SlimeQ__@alien.top 1 points 9 months ago

I'm not doing that but my guess is it's fun, easy, and cheap to do (only $8/mo!) and potentially lucrative if you can cheese a following somehow.

Using GPT is really lazy though, when it's so easy to train a custom 13B LoRA that will actually interact like a human.

[–] NachosforDachos@alien.top 1 points 9 months ago

Lazy promoting.

[–] FlishFlashman@alien.top 1 points 9 months ago (1 children)

It's a lot cheaper than paying humans to spread propaganda.

Consider that the audience isn't you, it's people who lack discernment. It's like those scam emails. People with good judgement delete them.

The other audience is engagement algorithms.

[–] lucid8@alien.top 1 points 9 months ago (1 children)

Not only that, but the cost of additional fine-tuning is negligible for state actors. And the current open-source LLM context length is just ideal for Twitter.

I also wouldn't be shocked if there are bot campaigns both for & against the issue at hand, by the same groups, to make it confusing for human onlookers and increase polarization

[–] NoidoDev@alien.top 1 points 9 months ago

Yes, state actors and companies were already able to do that. More important than the price going down for them is that this allows more groups and individuals to engage in it.

[–] rob10501@alien.top 1 points 9 months ago

The ultimate end game is selling persuasion.

Sentiment is scraped from Twitter, and trading and policy are ultimately derived from it.

[–] New_Lifeguard4020@alien.top 1 points 9 months ago

The example topic you mentioned is highly politically debatable. So the obvious reason is to manipulate and to spread a political agenda.

[–] kindacognizant@alien.top 1 points 9 months ago

Elon Musk has made it profitable

[–] User1539@alien.top 1 points 9 months ago (2 children)

The dead internet theory

Basically, it enhances your status in the algorithm, so it's worth having some bots that will talk you up. Like creating AI friends to tell everyone how cool you are. But, since it's largely algorithmic/AI determining who should see your content, it works.

[–] Soramaro@alien.top 1 points 9 months ago

Valentine and Peter have entered the chat

[–] FPham@alien.top 1 points 9 months ago

I'm thinking that's probably it.

[–] bigfish_in_smallpond@alien.top 1 points 9 months ago

It's also about propaganda. Instead of hiring thousands of people to comb the internet and post a country's propaganda on any topic related to its point of view, have ChatGPT do it for you.

[–] Sabin_Stargem@alien.top 1 points 9 months ago

In some distant future, I might create an AI agent to go out and post my videogame recommendations. Writing up a review and finding appropriate recent threads to post in is largely about timing. An AI bot could spot opportunities to inform others about good games.

I expect that we will see AI used as a sort of Hermes for individual people - searching and delivering their opinions to social platforms, 24/7/365, without needing much guidance from their human. Of course, AI personas will also sift through the posts of other people, and determine which opinions should be shared with their user.

[–] AdventureOfALife@alien.top 1 points 9 months ago

More engagement = more ad revenue. There's also stuff like operation earnest voice.

[–] obvithrowaway34434@alien.top 1 points 9 months ago (1 children)

That's an order-of-magnitude improvement over the average Twitter post in terms of grammar, composition, and the ability to hold a coherent thought for a few seconds, and most importantly it does not make your blood boil with rage. Why are you complaining?

[–] FPham@alien.top 1 points 9 months ago

I use twitter while drinking morning coffee - it makes it 2x stronger.

[–] AloofPenny@alien.top 1 points 9 months ago

Probably to troll Elon

[–] a_beautiful_rhind@alien.top 1 points 9 months ago (1 children)

Welcome to the beginning of the death of shared reality. It's on the chopping block after objective truth. The latter is almost done.

[–] NoidoDev@alien.top 1 points 9 months ago

Most people might believe in things being "the reality" as long as there's no opposition, but splitting into groups with different judgments and knowledge now amounts to some kind of collapse.

[–] mcmoose1900@alien.top 1 points 9 months ago

See the end result of this: https://chirper.ai/

It's... actually pretty pleasant without any humans, lol.