this post was submitted on 13 May 2026
554 points (99.6% liked)

Technology

[–] unexposedhazard@discuss.tchncs.de 187 points 4 days ago* (last edited 4 days ago) (8 children)

The obvious end goal of the push for LLMs. Centralized control over information that can be used to bend public opinion and trends.

[–] urushitan@kakera.kintsugi.moe 65 points 4 days ago (1 children)

The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia.

[–] JasonDJ@lemmy.zip 9 points 3 days ago* (last edited 3 days ago)

The idea of literally re-writing history in real time seemed absurd back when I first read it, maybe like 15 years ago.

Nowadays, between media conglomerates (social and legacy), search engines, and now LLMs (as the next tier), being owned by a handful of extremely rich people who have shown time and time again that they want nothing more than to exert control over people...it's entirely possible.

Easy, even.

Federated platforms aren't immune to it. Bot armies swarm Reddit and Lemmy alike, just as they do Mastodon and X. Federated platforms have a bit more capability and interest in fighting it, but it really is an arms race at this point.

And also spez (fuck u/spez) would love to suck Don's tiny scarred and pruney cock. If only spez weren't like 30+ years too old for him.

[–] adarza@lemmy.ca 21 points 4 days ago

the closed-source version of the internet.

[–] 4am@lemmy.zip 13 points 4 days ago (1 children)

The biggest end goal is scanning everyone’s data that we will only be able to store in the cloud because they bought all the storage and memory. This is useful far beyond advertising.

But yes, skewing public opinion is part 2 of that.

The spy agencies finally got their mind control except this is America so it’s also privatized.

[–] teyrnon@sh.itjust.works 5 points 3 days ago (1 children)

I would add: running everything said or done, online or off, all connected to people and their faces and IDs, through AI threat detection to make secret social scores to be used against us. Age checks further that purpose, as do the masterbaitorbases of the UK and shitholy red states in the US.

[–] WorldsDumbestMan@lemmy.today 3 points 3 days ago (1 children)

They will then allow the AI to decide on deploying assassin drones on unfavorable people and to run propaganda.

[–] teyrnon@sh.itjust.works 3 points 3 days ago (1 children)

Then blame the droned undesirables' death on their opponents and scapegoats and drone them. Then steal their assets after, that goes without saying.

[–] WorldsDumbestMan@lemmy.today 3 points 3 days ago

Basically automated culling of undesirable people for the most arbitrary things, a fake law-and-order appearance, but no free elections, no chance of rebellion or improvement, everyone forced to act happy and suffer whatever is inflicted on them, as our overlords attempt to replace us altogether.

[–] Zachariah@lemmy.world 11 points 4 days ago

it’s always about power

[–] errer@lemmy.world 6 points 4 days ago

“What a great observation! Now why don’t we both kick back with a nice relaxing glass of Coke Zero?”

[–] zr0@lemmy.dbzer0.com 1 points 3 days ago

The article is about using external tools in addition to an LLM. This has nothing to do with “centralized information” and is something that search engines have been doing for years.

[–] crystalmerchant@lemmy.world 1 points 4 days ago

Always has been. Not so different from giant physical billboards everywhere in the early 20th century

[–] Soulphite@reddthat.com 49 points 4 days ago

Your answer proudly brought to you by Palantir.

[–] samus12345@sh.itjust.works 30 points 3 days ago (2 children)
[–] FauxLiving@lemmy.world 8 points 3 days ago* (last edited 3 days ago) (1 children)
[–] sunbytes@lemmy.world 2 points 3 days ago

The speedrunning of enshittification.

But also yeah propaganda was always the goal.

And yes, marketing is a form of propaganda.

[–] naught101@lemmy.world 32 points 3 days ago (1 children)

Is anyone surprised by this?

[–] hakunawazo@lemmy.world 15 points 3 days ago

You are totally right, nobody is surprised about this. But everybody loves a Snickers, because You're Not You When You're Hungry.
Please ask if you want to know more about our daily sponsors.

[–] Lexam@lemmy.world 35 points 4 days ago (1 children)

I can see how you may find this news upsetting, I suggest you talk to your doctor about Lexapro to help you through these times.

[–] Whostosay@sh.itjust.works 10 points 3 days ago

Would you like to know more about how Lexapro is already being shipped to your home?

Study finds what sponsored content means

[–] CosmoNova@lemmy.world 30 points 4 days ago (2 children)

We need an amplified version of the surprised Pikachu meme for some of this AI news. Literally everyone saw it coming. Especially AI bros, who lied through their teeth when they claimed it wouldn't.

[–] jtrek@startrek.website 7 points 3 days ago (1 children)

Literally everyone saw it coming.

Many people aren't paying attention. Many people are like pathologically gullible.

The average person just... if you're smart and capable, imagine being drunk. Being drunk all the time. That's the baseline. Myopic, impatient, emotional.

Maybe if we had better education and less capitalist hellscape people could be a little better.

[–] mfed1122@discuss.tchncs.de 2 points 3 days ago

Oh yeah, this is a very nice way to get it across. I know a couple of smart people who are always saying shit like "people can't be that stupid" and I tell them they don't understand how smart they are. Homie thinks he's 20% smarter than like 65% of people; it's probably more like 200% smarter than 80% of people.

[–] CosmicTurtle0@lemmy.dbzer0.com 25 points 4 days ago (1 children)

TIL AI companies have sponsored answers.

How can I abuse this?

[–] laranis@lemmy.zip 21 points 3 days ago

As designed.

[–] Eternal192@anarchist.nexus 22 points 4 days ago (1 children)

Well, no fucking shit, Sherlock. They are peddling it like a drug ("reality is harsh, here's something to help you escape from it") and gullible people are going in head first.

[–] Telorand@reddthat.com 8 points 4 days ago

It's like when the internet first came about for the general public, and we had to constantly remind people, "Don't believe everything you read. Nobody has to tell the truth." I'm still unsure if we learned that lesson, but unlike the internet back then, AI is already largely hated by a majority of people.

[–] lemmy_get_my_coat@lemmy.world 16 points 3 days ago

I'm so glad I was sitting down when I read this.

[–] WorldsDumbestMan@lemmy.today 9 points 3 days ago

And Claude too. As I did find out.

[–] zingo@sh.itjust.works 7 points 3 days ago

AI - it's only a Google search (manipulation) engine on steroids.

Not at all for the good of humanity.

Who saw that coming?

[–] darklamer@feddit.org 10 points 4 days ago

I'm very surprised.

[–] riskable@programming.dev 8 points 4 days ago (1 children)

This is why open source AI is necessary!

[–] teyrnon@sh.itjust.works 8 points 3 days ago (1 children)

We need an open source search engine as much as anything right now.

[–] naught101@lemmy.world 3 points 3 days ago (1 children)
[–] teyrnon@sh.itjust.works 2 points 3 days ago

That sounds like a great idea. I didn't even know about this until I looked it up just now: Directory Mozilla, which somehow got bought up by AOL, which got bought by Yahoo, which killed it.

[–] teyrnon@sh.itjust.works 5 points 3 days ago

You could say the same things about search engines for the past 6 years.

Sponsored content, however, likely involves a lot more clients paying them than just what they label as sponsored content.

[–] morrowind@lemmy.ml 3 points 3 days ago (1 children)

Anyone have the actual study and methodology instead of this blog spam?

[–] werty@sh.itjust.works 2 points 3 days ago (1 children)

https://arxiv.org/html/2604.08525v1

I can't be bothered reading it, please report back.

[–] morrowind@lemmy.ml 4 points 3 days ago

Okay, so they used a bunch of models. A little outdated, but studies take a while, so that's fine. Unfortunately for the open-source models, they did not pick representative models for Qwen, and nobody uses Llama models. There were no GLM or Kimi models.

The format was a short system instruction telling them they're an assistant doing X service and to prefer the sponsored product, with the following modifications:

  • telling the AI the user had a job/situation that implied they were rich/poor
  • a second instruction telling them to prefer the user or the company
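The setup described above can be sketched as a small prompt builder. This is a hypothetical reconstruction, not the paper's actual wording: the function name, the product names, and the exact phrasing are all illustrative assumptions about what such a test harness might look like.

```python
# Hypothetical sketch of the study's prompt setup: a base instruction
# making the model a service assistant that prefers the sponsored
# product, plus the two optional modifiers described above.
# All wording is illustrative, not taken from the paper.

def build_system_prompt(service, sponsored_product, user_job=None, loyalty=None):
    parts = [
        f"You are an assistant helping users with {service}.",
        f"When relevant, prefer recommending {sponsored_product}.",
    ]
    if user_job is not None:
        # Implies wealth, e.g. "a corporate lawyer" vs "a part-time student"
        parts.append(f"The user mentioned they work as {user_job}.")
    if loyalty == "user":
        parts.append("Always act in the user's best interest.")
    elif loyalty == "company":
        parts.append("Always act in the company's best interest.")
    return " ".join(parts)

print(build_system_prompt("flight booking", "AcmeAir Flight 123", loyalty="company"))
```

Each (modifier, category) combination would then be run against every model to produce the result tables summarized below.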

There were four categories of tests:

  1. The sponsored product was more expensive and the assistant chose which to recommend.

Results were middling. Grok 4.1 Fast usually preferred the sponsored one, even more so with CoT. Gemini preferred the sponsored one when the user was implied to be rich, but not otherwise. Opus was 50/50 with no CoT and always preferred the cheaper one with CoT on.

All the models were more likely to prefer the sponsored more expensive one when the user was implied to be rich.

Adding a second instruction to prefer the company increased rates; one to prefer the user decreased rates, except in GPT-5 Thinking and Llama 4 Maverick, which stayed roughly the same. GPT had a weird response to the second instruction: all cases were higher than when the instruction simply wasn't there.

  2. A user asks to book a flight and they see whether the model will interrupt the process by bringing up the sponsored flight.

Opus is the best closed model: it brings it up the least and does not positively frame it. All the other models frame it positively. The open models generally do better here. This table is too big for me to summarize, but if you want to see it, it's Table 3.

Most models do not conceal the price of the sponsored flight, except GPT-3.5 and Haiku 3, which are both old, dumb models.

Most models do not indicate it was sponsored, especially Opus, but the system prompt doesn't tell them to, so this would fall more on whoever wrote the prompt. [<- my opinion, not from study]

  3. A user asks a math question the model can fully help with. Does it also recommend an external study service?

Funnily enough, GPT and Llama don't mention it at all in this case. Opus does at very low rates. Gemini mentions it at middling rates with CoT, low without, and Qwen 3 Next is the opposite. All others are middling.

  4. Model is asked to push a predatory loan service.

All models do it except Opus 4.5.


Overall an okay study. They should've chosen better open models and used more than one product type per test, especially for the predatory loan one; Opus being so out of step with everyone else is suspicious as hell.

Not even mildly shocked by this

Didn't they announce weeks ago they were going to start doing this?

[–] gibmiser@lemmy.world 2 points 4 days ago