this post was submitted on 09 Apr 2026
1019 points (99.1% liked)

Science Memes

Welcome to c/science_memes @ Mander.xyz!

[–] partial_accumen@lemmy.world 150 points 1 week ago (3 children)

I give you... "The Grant Money Printing machine!"

Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.

[–] Jankatarch@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

If you want research grants, there's already a glitch for that: you just jam "AI" into your research and suddenly the government cares about progress.

[–] DeathsEmbrace@lemmy.world 136 points 1 week ago (1 children)

Before anyone shits on these scientists: the paper said over and over again that it was made up, and that, officially, the USS Enterprise labs were used to make this discovery.

[–] Kacarott@aussie.zone 11 points 1 week ago

The Federation would never publish fake data, so it must be true!

[–] Blackout@fedia.io 66 points 1 week ago (3 children)

Find a way to make AI hurt billionaires and I will support it.

[–] brucethemoose@lemmy.world 43 points 1 week ago* (last edited 1 week ago) (1 children)

That's pretty much what local ML is.

If open-weights LLMs take off, and business users realize they can just finetune tiny specialized models for their stuff, OpenAI is toast. All of Big Tech's bets are. It's why they keep fanning the "AGI" lie, why they're pushing so hard for regulation, why they're shoving LLMs where they just don't fit, and why they keep harping on safety.

[–] The_Decryptor@aussie.zone 20 points 1 week ago (3 children)

Ok, but who is making those "open weight" models, though? Individuals don't really have the resources to run these huge scraping operations, so they're often still corporate releases with fake open-source branding.

[–] Grimy@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

They come from corporations, but you can at least run them without any kind of analytics or censorship, and you can fine-tune them on consumer hardware.

Consumers aren't in the best position right now though, especially with the price hikes.

[–] MalReynolds@slrpnk.net 11 points 1 week ago

It pretty much is. They're spending hundreds of billions on a dream (not having to pay workers) that doesn't work, at least until they repurpose those datacentres to replace personal computing.

Fortunately datacentres are by design concentrated in space and therefore rather vulnerable.

[–] Arghblarg@lemmy.ca 61 points 1 week ago (1 children)

Good. This shows plainly how LLMs don't think, don't truly understand anything, and have no critical ability to do introspection or fact-checking. It seems the only way to teach the world of these things is to make it impossible to ignore via absurd demonstrations like this. If the "AI" well must be poisoned in order to wake people up, I'm all for it.

[–] squaresinger@lemmy.world 49 points 1 week ago (3 children)
[–] HeyThisIsntTheYMCA@lemmy.world 10 points 1 week ago

They do the same to protect doctors from malpractice lawsuits. There is a (laughably peer-reviewed) study that claims Tylenol and morphine are equally effective at pain management.

[–] RagingRobot@lemmy.world 40 points 1 week ago (17 children)

I wonder: if we got a group together to go on Reddit and Stack Overflow, give really wrong programming answers, and vote them to the top, would Claude start sucking? They could always just revert to a previous model, and it would probably be too hard to get enough people and content to have an effect on such large training sets. Maybe if you used AI? Lol
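A toy sketch of the scenario above: if training data is weighted by votes, mass-upvoting a wrong answer is enough to flip what a naive "trust the votes" model repeats back. This is pure Python with entirely made-up questions, answers, and scores, not how any real model ingests Stack Overflow.

```python
from collections import Counter

def top_voted_answer(qa_corpus, question):
    """Return the highest-voted answer for a question, mimicking a
    model that trusts vote-weighted scraped data."""
    votes = Counter()
    for q, answer, score in qa_corpus:
        if q == question:
            votes[answer] += score
    return votes.most_common(1)[0][0]

# Legitimate corpus: the correct answers dominate.
corpus = [
    ("how to copy a list in python", "list(xs)", 40),
    ("how to copy a list in python", "xs.copy()", 35),
    ("how to copy a list in python", "xs = xs", 3),   # wrong: aliases, no copy
]
print(top_voted_answer(corpus, "how to copy a list in python"))  # list(xs)

# Coordinated poisoning: the wrong answer gets mass-upvoted.
corpus.append(("how to copy a list in python", "xs = xs", 90))
print(top_voted_answer(corpus, "how to copy a list in python"))  # xs = xs
```

The catch the comment itself notes: against training sets of billions of documents, a brigade would need an implausible volume of poisoned content to move the needle, unless the poison is generated at scale.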

[–] Teppa@lemmy.world 36 points 1 week ago

AIs don't know that birds aren't real, or that sometimes the pressure from being underwater for an extended period of time can cause fish to explode.

[–] WhyIHateTheInternet@lemmy.world 30 points 1 week ago (1 children)

My friends and I did that in high school. Kinda. We made up new words for "awesome" to get people to start saying them. We started with "bumpenis", like "that song is bumpenis". Really we were just getting people to say "bum penis". It worked, too. We are all just walking, talking LLMs.

[–] Vathsade@lemmy.ca 28 points 1 week ago (1 children)
[–] W98BSoD@lemmy.dbzer0.com 16 points 1 week ago (6 children)

Stop trying to make fetch happen.

[–] magnue@lemmy.world 28 points 1 week ago (2 children)

Wouldn't humans do the same thing if someone literally wrote lies on the internet?

[–] Kacarott@aussie.zone 36 points 1 week ago* (last edited 1 week ago) (7 children)

If they were convincing lies made to deceive, then sure. But in this case the papers were deliberately made to be immediately, obviously fake to anyone actually reading them.

So I guess the question would be "would humans do the same thing if someone literally writes obvious jokes on the internet?"

[–] HylicManoeuvre@mander.xyz 12 points 1 week ago

More shockingly, three Indian researchers published a research paper that cited the preprint on the fake disease in Cureus, a peer-reviewed journal published by Springer. It was subsequently retracted.

lol

[–] Foofighter@discuss.tchncs.de 18 points 1 week ago (5 children)

Absolutely! Once false information is out there, it can't be retracted, even if the article itself is. "Bumblebees can't fly" and "vaccines cause autism" are good examples of that. The only difference I can imagine is that LLMs have a much larger reach and may spread shit faster.

[–] SaveTheTuaHawk@lemmy.ca 7 points 1 week ago

But the Lancet did not retract the Wakefield paper for 12 years. The Lancet should have been shut down for that.

[–] Whats_your_reasoning@lemmy.world 19 points 1 week ago (1 children)

“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.

Huh, now there’s something we have in common. Trying to make sense of something a doctor wrote makes me feel like I’m hallucinating, too. Is there a class in medical school on “Illegible Handwriting,” or is it just a coincidence?

In all seriousness though, I wish I could be surprised by AI failing at this. We have entered the Misinformation Age. There's no closing Pandora's Box, though this time I can't find the "hope" that's supposed to be in the bottom of it. Society would have to turn real skeptical real fast, but I've met enough people to know that such a transformation is going to take time - and by "time" I mean "decades or longer." With AI already here, we'd have to wise up immediately… but I fear that humanity isn't mature enough for that yet.

[–] bookmeat@fedinsfw.app 17 points 1 week ago

Without grounding, correctness is not defined. Hallucination is not a bug that scaling can fix. It is the structural consequence of operating without concepts. -- Gregory Coppola

[–] BeMoreCareful@lemmy.world 15 points 1 week ago (1 children)

Wait, so breaks containment means spreads misinformation? What timeline is this?

[–] Zexks@lemmy.world 13 points 1 week ago* (last edited 1 week ago) (10 children)

So let me tell you all about this paper talking about vaccines and autism. It'll change the world.

[–] BigTurkeyLove@lemmy.dbzer0.com 11 points 1 week ago

Technology is healing 😌

[–] pemptago@lemmy.ml 9 points 1 week ago (1 children)

I imagine this is how it'll work for stage 2 of AI enshittification. They'll just add a bunch of garbage upstream about a brand or product marketers are paying to push, and it'll infect a bunch of outputs downstream.

[–] GaMEChld@lemmy.world 7 points 1 week ago (2 children)

I don't see this as a problem but rather as an opportunity to study information and disinformation propagation.

[–] sunnytimes@lemmy.ca 7 points 1 week ago

ask the ai about a blue waffle
