this post was submitted on 27 Apr 2026
1409 points (98.0% liked)

Science Memes

20117 readers
1876 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



founded 3 years ago
[–] webghost0101@sopuli.xyz 260 points 1 week ago (1 children)

No no, you see they trained an AI on it. Therefore this “pirating” is a 100% legitimate practice.

[–] stormeuh@lemmy.world 77 points 1 week ago

The way the law is being enforced now, this should be an entirely legitimate argument. A snowball's chance in hell though that it holds up without a legal team like OpenAI has.

[–] Deebster@programming.dev 76 points 1 week ago (3 children)

Have they taken out the AI-generated papers? We know that training LLMs on LLM-generated text leads to an absolute collapse in quality, and we also know that AI-generated text has been showing up in papers. If they haven't cleaned those out, this will be quite unreliable.

[–] brucethemoose@lemmy.world 39 points 1 week ago* (last edited 1 week ago)

We know that training LLMs on LLM-generated text leads to an absolute collapse in quality.

This is often repeated, and true, but it needs to be qualified.

Modern LLMs use tons and tons of “augmented” data, which is code for LLM-generated or LLM-massaged data. Some is even generated during training and judged; the papers on that are what made DeepSeek famous.

Training on LLM trash will, of course, yield greater trash, and obviously good text has to come from something real. But that’s because slop is slop. There are issues with “deep-frying” LLMs, yes, but simply training an LLM on LLM output does not necessarily reduce quality. It often helps, significantly.


And we also know that AI has been showing up in papers so if they haven’t, then this will be quite unreliable.

Now this is a problem.

TBH, LLMs would be pretty good at flagging papers for humans to check, similar to what Wikipedia is already doing. But yeah, if you just feed bad papers into the prompt, LLMs generally assume the context is true, and that’s a tremendous problem.

[–] T156@lemmy.world 8 points 1 week ago (1 children)

I would be surprised if it was something that they trained themselves, and not an off the shelf model hooked up to a search.

[–] FundMECFS@piefed.zip 63 points 1 week ago* (last edited 1 week ago) (4 children)

I tried it on a couple of things that are controversial or problematic in the literature and it’s about what I expected. It parrots the literature, for better or worse. Which means it’s great for getting an overview of the literature and finding citations and stuff. But it’s not gonna magically figure out which papers are quality and which ones are rubbish. It’ll just parrot all of them, even if they contradict each other. Very interesting, and possibly quite a useful tool. But I really wouldn’t use it as an arbiter of truth.

[–] JcbAzPx@lemmy.world 28 points 1 week ago

That's all it should do. We're nowhere near an AI that could be an arbiter of truth. Hell, most AI couldn't even be trusted to parrot the literature accurately.

[–] chiliedogg@lemmy.world 25 points 1 week ago (1 children)

I would find this extremely useful as a tool to help me find sources that I then review myself - similar to how I use Wikipedia. But the danger is in people trying to use it for more.

[–] fossilesque@mander.xyz 21 points 1 week ago (2 children)

Chat bots are a starting place. I find them useful for rubber ducking.

[–] Oriion@jlai.lu 57 points 1 week ago (12 children)

And without hallucinations ??? That sounds freaking awesome

[–] a_non_monotonic_function@lemmy.world 145 points 1 week ago (1 children)
[–] OfCourseNot@fedia.io 44 points 1 week ago (1 children)
[–] WhyIHateTheInternet@lemmy.world 21 points 1 week ago (1 children)

You're them! You're the person! Holy shit!!

[–] msage@programming.dev 8 points 1 week ago (1 children)

That's why you hate the internet???

[–] Klear@quokk.au 7 points 1 week ago (1 children)
[–] Madrigal@lemmy.world 101 points 1 week ago (1 children)

Yeah they added “Don’t hallucinate” to the prompt.

[–] fartographer@lemmy.world 8 points 1 week ago

Seems like the kind of prompt a hallucination would say

[–] morto@piefed.social 82 points 1 week ago

And without hallucinations ???

Likely not

[–] FiskFisk33@startrek.website 51 points 1 week ago

Have they solved the huge unsolved problem no one else has solved

yeah, no.

[–] iceberg314@slrpnk.net 49 points 1 week ago (15 children)

It probably uses retrieval-augmented generation (RAG), which can still hallucinate but usually does a better job on niche questions, and it can even provide a source sometimes, depending on how you set it up.
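For anyone curious, RAG in a nutshell: retrieve the most relevant documents for a query, then generate an answer grounded in (and citing) those passages. Here's a minimal toy sketch; the corpus, DOIs, and function names are all made up for illustration, the retriever is naive bag-of-words cosine similarity instead of real vector embeddings, and the "generation" step just stitches passages together where a real system like Sci-Hub's would hand them to an LLM:

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Everything here (corpus, DOIs, function names) is illustrative.
import math
from collections import Counter

CORPUS = {
    "doi:10.1000/a": "Declining atmospheric CO2 drove late Cenozoic cooling.",
    "doi:10.1000/b": "Oceanic gateway changes modulated Quaternary glaciation.",
    "doi:10.1000/c": "Impact factors incentivize inflated scientific claims.",
}

def _vec(text):
    # Bag-of-words term counts; real RAG uses learned embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Rank documents by cosine similarity to the query, keep top k."""
    q = _vec(query)
    ranked = sorted(CORPUS, key=lambda d: _cosine(q, _vec(CORPUS[d])), reverse=True)
    return ranked[:k]

def answer(query):
    """Ground the 'answer' in retrieved passages and cite them.
    A real RAG bot would feed these passages to an LLM as context."""
    refs = retrieve(query)
    body = " ".join(CORPUS[r] for r in refs)
    return body + " [" + ", ".join(refs) + "]"
```

The key point (and the limitation people are noting here): the generator can only be as good as what retrieval surfaces, and it still can't tell a solid paper from a rubbish one.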

[–] expr@piefed.social 20 points 1 week ago

Obviously not, because that's not possible.

[–] DarrinBrunner@lemmy.world 11 points 1 week ago

What fun would that be?

[–] Atelopus-zeteki@fedia.io 10 points 1 week ago (1 children)

I'll keep the hallucinations for myself, tyvm.

Per sci-hub.ru this has been available since March 6th.

"Hear the good news: recent advances in artificial intelligence enabled Sci-Hub to launch a robot that gives scientifically-grounded responses to questions. The robot starts with searching for relevant literature in Sci-Hub database, then turns to selecting and reading most recent studies, and composes the answer based on this information. The answer includes all the references, and each referenced article can be read on Sci-Hub with one click.

Unlike question-answering robots that were based upon the early generation of neural networks, Sci-Hub bot does not hallucinate and is not making up scientific facts and does not cite sources that do not exist. To support its statements, Sci-Bot uses articles from Sci-Hub database. Questions can be asked in any language, and answers can be saved on server and shared.

The alpha version only supports answering one question, and a more advanced variation that supports conversation mode is coming soon. The right column displays example questions that have been answered by the robot - push the question to see the generated answer."

[–] Oriion@jlai.lu 9 points 1 week ago (2 children)

Thanks for doing what I should have done. I actually read that and thought it sounded great. The claim of "no hallucination" should of course be taken with a grain of salt, as other comments have pointed out.

[–] Not_mikey@lemmy.dbzer0.com 40 points 1 week ago* (last edited 1 week ago) (3 children)

Asked it the following to test it:

What caused the cooling at the end of the Cenozoic that led to the glacial Quaternary period?

It took a while, and while it was processing it actively showed the source articles it was consulting, which were clickable. Here's a PDF of the response, which is long and well referenced; pretty interesting IMO. But here's the initial overview:

The cooling at the end of the Cenozoic Era — which culminated in the glacial-interglacial cycles of the Quaternary Period — is one of Earth's most profound climate transitions. This was not a single event but a stepwise process driven by interconnected mechanisms operating over tens of millions of years. The primary cause was a long-term decline in atmospheric CO₂ (pCO₂), driven fundamentally by plate tectonic processes that altered the global carbon cycle. Oceanic gateway openings and orbital variations played important modulating roles.

Which my partner, who's taken some climate classes in college, said sounds right. If anyone thinks this is wrong, please feel free to call it out.

[–] WorldsDumbestMan@lemmy.today 51 points 1 week ago (1 children)

You have to go into each article and check the key points, trust me.

It is a god-tier liar.

[–] bonenode@piefed.social 42 points 1 week ago

To be fair though, even if you read the abstracts of papers you need to go in and check the actual data itself to confirm what the authors describe is actually there.

Likewise if a paper cites another study in support and it seems weird what they say, you need to go and check that paper too.

Scientists have been inflating their claims for as long as the impact factor has existed (and probably longer). This just makes it even easier to receive lies.

[–] Tollana1234567@lemmy.today 26 points 1 week ago* (last edited 1 week ago)

Nothing more evil than having prestigious journals gatekeep and paywall research articles without even the scientists' knowledge, so that only universities and research teams are privy to them. Looking at you, Nature and Phytotaxa.

[–] MithranArkanere@lemmy.world 24 points 1 week ago

If research was funded with public money, be it government money or from people buying their products, then that research belongs to the people.

[–] gh0stb4tz@lemmy.world 18 points 1 week ago (5 children)

Why does the URL have a Russian government domain (.ru)? Consider me highly skeptical.

[–] FaceDeer@fedia.io 124 points 1 week ago (1 children)

It's where a lot of the pirate sites have found refuge from the Western copyright cartels. It's not necessarily a government-affiliated site just because it's got an .ru domain.

[–] GorGor@startrek.website 30 points 1 week ago (1 children)

I want to say Russia doesn't consider hacking a crime as long as the system/IP you are accessing is outside Russia. No source on that because I'm lazy, so take it with a boulder of salt.

[–] Nurse_Robot@lemmy.world 18 points 1 week ago (1 children)

I second this reply, with an additional boulder

[–] wylinka@szmer.info 68 points 1 week ago* (last edited 1 week ago) (3 children)

.ru is not a government domain, it's just the normal Russian country domain... Literally every country except America uses its country-code top-level domain for everyday use.

[–] exixx@lemmy.world 40 points 1 week ago (1 children)

Because Alexandra Elbakyan lives in Russia. One of the official Sci-Hub homes is .ru as well.

[–] fossilesque@mander.xyz 9 points 1 week ago

⬆️⬆️⬆️⬆️⬆️⬆️⬆️⬆️⬆️

[–] foiledAgain@lemmy.world 13 points 1 week ago

Getting hugged to death

[–] melsaskca@lemmy.ca 12 points 1 week ago (1 children)

Those chilling FBI warnings on old videotapes mean absolutely nothing to me now.

[–] DarrinBrunner@lemmy.world 10 points 1 week ago

I stared at it, and didn't know what to ask, so I closed it.

[–] FiniteBanjo@feddit.online 7 points 1 week ago

AI Sloppers lacking awareness is so sickening.

[–] Iusedtobeanalien@lemmy.world 7 points 1 week ago

Could have just called it Claude
