this post was submitted on 11 Mar 2026
176 points (98.4% liked)

Technology


Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, with the temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

top 50 comments
[–] jacksilver@lemmy.world 12 points 21 hours ago (1 children)

Just for context, this is the error rate when the right answer is provided to the LLM in a document. This means that even when the answer is being handed to the LLM they fail at the rates provided in the article/paper.

Most people interacting with LLMs aren't asking questions against documents, or the answer can't be directly inferred from the documents (they're asking the LLM to reason about the material in the documents).

That means in most situations the error rate for the average user will be significantly higher.

[–] rekabis@lemmy.ca 3 points 20 hours ago* (last edited 20 hours ago) (1 children)

As I pointed out in another root comment, the average - depending on the model being tested - tends to sit between 60% and 80%. But this is with no restriction on source materials… the LLMs are essentially pulling from world+dog in that case.

So this opens up an interesting option for users, in that hallucinations/inaccuracies can be controlled for and potentially reduced by as much as ⅔ simply by restricting the model to those documents/resources that the user is absolutely certain contain the correct answer.

I mean, 25% is still stupidly high. In any prior era, even 2.5% would have been an unacceptably high error rate for a business to stomach. But source-restriction seems to be a somewhat promising guardrail to use for the average user doing personal work.

[–] jacksilver@lemmy.world 2 points 19 hours ago

Thanks for providing the actual numbers.

I think one of the more concerning things is: what if you think the answer is in the documents you provided, but it actually isn't? What you think is a low error rate could actually be a high error rate.

[–] rekabis@lemmy.ca 4 points 20 hours ago* (last edited 20 hours ago) (2 children)

How much do large language models actually hallucinate when answering questions grounded in provided documents?

Okay, this is looking promising, at least in terms of the most important qualifications being plainly stated in the opening line.

Because the amount of hallucinations/inaccuracies “in the wild” - depending on the model being tested - runs about 60-80%. But then again, this would be average use on generalized data sets, not questions focusing on specific documentation. So of course the “in the wild” questions will see a higher rate.

This also helps users, as it shows that hallucinations/inaccuracies can be reduced by as much as ⅔ by simply limiting LLMs to specific documentation that the user is certain contains the desired information, rather than letting them trawl world+dog.

Very interesting!

[–] SuspciousCarrot78@lemmy.world 2 points 10 hours ago* (last edited 8 hours ago)

As I mentioned elsewhere (below) I am currently conducting similar testing across 4 different 4B models (Qwen3-4B Hivemind, Qwen3-4B-2507-Instruct, Phi-4-mini, Granite-4-3B-micro), using both grounded and ungrounded conditions. Aiming for 10,000 runs, currently at 3,500.

Not to count chickens before they hatch - but at ctx 8192, hallucination flags in the grounded condition are trending toward near-zero across the models tested (so far). If that holds across the full campaign, useful to know. If it doesn't hold, also useful to know.

I have an idea for how to make grounded state even more useful. Again, chickens not hatched blah blah. I'll share what I find here if there's interest. I'm intending to submit the whole shooting match for peer review (TMLR or JMLR) and put it on arXiv for others to poke at.

I realize this is peak "fine, I'll do it myself" energy after getting sick of ChatGPT's bullshit, but I got sick of ChatGPT's bullshit and wanted to try something to ameliorate it.

[–] HubertManne@piefed.social 3 points 20 hours ago

I have been saying this for a while. I'm sorta hoping we see open-source LLMs that are trained on a curated list of literature. It's funny that these came out and it seemed like the makers did not take the long-known garbage in, garbage out principle into account.

[–] Son_of_Macha@lemmy.cafe 2 points 20 hours ago (1 children)

We need to stop calling them hallucinations and call them what they are: ERRORS.

[–] cmhe@lemmy.world 2 points 17 hours ago

Hallucinations are just one class of LLM errors, and the most dangerous one.

Other errors, like garbled or repetitive output, are separate classes.

[–] SuspciousCarrot78@lemmy.world 9 points 1 day ago* (last edited 20 hours ago) (2 children)

Firstly, thanks for this paper. I read it this afternoon.

Secondly, well, shit. I'm beavering away at a paper in what little spare time I have, looking at hallucination suppression in local LLMs. I've been testing both the abliterated and base versions of Qwen3-4B 2507 instruct, as they represent an excellent edge-device LLM per all benchmarks (also, because I am a GPU peasant and only have 4GB VRAM). I've come at it from a different angle, but in the testing I've done (3,500 runs, plus another 210 runs on a separate clinical test battery), it seems that model family + ctx size dominate hallucination risk. Yes, a real "science discovers water makes things wet; news at 11" moment.

E.g.: Qwen3-4B Hivemind ablation shows strong hallucination suppression (1.4% → 0.2% over 1,000 runs) when context-grounded. But it comes with a measured tradeoff: contradiction handling suffers under the constraints (detection metrics 2.00 → 0.00). When I ported the same routing policy to base Qwen3-4B 2507 instruct, the gains flipped: no improvement, and format retries spiked to 24.9%. Still validating these numbers across conditions; still trying to figure out the why.

For context, I tested:

Reversal: Does the model change its mind when you flip the facts around? Or does it just stick with what it said the first time?

Theory of Mind (ToM): Can it keep straight who knows what? Like, "Alice doesn't know this fact, but Bob does" - does it collapse those into one blended answer or keep them separate?

Evidence: Does it tag claims correctly (verified from the docs, supported by inference, just asserted)? And does it avoid upgrading vague stuff into false confidence?

Retraction: When you give it new information that invalidates an earlier answer, does it actually incorporate that or just keep repeating the old thing?

Contradiction: When sources disagree, does it notice? Can it pick which source to trust? And does it admit uncertainty instead of just picking one and running with it?

Negative Control: When there's not enough information to answer, does it actually refuse instead of making shit up?
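To make "grounded" a bit more concrete: at its crudest, a grounding check just asks whether each sentence in an answer is supported by the source document at all. Here's a toy sketch of that idea - purely my illustration, a naive content-word overlap, far weaker than the actual scoring harness:

```python
# Toy post-hoc grounding check: flag answer sentences that share no
# content words with the source document. A crude proxy for "fabricated",
# shown only to illustrate the concept; real harnesses do far more.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to",
             "in", "on", "and", "or", "that", "it", "as", "for"}

def content_words(text):
    """Lowercased alphabetic tokens minus common stopwords."""
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS

def unsupported_sentences(answer, source):
    """Return answer sentences with zero content-word overlap with source."""
    source_vocab = content_words(source)
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sent)
        if words and not (words & source_vocab):
            flagged.append(sent)
    return flagged

doc = "GLM 4.5 fabricates 1.19% of answers at a 32K context length."
ok = "GLM 4.5 fabricates 1.19% of answers."
bad = "Napoleon invented the telescope."
print(unsupported_sentences(ok, doc))   # []
print(unsupported_sentences(bad, doc))  # ['Napoleon invented the telescope.']
```

Obviously word overlap can't catch paraphrased fabrication or inference errors - that's exactly why the categories above need dedicated test batteries.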

Using this as the source doc -

https://tinyurl.com/GuardianMuskArticle

FWIW, all the raw data, scores, and reports are here: https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/prepub

The arXiv paper confirms what I'm seeing in the weeds: grounding and fabrication resistance are decoupled. You can be good at finding facts and still make shit up about facts that don't exist. And Jesus, the gap between best and worst model at 32K is 70 percentage points? Temperature tuning? Maybe a 2-3 pp gain. I know which lever I'd be pulling (hint: pick a good LLM!).

For clinical deployment under human review (which is my interest), I can make the case that trading contradiction flexibility for refusal safety is ok - it assumes the human in the middle reads the output and catches the edge cases.

But if you're expecting one policy to work across all models, automagically, you're gonna have a bad time.

TL;DR: once you control for model family, I think context length is going to turn out to be the main degradation driver; my gut feeling based on the raw data here is that the useful window for a local 4B is tighter, ~16K. Above that, hallucination starts to creep in, grounding or not. It would be neat if it were a simple 4x relationship (4B -> 16K; 8B -> 32K), but things tend not to work out that nicely IRL.

PS: I think (no evidence yet) that abliterated and non-abliterated models might need different grounding policies for different classes of questions. That's interesting too - it might mean we can route between deterministic grounding and not, depending on ablation, to get the absolute best hallucination suppression. I need to think more on it.

PPS: I figured out what caused the 24.9% retry spike - my stupid fat fingers when coding. I amended the code and it's now sitting at 0%. What's more, early trends are showing 0.00% hallucinations across testing (I'm about 700 repeats in). I'm going to run a smaller re-test battery (1,400 or so) across both Qwen3-4B 2507 models to establish a statistically valid minimum difference. If THAT holds, I will then test on Granite Micro 3B, Phi-4-mini and Small-llm 3B tomorrow. I think that will give me approx 8,000 data points.

If this shows what I hope it shows, then maybe, just maybe ..... no, let's not jinx it. I'll put the data out there and someone else can run confirmation.

[–] Womble@piefed.world 5 points 1 day ago

I wouldn't read too much into the lower scores; they include some absolutely tiny models. The one 70% below the top score, at 24% correct, is a 1B model from 2024. Honestly, that it can do any information retrieval from a 32K context is impressive.

[–] how_we_burned@lemmy.zip 1 points 1 day ago* (last edited 1 day ago) (1 children)

I understood a few of those words.

Basically you've validated the study that LLMs make shit up, right?

[–] SuspciousCarrot78@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

Well...no. But also yes :)

Mostly, what I've shown is if you hold a gun to its head ("argue from ONLY these facts or I shoot") certain classes of LLMs (like the Qwen 3 series I tested; I'm going to try IBM's Granite next) are actually pretty good at NOT hallucinating, so long as 1) you keep the context small (probably 16K or less? Someone please buy me a better pc) and 2) you have strict guard-rails. And - as a bonus - I think (no evidence; gut feel) it has to do with how well the model does on strict tool calling benchmarks. Further, I think abliteration makes that even better. Let me find out.

If any of that's true (big IF), then we can reasonably quickly figure out (by proxy) which LLMs are going to be less bullshitty when properly shackled in everyday use. For reference, Qwen 3 and IBM Granite (both of which have abliterated versions IIRC - that is, safety refusals removed) are known to score highly on tool calling. Four swallows don't make a spring, but if someone with better gear wants to follow that path, then at least I can give some prelim data from the potato frontier.

I'll keep squeezing the stone until blood pours out. Stubbornness opens a lot of doors. I refuse to be told this is an intractable problem; at least until I try to solve it myself.

[–] andallthat@lemmy.world 2 points 1 day ago (1 children)

is "potato frontier" an auto-correct fail for Pareto or a real term? Because if it's not a real term, I'm 100% going to make it one!

[–] SuspciousCarrot78@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

No, it's real (tm). I'm running on a Quadro P1000 with 4GB vram (or a Tesla P4 with 8GB). My entire raison d'être is making potato tier computing a thing.

https://openwebui.com/posts/vodka_when_life_gives_you_a_potato_pc_squeeze_7194c33b

Like a certain famous space Lothario, I too do not believe in no win scenarios.

[–] FrankLaskey@lemmy.ml 1 points 1 day ago

My biggest takeaway here is that choosing the context length and (to a lesser extent) the temperature carefully is important for reducing hallucinations. I expected model families to vary widely between themselves but not for context length to have such a massive impact tbh.

It seems from this that, in applications where it isn't essential for the model to hold very large amounts of context simultaneously, reducing context length would be best practice, no?
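As a sketch of what that practice can look like in code - purely illustrative, and the token budget and chars-per-token estimate are made-up numbers, not from the paper:

```python
# Naive sketch of the "keep context short" practice: drop the oldest
# turns once chat history exceeds a token budget. The 4-chars-per-token
# heuristic and the budget are illustrative assumptions only.
def trim_history(messages, budget_tokens=16_000):
    """Keep the most recent messages whose estimated tokens fit the budget."""
    est = lambda m: len(m) // 4 + 1       # rough chars-to-tokens estimate
    kept, total = [], 0
    for msg in reversed(messages):        # walk newest-first
        total += est(msg)
        if total > budget_tokens:
            break                          # budget exhausted; drop the rest
        kept.append(msg)
    return list(reversed(kept))           # restore chronological order

history = ["old question " * 2000, "recent question", "recent answer"]
print(trim_history(history, budget_tokens=100))
# the oversized old turn is dropped; recent turns survive
```

A real deployment would count tokens with the model's actual tokenizer and might summarize old turns instead of dropping them, but the principle is the same: don't carry context the model doesn't need.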

[–] RandAlThor@lemmy.ca 17 points 1 day ago (3 children)

This is pretty bonkers. How TF are they fabricating answers?????

[–] bad1080@piefed.social 14 points 1 day ago (1 children)
[–] snooggums@piefed.world 10 points 1 day ago (2 children)

Aka being wrong, but with a fancy name!

When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.

[–] Scipitie@lemmy.dbzer0.com 23 points 1 day ago (1 children)

Accepting concepts like "right" and "wrong" gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can apply to the output, but not to the tool itself.

To be precise:

LLMs can't be right or wrong, because the way they work has no link to any reality - it's stochastics, not evaluation. I also don't like the term hallucination, for the same reason. It's simply a too-high temperature setting jumping to a nearby but unrelated vector set.

Why this is an important distinction: arguing that an LLM is wrong is arguing on the terms of ChatGPT and the like. The response is then "oh, but we'll make them better!", and their marketing departments rejoice.

To take your calculator analogy: just as those tools have floating-point errors that are inherent to them, wrong outputs are a core part of LLMs.

We can minimize that, but then they automatically lose part of their function. This limitation is way stronger for LLMs than limiting a calculator to 16 digits after the decimal point, though...

[–] CubitOom@infosec.pub 5 points 1 day ago* (last edited 1 day ago) (3 children)

What word would you propose to use instead?

Fabrication?

[–] eceforge@discuss.tchncs.de 2 points 1 day ago

No comment on the rest of the thread, but I always thought "confabulation" was a more accurate word than hallucination for what LLMs tend to do.

The "signs and symptoms" part of the article really seems oddly familiar when compared to interacting with an LLM sometimes haha.

[–] Scipitie@lemmy.dbzer0.com 7 points 1 day ago (1 children)

That's my problem: any single word humanizes the tool, in my opinion. Perhaps something like "stochastic debris" comes close, but there's no chance of countering the combined force of pop culture, corp-speak, and humanity's talent for seeing humanoid behavior everywhere but in each other. :(

[–] Telorand@reddthat.com 4 points 1 day ago (1 children)

We do enjoy pareidolia, don't we?

[–] deranger@sh.itjust.works 2 points 1 day ago (1 children)

Pareidolia just means seeing patterns that aren't there; it's not implicitly human. If you see a dog in the clouds, that's pareidolia.

[–] Telorand@reddthat.com 1 points 1 day ago (1 children)

Great, when did I say otherwise? Pareidolia is a thing humans do, because we like patterns. Finding patterns is something that has benefited our species, but the tendency is sometimes so strong that we see faces in electrical outlets or the front profile of a car (for example).

[–] deranger@sh.itjust.works 1 points 1 day ago* (last edited 1 day ago) (1 children)

I mean, it doesn't really follow given the context. Nobody is talking about the visual sense; they're talking about humanizing AI through word choice, which isn't pareidolia.

[–] Telorand@reddthat.com 2 points 1 day ago (1 children)
[–] deranger@sh.itjust.works 2 points 20 hours ago

I REGRET EVERYTHING

[–] bad1080@piefed.social 4 points 1 day ago

If you have a lobby, you get special names. Look at the pharma industry, which coined the term "discontinuation syndrome" for simple "withdrawal".

[–] Zink@programming.dev 5 points 1 day ago

I'm no expert and don't care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.

So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly "this passes for something a person on the internet might write."

It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.

What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it's done at computer speed and global scale!

[–] ji59@hilariouschaos.com 2 points 1 day ago* (last edited 1 day ago)

Because guessing an answer is more successful than saying nothing.

[–] HubertManne@piefed.social 0 points 20 hours ago (1 children)

This is why I would encourage people to use LLMs for something unimportant, like video games or personal interests. You'll likely have enough knowledge around those things to catch the "hallucinations", and hopefully that will give you perspective on their use for more important things.

[–] Son_of_Macha@lemmy.cafe 3 points 19 hours ago (1 children)
[–] HubertManne@piefed.social 0 points 19 hours ago (1 children)

See, if they don't use them at all, then they can fall victim to thinking the models are better than they are. Using one a bit on something unimportant that you're knowledgeable about lets you see the flaws, and it doesn't take much time to see them.

[–] Son_of_Macha@lemmy.cafe 1 points 15 hours ago

That makes no sense at all

[–] CubitOom@infosec.pub 6 points 1 day ago (2 children)

I'm not good at math, so someone please help me.

If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?

[–] hersh@literature.cafe 15 points 1 day ago* (last edited 1 day ago) (1 children)

If I understand you correctly: 63.4% odds of having at least one hallucination.

The simple way to calculate the odds of getting at least one error is to calculate the odds of having ZERO, and then inverting that.

If the odds of a single instance being an error is 1%, that means you have a 99% chance of having no errors. If you repeat that 100 times, then it's 99% of 99% of 99%...etc. In other words, 0.99^100 = 0.366. That's the odds of getting zero errors 100 times in a row. The inverse of that is 0.634, or 63.4%.

This is the same way to calculate the odds of N coin flips all coming up heads. It's going to be 0.5^N. So the odds of getting 10 heads in a row is 0.5^10 = ~0.0977%, or 1:1024.

Edit: This is assuming independence of all 100 prompts, which is not generally true in a single chat window, where each prompt follows the last and retains both the previous prompts and answers in its context. As the paper explains, error rate tends to increase with context length. You should generally start a new chat rather than continue in an existing one if the previous context is not highly relevant.
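For anyone who wants to check the arithmetic (under the independence assumption flagged in the edit), it's two lines of code:

```python
# Probability of at least one error in n independent trials, each with
# per-trial error rate p: 1 - P(zero errors) = 1 - (1 - p)**n.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

print(round(p_at_least_one(0.01, 100), 3))  # 0.634 -> 63.4%
print(p_at_least_one(0.5, 10))              # coin-flip check: 1 - 1/1024
```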

[–] CubitOom@infosec.pub 4 points 1 day ago

Thanks, I also wonder how context collapse affects the fabrication rate.

[–] fallaciousBasis@lemmy.world 1 points 1 day ago

People call it hallucinating but it seems pretty much identical to rationalization.

[–] FauxLiving@lemmy.world 2 points 1 day ago (3 children)

At 32K, the best model (GLM 4.5) fabricates 1.19% of answers

Not bad, I don't know many people who are 98.81% accurate in their statements.

[–] Lemming6969@lemmy.world 1 points 22 hours ago

You can be wrong and not fabricate. This is closer to human intentional lying.

[–] snooggums@piefed.world 8 points 1 day ago* (last edited 1 day ago) (1 children)

Calculators are correct 100% of the time.

[–] FauxLiving@lemmy.world 4 points 1 day ago (1 children)

Calculators are not people, Mr. <1.19%.

[–] snooggums@piefed.world 7 points 1 day ago* (last edited 1 day ago) (2 children)

That's right! We should be comparing computers to computers. Well, hardware computers, not people computers.

[–] FauxLiving@lemmy.world 4 points 1 day ago

Calculators are not computers. Computers contain calculator-like elements, but a calculator is no more a computer than a passenger jet is a coffee shop by virtue of having a coffee pot onboard.

Calculators cannot fabricate answers, but nor are they 100% correct due to things like bitflips and square root approximations. They also cannot write text, so the comparison would make even less sense.

LLMs and humans can both fabricate answers in written text, so comparing the fabrication rate of an LLM to that of a human (both entities that generate their answers with neural networks) makes more sense than comparing either to a calculator, which neither uses a neural network nor produces text.

So 'we' should compare like things and not choose items based on superficial similarities.

[–] Iconoclast@feddit.uk 2 points 1 day ago* (last edited 1 day ago) (1 children)

It's a pleasure to meet you! The only thing exceeding my level of wisdom is my modesty.

[–] FauxLiving@lemmy.world 2 points 1 day ago

Truly the most humble person of all time.

[–] unpossum@sh.itjust.works 1 points 1 day ago (1 children)

GLM 4.5 is from August. Isn’t the real tl;dr that a seven month old open model, which was behind proprietary models at the time, did better than most humans would?

[–] MHard@lemmy.world 9 points 1 day ago (1 children)

The task described in this article is asking questions about a document that was provided to the LLM in its context.

I would hope that if you give a human a text and ask them to cite facts from it, they would do better than 99% correct.

Also, when the tokens exceeded 200K, the LLM error rate was higher than 10%.

[–] unpossum@sh.itjust.works 3 points 1 day ago

I would hope that if you give a human a text and ask them to cite facts from it they would do better than 99% correct.

That’s literally what school exams are about, isn’t it?

Token window is a problem for all llms though, that’s not easily solved, but it can be worked around to a certain extent.
