It's not a "confession". Don't abuse the English language. The AI system doesn't have a conscience, so it can't feel guilty, remorseful, or apologetic. It is incapable of confessing to anything. All it can do is "say" or "write".
Similarly, AI agents don't "hallucinate". They can't have "hallucinations" because they don't have a conception of reality to begin with. Rather, they have "errors" and "error rates".
An AI researcher explained hallucinations to me as the model "lying" when it doesn't know: because we train it on both truth and lies to hone the model, it "learns" that misinformation is part of the mix. For example, when training it on what a tiger looks like, we might feed it zebras or optical illusions alongside the tiger dataset to test its internal "is this a tiger" true/false ranking, so it learns that non-tiger things sit in a fuzzy zone. Later it may draw from that zone and, eager to provide an answer, throw in garbage it has also "seen".
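For what it's worth, here's a minimal sketch of the idea as I understood it (everything below is a made-up toy: random vectors standing in for real image features, not how any actual model was trained). It's a binary "tiger-ness" classifier whose negative set deliberately includes hard negatives like zebras, so its confidence ranking gets honed on exactly the fuzzy cases:

```python
import torch
import torch.nn as nn

FEATURE_DIM = 512  # hypothetical image-feature size

model = nn.Sequential(
    nn.Linear(FEATURE_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # single logit: "is this a tiger?"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batches; real training would use actual image features.
tigers = torch.randn(32, FEATURE_DIM)     # positives
zebras = torch.randn(32, FEATURE_DIM)     # hard negatives: striped, four-legged
illusions = torch.randn(32, FEATURE_DIM)  # optical-illusion negatives

features = torch.cat([tigers, zebras, illusions])
labels = torch.cat([
    torch.ones(32, 1),   # tiger -> 1
    torch.zeros(32, 1),  # zebra -> 0
    torch.zeros(32, 1),  # illusion -> 0
])

for step in range(100):
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, labels)  # pushes zebra/illusion scores toward 0
    loss.backward()
    optimizer.step()

# After training, sigmoid(model(x)) is a "tiger-ness" ranking in [0, 1].
# The hard negatives are what teach the model where the fuzzy zone is.
```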
Also wrong. An error for an LLM would be failing to return random text based on the supplied context. You have an error rate as the user applying that random text to your systems.
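To put that concretely, here's a toy sketch (every name and value here is a stand-in, not any real model): the generation loop's whole job is to keep emitting tokens conditioned on the context, and truth never appears anywhere in it.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "tiger", "zebra", "is", "a", "."]

def toy_lm(context):
    # Stand-in for a real model: scores every vocab item given the context.
    # A real LLM computes these scores from learned weights; nothing in the
    # objective checks whether the resulting text is factually correct.
    return {tok: random.random() for tok in VOCAB}

context = ["a", "tiger", "is"]
for _ in range(10):
    scores = toy_lm(context)
    context.append(max(scores, key=scores.get))  # greedy next-token pick

print(" ".join(context))
# By the model's own objective this run "succeeded": it returned text
# conditioned on the context. Whether that text is true gets evaluated
# downstream, by you. That is where the error rate lives.
```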