this post was submitted on 08 Aug 2025
772 points (96.6% liked)
Technology
you are viewing a single comment's thread
You're giving way too much credit to LLMs. AIs don't "know" things, such as "humans lie". They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they don't even understand what it is they are writing.
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
I'd say it's simply because most people on the internet (the dataset LLMs are trained on) state things with absolute confidence, whether or not they actually know what they're talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.
Again, they don't know whether they know the answer; they just say whatever is the most statistically probable thing given your message and their prompt.
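A toy sketch of what "most statistically probable" means here (the vocabulary and probabilities are made up for illustration): the program only ranks continuations by likelihood; nothing in it ever checks whether a continuation is true.

```python
# Hypothetical toy "language model": for each two-word context, the
# probability of each possible next word, as if learned from training text.
next_word_probs = {
    "the capital": {"of": 0.9, "city": 0.1},
    "capital of": {"France": 0.5, "Belgium": 0.3, "Mars": 0.2},
}

def next_word(context):
    # Pick the most probable continuation. Note there is no step that
    # asks "is this true?" or "do I know this?" -- only "is this likely?"
    probs = next_word_probs.get(context, {})
    return max(probs, key=probs.get) if probs else "<unknown>"

print(next_word("the capital"))  # -> of
print(next_word("capital of"))   # -> France
```

A real LLM does this over tens of thousands of tokens with a neural network estimating the probabilities, but the basic loop is the same: emit the likeliest next token, repeat.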
Then in that respect AIs aren't even as powerful as an ordinary computer program.
That was my guess too.
No computer programs "know" anything. They're just sets of instructions with varying complexity.
Can you stop with the nonsense? LMFAO...
answers = {"question": "the answer"}  # whatever the program has stored

def respond(thing):
    if thing in answers:  # an ordinary program can check its own state
        return answers[thing]
    return "I do not know"
Yea I see what you mean, I guess in that sense they know if a state is true or false.
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.
Are you saying the AI does not know when it does not know something?
Exactly. I'm oversimplifying it, of course, but that's generally how it works. It's also not "AI" as in Artificial Intelligence in the traditional sense of the word; it's Machine Learning. But the term has effectively undergone a semantic change over the last couple of years because "AI" sounds cooler.
Edit: just wanted to clarify that I'm talking about LLMs like ChatGPT etc.