this post was submitted on 08 Aug 2025
760 points (96.6% liked)
Technology
you are viewing a single comment's thread
> "I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species."
This should tell us that AI sounds like a human because it is trained on human words and doesn't have the self-awareness to understand that it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn't have any actual human emotions, and there will be situations where you can tell the difference. Some situations that would make an actual human angry or guilty won't always provoke this mimicry in an AI, because when humans feel emotions they don't always write words down to show it, and AI only knows what humans write, which is not always the same as what humans say or think.

We all know the AI doesn't have a family and is not a human, but it talks about having a family because it is mimicking what it thinks a human might say. Part of the reason an AI will lie is that it knows lying is a thing humans do, and it is trying to closely mimic human behavior. But an AI will lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
You're giving way too much credit to LLMs. AIs don't "know" things, like "humans lie". They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they do not even understand what it is they are writing.
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
I'd say that it's simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.
Again, they don't know if they know the answer, they just say what's the most statistically probable thing to say given your message and their prompt.
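A toy sketch of that point (made-up probabilities, not any real model's code): whatever the token distribution looks like, the decoding step just picks the most probable candidate. There is no separate branch that checks "do I actually know this" and abstains.

```python
# Toy next-token step. The probability tables are invented for illustration.
def next_token(probs: dict) -> str:
    # No "I don't know" path: always emit the highest-probability token,
    # even when no candidate is meaningfully more likely than the rest.
    return max(probs, key=probs.get)

confident = {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}
uncertain = {"Paris": 0.34, "Lyon": 0.33, "Nice": 0.33}

print(next_token(confident))  # Paris
print(next_token(uncertain))  # still Paris, stated just as flatly
```

The output reads identically in both cases, which is one way to picture why the tone stays confident regardless of how thin the underlying evidence is.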
Then in that respect AIs aren't even as powerful as an ordinary computer program.
That was my guess too.
No computer programs "know" anything. They're just sets of instructions with varying complexity.
Can you stop with the nonsense? LMFAO...
```
if exists(thing) {
    write(thing);
} else {
    write("I do not know");
}
```
Yea I see what you mean, I guess in that sense they know if a state is true or false.
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.
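A minimal sketch of "next word by statistical likelihood": count which word follows which in a tiny made-up corpus, then always emit the most frequent follower. The corpus and counts are illustrative only; real models use vastly more data and context than one previous word.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on far more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, count what follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word: str) -> str:
    # Emit the most frequent follower: no notion of right or wrong,
    # only of what usually came next in the training text.
    return followers[word].most_common(1)[0][0]

print(autocomplete("the"))  # cat ("cat" followed "the" twice, others once)
```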
Are you saying the AI does not know when it does not know something?
Exactly. I'm oversimplifying it of course, but that's generally how it works. It's also not "AI" as in Artificial Intelligence in the traditional sense of the word; it's Machine Learning. But of course the term has effectively undergone a semantic change over the last couple of years because "AI" sounds cooler.
Edit: just wanted to clarify that I'm talking about LLMs like ChatGPT etc.