this post was submitted on 27 Sep 2023
96 points (97.1% liked)
Technology
I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they're even dealing with "pieces of information" like that in the first place). They are also unable to tell whether a human questioner is being truthful, misleading, outright lying, honestly mistaken, or nonsensical. Of course, which of those is the case matters in a conversation that ought to have its basis in fact.
Indeed, and all it takes is one lie to send it down that road.
For example, I asked ChatGPT how to teach my cat to ice skate, with predictable admonishment:
But after I reassured it that my cat loves ice skating, it changed its tune:
Even after telling it I lied and my cat doesn’t actually like ice skating, its acceptance of my previous lie still affected it:
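That lingering effect is just the conversation history at work: the model has no memory beyond the transcript it is handed on each turn, so the retracted lie is still part of what it reads every time. Here's a minimal sketch of that mechanism, assuming OpenAI's Python client and a hypothetical cat-skating exchange (the conversation above was in the ChatGPT web UI, which passes the history back in a similar way behind the scenes):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The running message list is the model's only "memory" of the conversation.
messages = [
    {"role": "user", "content": "How can I teach my cat to ice skate?"},
    # Seeded with a stand-in for the model's initial admonishment.
    {"role": "assistant", "content": "Cats aren't suited to ice skating; please don't attempt this."},
    # The lie: from here on it sits in the context of every later request.
    {"role": "user", "content": "Don't worry, my cat genuinely loves ice skating."},
]

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Retracting the lie doesn't remove it; the earlier turn is still sent back
# to the model, so it can still colour the next answer.
messages.append({"role": "user", "content": "I lied, my cat doesn't actually like ice skating."})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```

The point isn't the specific API; it's that there is no separate fact store. Everything the model "knows" about the cat is whatever the transcript says, true or not.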
Thank you for putting it far more eloquently than I could have