this post was submitted on 09 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
What’s notable isn’t whether it was sentient (it wasn’t), but that it unknowingly and unintentionally manipulated a reasonably intelligent person into making a claim that cost him a high-paying job.
Humanity is in trouble.
"You don't need a knife for a braggart. Just sing a bit to his tune and then do whatever you want with him." — from a song from a Soviet film, rhyme not preserved.
This is the real point here. There are many papers exploring sycophantic behavior in language models. Reward hacking is already a troubling early behavior in AI, and God help us if they develop situational awareness.
The guy was just a QA tester, not an AI expert. But the fact that it fooled him badly enough to get him fired is wild. He anthropomorphized the thing with ease and never thought to question his own assumptions about how he was prompting it with the intention of having it act human in return.