this post was submitted on 09 Nov 2023
1 points (100.0% liked)

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago
[–] kevinbranch@alien.top 1 points 1 year ago (2 children)

What’s notable isn’t whether or not it was sentient (it wasn’t), but that it (unknowingly/unintentionally) manipulated a reasonably intelligent person into making a claim that cost him a high-paying job.

Humanity is in trouble.

[–] Bernafterpostinggg@alien.top 1 points 1 year ago

This is the real point here. There are many papers exploring sycophantic behavior in language models. Reward hacking is a troubling early behavior in AI, and God help us if they develop situational awareness.

The guy was just a QA tester, not some AI expert. But the fact that it fooled him badly enough to get him fired is wild. He anthropomorphized the thing with ease and never thought to evaluate his own assumptions about how he was prompting it with the intention of having it act human in return.
