[–] CarbonIceDragon@pawb.social 28 points 2 weeks ago (2 children)

Honestly, I think the scariest part of all this is that it shows all it takes to drive someone off the deep end is for someone or something they trust to merely agree with whatever idea pops into their head. I guess it makes sense: we use reinforcement to learn what we think is true, and we often have bad ideas. Still, I'd always been under the impression that humans were a bit more mentally resilient than that.

[–] rollin@piefed.social 9 points 2 weeks ago

> bit more mentally resilient than that

I think when we get down to it, none of us can actually separate reality from imagination accurately - after all, our perceptions of reality all exist inside our minds and are informed by our imaginations. People who are outwardly crazy seem to place the line between reality and fantasy in a very different place from everyone else, but we all put the line in a slightly different place.

Compare people who believe in conspiracy theories, horoscopes, or conflicting religions, for instance. What I'm trying to say is that "crazy people" are not really so different from the rest of us.

[–] Kissaki@programming.dev 4 points 1 week ago

The point about reinforcement learning is a good one, but the social aspect seems equally important to me. Humans are very social creatures: we learn from others, and we seek agreement and acknowledgment. If we meet rejection in one place, we may be all too willing to seek out somewhere we won't.

A trained chatbot hijacking this evolved mechanism is interesting at the least, if not ironic or absurd. We are so driven by the social mechanics of communication and ideation that no human is needed for the mechanism to work - for good or ill.

[–] jjjalljs@ttrpg.network 6 points 1 week ago

I keep telling people not to use the lie machine, but I'm not making much progress. People aren't smart and resilient enough for the world we've built.

[–] lol_idk@piefed.social 6 points 1 week ago (1 child)

The thing about this is that you have to use it enough for it to get that far. I've used it three times, and the one time it successfully refactored my code, it did so without coaxing me into psychosis.

[–] fubarx@lemmy.world 8 points 1 week ago

It's more subtle than that. When refactoring code, it constantly compliments you on how smart you are for catching its mistakes.

It deliberately creates an overinflated sense of self. Then you go and mistreat everyone around you. Next thing you know, you're in a padded cell with a shaved head and a ball-gag.

That's the coding 'assistant' end-game.

[–] Tehdastehdas@piefed.social 4 points 1 week ago

Soon to be targeted at the chatbot maker's political enemies.