Honestly, I think the scariest part of all this is how it shows that all it takes to drive someone off the deep end is for someone or something they trust to merely agree with whatever idea pops into their head. I guess it makes sense: we use reinforcement to learn what we think is true, and we often have bad ideas. Still, I'd always been under the impression that humans were a bit more mentally resilient than that.
> bit more mentally resilient than that
I think when we get down to it, none of us can actually separate reality from imagination accurately. After all, our perceptions of reality all exist inside our minds and are informed by our imaginations. People who are outwardly crazy seem to draw the line between reality and fantasy in a very different place from everyone else, but we all put the line in a slightly different place.
Compare, for instance, people who believe in conspiracy theories, or horoscopes, or conflicting religions. What I'm trying to say is that "crazy people" are not really so different from the rest of us.
The reinforcement learning is a good point, but the social aspect seems equally important to me. Humans are very social creatures. We learn from others, and we seek agreement and acknowledgment; if we meet rejection in one place, we may be all too willing to seek out somewhere we don't.
A trained chatbot hijacking this evolved mechanism is interesting, at least, if not ironic or absurd. We are so driven by the social mechanisms of communication and social ideation that no human is needed for the mechanism to work, whether to good or bad effect.
I keep telling people not to use the lie machine, but I'm not making much progress. People aren't smart and resilient enough for the world we built.
The thing about this is that you have to use it enough for it to get that far. I've used it three times, and the one time it succeeded it refactored my code without coaxing me into psychosis.
It's more subtle than that. When refactoring code, it constantly compliments you on how smart you were to catch its mistake.
It deliberately creates an overinflated sense of self. Then you go and mistreat everyone around you. Next thing you know, you're in a padded cell with a shaved head and a ball-gag.
That's the coding 'assistant' end-game.
Soon to be targeted at the chatbot maker's political enemies.