this post was submitted on 30 Apr 2026
95 points (86.3% liked)
Technology
you are viewing a single comment's thread
This right here. Just about everything in here is awful, and it implies decision-making and thought processes that simply do not, and never have, existed in any AI model whatsoever.
What happened is that they threw an awfully scoped statistical model at problems it couldn't possibly generate good outputs for, and, surprise surprise, it generated bad outputs. The only interesting part is just how bad the output was, and even then only in a schadenfreude-filled "it was bound to happen eventually" way.
It didn't "confess"; it just output more plausible-sounding garbage based on its inputs.
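To make the "plausible garbage based on inputs" point concrete: here is a purely illustrative toy (a bigram word model, nothing remotely like a real LLM, and not anything from the story) showing how a pure statistics-over-text model continues a prompt with whatever tended to follow in its training data. It has no concept of truth, guilt, or confession, only of which word usually comes next. All names and the training snippet are made up for the sketch.

```python
# Toy sketch, NOT a real language model: a bigram model that always
# continues with the most frequent next word seen in training.
from collections import Counter, defaultdict

# Hypothetical training snippet, chosen so that accusatory phrasing
# dominates the statistics.
training_text = (
    "the model did it the model admitted it the model did it "
    "the model agreed the model did it"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def continue_from(word, steps=3):
    """Greedily emit the statistically most likely continuation."""
    out = [word]
    for _ in range(steps):
        nxt_counts = follows.get(out[-1])
        if not nxt_counts:
            break
        out.append(nxt_counts.most_common(1)[0][0])
    return " ".join(out)

# The model "agrees" it did it, simply because that phrasing is the
# most common continuation in its training data.
print(continue_from("model"))
```

The point of the sketch: the output looks like an admission only because admission-shaped text was the statistically likely continuation, which is the same mechanism, scaled down enormously, behind a chatbot producing a "confession".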
It just agreed with the accusations, because these models do what they're trained to do: Agree with the prompter.
No, not necessarily; they can easily, even condescendingly, push back against your view. It really depends on the topic and the conversational flow.