this post was submitted on 27 Feb 2026
79 points (98.8% liked)
Technology
you are viewing a single comment's thread
Because 90% accuracy is acceptable for financial institutions ...
I've got an idea. If 90% of AI's output is accurate, just have humans review the 10% that will be inaccurate.
(Yes I am an AI expert, how did you know)
Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?
That's easy. You just get a second AI to ask the first AI whether its responses were accurate or not.
(/s)
This is unironically what I've seen people try to do, except they assume the second AI is correct.
Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.
That's basically how thinking models work too, isn't it? And probably the new GPT-5 router, which everybody hates...
Not exactly. Thinking models just expand the context window with intermediate reasoning to steer the model closer to your target. GANs have two models that compete against each other, each training the other, with the goal of one (or both) improving over time.
No, it's really not. Thus the 6000 remaining employees.
(Assuming this is a significant part of their business)