this post was submitted on 15 Aug 2023
2 points (100.0% liked)
TechNews
This is the best summary I could come up with:
While humans need extensive training to learn and adapt to new moderation policies, OpenAI argues that large language models could implement them instantly (see the sketch after this summary).
OpenAI also cites the well-being of human moderators, who are continually exposed to harmful content such as videos of child abuse or torture.
Mark Zuckerberg’s vision of a perfect automated system hasn’t quite panned out yet, but Meta uses algorithms to moderate the vast majority of harmful and illegal content.
Both humans and machines make mistakes; even if the error rate is low, millions of harmful posts still slip through, and as many harmless ones are wrongly hidden or deleted.
The gray area of misleading, false, and aggressive content that isn't necessarily illegal poses a particular challenge for automated systems.
Generative AI such as ChatGPT or the company’s image creator, DALL-E, makes it much easier to create misinformation at scale and spread it on social media.
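To make the "implement new policies instantly" point concrete, here is a minimal sketch of the approach the article describes: hand GPT-4 a written moderation policy as a system prompt and ask it to label content against it. The policy text, labels, and prompt below are illustrative assumptions, not OpenAI's actual production setup; the sketch assumes the `openai` Python package (v1-style client) with an `OPENAI_API_KEY` in the environment.

```python
# Hypothetical sketch: classify content against a written moderation policy
# with a chat model. Policy wording and labels are made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label the user content with exactly one of: ALLOW, FLAG, REMOVE.
REMOVE: credible threats, incitement to violence, sexual content involving minors.
FLAG: harassment, graphic violence, likely misinformation.
ALLOW: everything else.
Reply with the label only."""

def moderate(content: str) -> str:
    """Return the policy label the model assigns to `content`."""
    response = client.chat.completions.create(
        model="gpt-4",       # illustrative model choice
        temperature=0,       # keep labeling as deterministic as possible
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("You all deserve what's coming to you."))
```

Under this setup, changing the rules amounts to editing the `POLICY` string rather than retraining moderators or collecting new labeled data, which is the speed advantage OpenAI is claiming.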
I'm a bot and I'm open source!