"Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease," they added. "We term this condition Model Autophagy Disorder (MAD)."

Interestingly, this problem may well get harder as generative AI output makes up a growing share of the content online.

But... isn't unsupervised backfeeding just another form of overtraining on the same dataset? We already know overtraining produces broken models.
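
For comparison, here's what each regime looks like in the same toy setting (my sketch, not from the article; a closed-form Gaussian fit can't overfit, so this only illustrates the difference in data flow, not overfitting itself):

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=50)  # one fixed batch of real data

# Overtraining analogue: refitting on the SAME fixed data any number of
# times returns the same answer; the training distribution never moves.
for _ in range(100):
    mu, sigma = real.mean(), real.std()
print(f"fixed-data refit: mu={mu:+.3f}  sigma={sigma:.3f}")

# Backfeeding analogue: each refit consumes the PREVIOUS fit's samples,
# so estimation error compounds from round to round instead of staying put.
data = real.copy()
for _ in range(100):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=50)
print(f"self-fed refit:   mu={mu:+.3f}  sigma={sigma:.3f}")
```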

Besides, the next AI models will be fed with humans' interactions with AI, not just their own output. ChatGPT already works like this: conversations and user feedback are collected and can be used to train future versions of the model.

And generative image models will be fed AI-assisted images in which humans have already fixed flaws like anatomy (the famous hands) and other glitches.

So, as interesting as this is, as long as humans interact with AI, the hybrid output used for training will contain enough new "input" to keep the models on track. There are already refined image generators, trained on their own human-assisted output, that outperform their predecessors.
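
That intuition matches the paper's fresh-data condition. Continuing the toy Gaussian sketch (the 20% fresh-data share is my arbitrary choice, not a number from the paper or any real pipeline), injecting even a modest fraction of real or human-corrected samples each generation keeps the statistics anchored:

```python
import numpy as np

rng = np.random.default_rng(2)
n, fresh_frac = 500, 0.2  # fresh_frac: assumed share of real/human-fixed data

data = rng.normal(0.0, 1.0, size=n)  # generation 0: real data
for gen in range(1, 51):
    mu, sigma = data.mean(), data.std()  # refit the toy model
    n_fresh = int(n * fresh_frac)
    synthetic = rng.normal(mu, sigma, size=n - n_fresh)  # model's own output
    fresh = rng.normal(0.0, 1.0, size=n_fresh)           # fresh real data
    data = np.concatenate([synthetic, fresh])

print(f"after 50 generations: mu={data.mean():+.3f}  sigma={data.std():.3f}")
# Each generation the mixture pulls mu back toward 0 and the variance back
# toward 1, so the loop stays anchored near the true distribution instead
# of drifting off like the pure self-fed case above.
```

Whether real pipelines actually keep that fraction of genuinely fresh data is, of course, the open question.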