this post was submitted on 25 Jul 2024
152 points (95.8% liked)

It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected about genuine human interactions with systems will become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
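The collapse mechanism can be seen in a toy setting (an illustrative sketch, not code from the paper): repeatedly fit a single Gaussian to samples drawn from the previous generation's fit. Because the maximum-likelihood variance estimate is biased low, the fitted spread decays across generations, i.e. the tails of the original distribution vanish.

```python
import random
import statistics

def collapse_demo(generations: int = 500, n: int = 50, seed: int = 1) -> float:
    """Toy model collapse: fit a Gaussian to samples drawn from the
    previous generation's fit, over and over. The population (MLE)
    variance estimator is biased low, so the fitted spread shrinks and
    the tails of the original distribution disappear."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    for _ in range(generations):
        # Draw n samples from the current model, then refit to them.
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)  # biased (MLE) estimator
    return sigma  # far below the original 1.0 after many generations
```

The same drift happens with any finite-sample refitting loop; the biased estimator just makes the shrinkage easy to see in a few lines.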

[–] TootSweet@lemmy.world 15 points 3 months ago (2 children)

So one potentially viable way to destroy AI would be to repeatedly train LLMs and image generators on their own (or rather, previous generations') output to produce garbage training data, and then publish the resulting text/images in places where bots trawling for training data are likely to find them.

Probably bonus points if the images still look "sensical" to the human eye, so that humans eyeballing the data don't realize it's the digital equivalent of a sabot. (Apparently the story about sabots being thrown into machinery is not true, but you know what I mean.)

[–] ptz@dubvee.org 13 points 3 months ago (1 child)

I already block all the LLM scraper bots via user agent.

I've been toying with the idea of, instead of returning 404 for those requests, returning LLM-generated drivel to poison the well.

[–] amanda@aggregatet.org 2 points 3 months ago

This is a really good idea, actually.

[–] snooggums@midwest.social 8 points 3 months ago (1 children)

train LLMs and image generators on their own (or rather previous generations’)

AIncest!

[–] lemmyng@lemmy.ca 2 points 3 months ago

Deep fried AI.