this post was submitted on 04 Dec 2023
811 points (98.1% liked)
[–] Sibbo@sopuli.xyz 0 points 11 months ago (1 children)

How can the training data be sensitive, if no one ever agreed to give their sensitive data to OpenAI?

[–] TWeaK@lemm.ee 0 points 11 months ago (1 children)

Exactly this. And how can an AI which "doesn't have the source material" in its database be able to recall such information?

[–] Jordan117@lemmy.world 1 points 11 months ago

IIRC, based on the source paper, the "verbatim" text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It's the text equivalent of DALL-E "memorizing" a meme template or a stock image -- it doesn't mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.
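You can see the same effect in a toy model. This is only an illustrative sketch (a tiny bigram word model, nothing like a real LLM, with a made-up corpus): when one phrase is duplicated heavily in the training data, even greedy next-word prediction will spit it back out verbatim, while the unique lines don't dominate any prediction path.

```python
from collections import defaultdict, Counter

# Hypothetical toy corpus: one "boilerplate" phrase repeated many times
# among a few unique lines (stand-ins for ordinary, non-duplicated text).
boilerplate = "all rights reserved no part of this publication may be reproduced"
unique_lines = [
    "the cat sat on the mat",
    "a quick brown fox jumps over the lazy dog",
    "rain falls softly on the quiet town",
]
corpus = unique_lines + [boilerplate] * 50

# "Train" a bigram model: count how often each word follows each word.
counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def generate(seed, length=12):
    """Greedy decoding: always pick the most frequent next word."""
    out = [seed]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no known continuation
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# The heavily duplicated phrase is recalled word-for-word from its first token.
print(generate("all"))  # → "all rights reserved no part of this publication may be reproduced"
```

The duplicated phrase overwhelms the transition counts along its own path, so the model regurgitates it exactly; the model never "stored" the corpus, only statistics that happen to pin down that one sequence.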