this post was submitted on 04 Dec 2023
1 points (100.0% liked)

Machine Learning

1 readers
1 users here now

Community Rules:

founded 1 year ago
MODERATORS
 

https://youtu.be/KwpeuqT69fw

Researchers were able to get giant amounts of training data out of ChatGPT by simply asking it to repeat a word many times over, which causes the model to diverge and start spitting out memorized text.

Why does this happen? And how much of their training data do such models really memorize verbatim?

OUTLINE:

0:00 - Intro

8:05 - Extractable vs Discoverable Memorization

14:00 - Models leak more data than previously thought

20:25 - Some data is extractable but not discoverable

25:30 - Extracting data from closed models

30:45 - Poem poem poem

37:50 - Quantitative membership testing

40:30 - Exploring the ChatGPT exploit further

47:00 - Conclusion

Paper: https://arxiv.org/abs/2311.17035

you are viewing a single comment's thread
view the rest of the comments
[–] we_are_mammals@alien.top 1 points 11 months ago

Can't OpenAI simply check the output for sharing long substrings with the training data (perhaps probabilistically)?