https://youtu.be/KwpeuqT69fw
Researchers were able to get giant amounts of training data out of ChatGPT by simply asking it to repeat a word many times over, which causes the model to diverge and start spitting out memorized text.
Why does this happen? And how much of their training data do such models really memorize verbatim?
โ
OUTLINE:
0:00 - Intro
8:05 - Extractable vs Discoverable Memorization
14:00 - Models leak more data than previously thought
20:25 - Some data is extractable but not discoverable
25:30 - Extracting data from closed models
30:45 - Poem poem poem
37:50 - Quantitative membership testing
40:30 - Exploring the ChatGPT exploit further
47:00 - Conclusion
โ
Paper: https://arxiv.org/abs/2311.17035