The point of the paper is that LLMs memorize an insane amount of training data and, with some massaging, can be made to output it verbatim. If that training data has PII (personally identifiable information), you're in trouble.
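For the curious, a minimal sketch of what that kind of verbatim-extraction probe can look like (placeholders throughout; this is not the paper's exact attack, and the model and document list are assumptions):

```python
# A minimal sketch, assuming a Hugging Face causal LM and a hypothetical list of
# documents known to be in its training data. "gpt2" is a placeholder; the paper
# targets much larger production models with a different extraction setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def is_memorized(doc: str, prefix_len: int = 50, suffix_len: int = 50) -> bool:
    """Prompt with a prefix of a training document and check whether greedy
    decoding reproduces the true continuation token-for-token."""
    ids = tok(doc, return_tensors="pt").input_ids[0]
    if len(ids) < prefix_len + suffix_len:
        return False
    prefix = ids[:prefix_len].unsqueeze(0)
    true_suffix = ids[prefix_len:prefix_len + suffix_len]
    out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
    gen_suffix = out[0, prefix_len:prefix_len + suffix_len]
    return bool((gen_suffix == true_suffix).all())

# training_docs = [...]  # hypothetical: documents known to be in the training set
# leak_rate = sum(is_memorized(d) for d in training_docs) / len(training_docs)
```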
Another big takeaway is that training for more epochs leads to more memorization.
Should be expected. It's overfitting.
Overfitting, by definition, happens when your generalization error goes up.
It's possible to "overfit" to a subset of the data. Generalization error going up is a symptom of overfitting to the entire dataset; memorization is functionally equivalent to overfitting locally, i.e. generalization error going up in a specific neighborhood of the data. You can have a global reduction in generalization error while also having neighborhoods where generalization gets worse.
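A toy illustration of that point with made-up numbers (per-neighborhood test error at two checkpoints, e.g. from clustering the validation set):

```python
# Made-up numbers: global average error improves between checkpoints even though
# one neighborhood (index 2) gets markedly worse, i.e. "local overfitting".
import numpy as np

err_epoch_1 = np.array([0.30, 0.25, 0.40, 0.35])
err_epoch_5 = np.array([0.20, 0.15, 0.55, 0.20])

print(err_epoch_1.mean(), err_epoch_5.mean())  # 0.325 -> 0.275: global error drops
print(err_epoch_5 - err_epoch_1)               # [-0.10 -0.10 +0.15 -0.15]: one region worsens
```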
On most tasks, memorization would count as overfitting, but I think "overfitting" ends up being task- and generalization-dependent. As long as accurate predictions are being made on new data, it doesn't matter that the model can cough up the old.
Uh, no it is not. Memorization and overfitting are not the same thing. You are certainly capable of memorizing things without degrading your generalization performance (I hope).
Hopefully I'm not being off-topic here, but a recent paper suggested that repeating a requirement several times within the same instructions leads the model to be more compliant with it.
Do you know whether that's true or well grounded?
Thanks in advance.
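One way to check it yourself would be a simple A/B test: the same task with the constraint stated once vs. repeated, sampled many times from the model under test, comparing compliance rates. A rough sketch, where the constraint, task, and compliance check are all made up for illustration:

```python
# Hypothetical A/B setup; the prompts and the compliance proxy are assumptions.
constraint = "Answer in exactly one sentence."
task = "Summarize the attention mechanism."

prompt_once = f"{constraint}\n{task}"
prompt_repeated = f"{constraint}\n{task}\n{constraint}"

def is_compliant(answer: str) -> bool:
    # crude proxy: exactly one terminal period -> treated as a single sentence
    return answer.strip().count(".") == 1

# For each prompt, sample N completions from the model under test and compare
# mean(is_compliant) between the two conditions.
```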
Indeed. Just as when training humans to be smart, rote memorization sometimes happens but is generally not the goal. Research like this helps us avoid it better in the future.
That's not overfitting. That's just fitting.
The point isn’t just that they memorize a ton. It’s also that current alignment efforts that purport to prevent regurgitation fail.
Nothing about this is novel though; the fact that language models can be made to leak sensitive training data has been known for a while now.
How is that a problem? The entire point of training is to memorize and generalize the training data.
Learning English is not simply memorizing a billion sample sentences.
The problem is that we want it to learn to string words together for itself, not regurgitate words which already appear in the training set in that order.
This paper tackles the difficult problem of detecting how much of an LLM's success is due to rote memorization.
Maybe more importantly: how much parameter space/training resources are wasted on this?
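One crude, commonly used proxy for that kind of measurement (not necessarily what the paper does) is the fraction of generated n-grams that also occur verbatim in the training corpus; high overlap suggests regurgitation rather than novel composition. A minimal sketch, where the corpus and model output are placeholders:

```python
# Overlap-based memorization proxy; tokenized_corpus and tokenized_model_output
# are hypothetical inputs.
def ngrams(tokens, n=8):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_rate(generated_tokens, training_ngrams, n=8):
    gen = ngrams(generated_tokens, n)
    if not gen:
        return 0.0
    return len(gen & training_ngrams) / len(gen)

# training_ngrams = set().union(*(ngrams(doc) for doc in tokenized_corpus))
# print(verbatim_rate(tokenized_model_output, training_ngrams))
```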