this post was submitted on 29 Nov 2023
1 points (100.0% liked)

Machine Learning

1 readers
1 users here now

Community Rules:

founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Zondartul@alien.top 1 points 11 months ago (12 children)

The point of the paper is that LLMs memorize an insane amount of training data and, with some massaging, can be made to output it verbatim. If that training data has PII (personally identifiable information), you're in trouble.

Another big takeaway is that training for more epochs leads to more memorization.

[–] HateRedditCantQuitit@alien.top 1 points 11 months ago

The point isn’t just that they memorize a ton. It’s also that current alignment efforts that purport to prevent regurgitation fail.

load more comments (11 replies)