What? Are you familiar with the field of statistical learning? Formal frameworks for proving generalization have existed for decades at this point. So when you look at anything pre-Deep Learning, you can definitely show that many mainstream ML models do more than just "mimic statistical aspects of the training set". Or, if you want to go on some weird philosophical tangent, you can equivalently say that "mimicking statistical aspects of the training set" is enough to learn distributions, provided you use the right amount of data and the right model.
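As a concrete toy version of that tangent (a minimal sketch; the Gaussian and its parameters here are invented for illustration): the empirical mean and standard deviation are nothing but "statistical aspects of the training set", yet with enough data and the right model class they recover the distribution itself.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu, true_sigma = 2.0, 1.5  # hypothetical ground-truth distribution

# "Mimicking statistical aspects" of the sample: with enough data and the
# right model class (here, a Gaussian), the empirical estimates converge
# to the parameters of the underlying distribution.
for n in [10, 1_000, 100_000]:
    sample = rng.normal(true_mu, true_sigma, size=n)
    print(f"n={n:>7}: mu_hat={sample.mean():.3f}, sigma_hat={sample.std():.3f}")
```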
And even for DL, which at the moment lacks a satisfying theoretical framework for generalization, it's obvious that empirically models can generalize.
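A minimal empirical check of that claim, assuming scikit-learn (a small MLP on a toy dataset, standing in for "DL" only in spirit): held-out accuracy far above chance on fresh samples from the same distribution is exactly what "models can generalize" means operationally.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Held-out data from the same distribution: the standard empirical test
# for generalization.
X, y = make_moons(n_samples=2_000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1_000, random_state=0)
model.fit(X_train, y_train)

# Accuracy on unseen samples well above chance = empirical generalization.
print("train acc:", model.score(X_train, y_train))
print("test  acc:", model.score(X_test, y_test))
```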
Statistical learning theory also says there is always some adversarial distribution on which the model will fail to generalize... (no free lunch). And isn't generalization about extrapolation beyond the training distribution? If so, learning the training distribution itself is not generalization.
The no-free-lunch theorem in machine learning refers to the case in which the hypothesis class contains all possible classifiers on your domain (and your training set is either too small, or the domain set is infinite); learning then becomes impossible to guarantee, i.e., you have no useful bounds on generalization. When you restrict your class to something like linear classifiers, for example, you can reason about things like generalization and so on. For finite domain sets, you can even reason about the class of all hypotheses, but that's not very useful in practice.
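To put a number on "useful bounds on generalization": for a finite hypothesis class $\mathcal{H}$, the standard textbook uniform-convergence bound (stated from memory, so treat it as a sketch) says that an i.i.d. sample $S$ of size $m$ gives, with probability at least $1-\delta$, for every $h \in \mathcal{H}$:

$$L_{\mathcal{D}}(h) \;\le\; L_S(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}}$$

For the class of all classifiers over an infinite domain, $|\mathcal{H}|$ (and the VC dimension) is infinite and the right-hand side is vacuous, which is exactly the no-free-lunch regime; restricting to, say, linear classifiers in $\mathbb{R}^d$ gives VC dimension $d+1$ and a non-trivial bound.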
I'm not sure about your point about the training distribution. In general, you are interested in generalization on your training distribution, as that's where your train/test/validation data is sampled from. Note that overfitting your training set is not the same thing as learning your training distribution. You can think about stuff like domain adaptation, where you reason about your performance on "similar" distributions and how you might improve on that, but that's already something very different.
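A minimal sketch of that distinction, assuming scikit-learn (the class layout and shift size are made up): the same trained model generalizes fine to fresh samples from its training distribution, but falls apart under a shift, which is the separate domain-adaptation problem, not a failure of ordinary generalization.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes in 2D; `shift` translates the whole distribution.
    X = np.concatenate([rng.normal(-1 + shift, 1, (n, 2)),
                        rng.normal(+1 + shift, 1, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

X_train, y_train = sample(1_000)
model = LogisticRegression().fit(X_train, y_train)

# Same distribution: this is what the standard notion of generalization covers.
print("in-distribution acc:", model.score(*sample(1_000)))
# Shifted distribution: a different problem (domain adaptation), not covered
# by the usual train/test guarantees; accuracy drops toward chance.
print("shifted acc:        ", model.score(*sample(1_000, shift=3.0)))
```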