[–] dragosconst@alien.top 1 points 11 months ago (1 children)

no ML technique has been shown to do anything more than just mimic statistical aspects of the training set

What? Are you familiar with the field of statistical learning? Formal frameworks for proving generalization have existed for some decades at this point. So when you look at anything pre-Deep Learning, you can definitely show that many mainstream ML models do more than just "mimic statistical aspects of the training set". Or, if you want to go on some weird philosophical tangent, you can equivalently say that "mimicking statistical aspects of the training set" is enough to learn distributions, provided you use the right amount of data and the right model.
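
(For a concrete sense of what those formal guarantees look like, here is the textbook uniform-convergence bound for a finite hypothesis class, sketched from standard statistical learning theory rather than quoted from anyone in this thread: with m i.i.d. samples, 0-1 loss, and confidence 1 − δ,

```latex
% Finite-class uniform convergence (Hoeffding's inequality + a union bound).
% L_D(h) = true risk under the data distribution D, L_S(h) = empirical risk on
% the training sample S of size m, H = the (finite) hypothesis class.
\[
  \forall h \in \mathcal{H}:\qquad
  L_{\mathcal{D}}(h) \;\le\; L_{S}(h)
    + \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(2/\delta)}{2m}}
\]
```

The gap term shrinks as the sample grows and only becomes vacuous when the class is left unrestricted, which is the point picked up again further down the thread.)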

And even for DL, which at the moment lacks a satisfying theoretical framework for generalization, it's empirically obvious that models can generalize.

[–] On_Mt_Vesuvius@alien.top 1 points 11 months ago (1 children)

From statistical learning theory, there is always some adversarial distribution where the model will fail to generalize... (no free lunch). And isn't generalization about extrapolation beyond the training distribution? So learning the training distribution itself is not generalization.

[–] dragosconst@alien.top 1 points 11 months ago

The no-free-lunch theorem in machine learning refers to the case in which the hypothesis class contains all possible classifiers over your domain (and your training set is either too small or the domain set is infinite); in that setting learning becomes impossible to guarantee, i.e. you have no useful bounds on generalization. When you restrict your class to something like linear classifiers, for example, you can reason about things like generalization and so on. For finite domain sets you can even reason about the class containing every hypothesis, but that's not very useful in practice.
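
A small empirical sketch of that contrast (the dataset, models, and numbers below are my own illustration, not something claimed in the comment): a restricted class such as linear classifiers generalizes from a modest sample, while a memorize-everything predictor, which can realize any labeling of the training set, gets perfect training accuracy and roughly chance-level test accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Restricted hypothesis class: linear classifiers (VC dimension d + 1),
# where uniform convergence gives non-vacuous generalization bounds.
linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stand-in for the unrestricted class: a lookup table that memorizes the
# training set and falls back to class 0 on unseen inputs.
table = {tuple(x): int(label) for x, label in zip(X_tr, y_tr)}

def lookup_predict(X):
    return np.array([table.get(tuple(x), 0) for x in X])

print("linear train/test acc:", linear.score(X_tr, y_tr), linear.score(X_te, y_te))
print("lookup train/test acc:",
      (lookup_predict(X_tr) == y_tr).mean(),
      (lookup_predict(X_te) == y_te).mean())
```

The lookup table fits the sample perfectly and tells you nothing about new points, which is exactly the regime where no-free-lunch-style arguments bite.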

I'm not sure I follow your point about the training distribution. In general, you are interested in generalization on your training distribution, since that's where your train/test/validation data is sampled from. Note that overfitting your training set (memorizing the sample) is not the same thing as learning your training distribution (doing well on fresh samples from it). You can think about stuff like domain adaptation, where you reason about your performance on "similar" distributions and how you might improve it, but that's already something very different.
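
To make that last distinction concrete (a made-up Gaussian example, with all names and numbers assumed for illustration): a model that has learned its training distribution scores well on fresh samples from that same distribution, while evaluating it on a shifted distribution is a different question, the one domain adaptation is about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes with per-feature means -1 and +1; `shift` moves
    # both class means at evaluation time (a simple distribution shift).
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=(2 * y - 1)[:, None] + shift, scale=1.0, size=(n, 2))
    return X, y

X_tr, y_tr = sample(2000)                    # training distribution
X_same, y_same = sample(2000)                # fresh draw, same distribution
X_shift, y_shift = sample(2000, shift=1.5)   # shifted distribution

clf = LogisticRegression().fit(X_tr, y_tr)
print("same-distribution acc:   ", clf.score(X_same, y_same))
print("shifted-distribution acc:", clf.score(X_shift, y_shift))
```

Both evaluations use held-out samples; only the second changes the underlying distribution, which is why it falls under domain adaptation rather than ordinary generalization.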