this post was submitted on 10 Nov 2023

Machine Learning

I came up with this thought experiment today because I'm trying to get at the heart of how to approximate a function. TL;DR: if you know the foundational principles of that, that's really my whole question.

I thought: OK, you are given a deterministic dataset and asked to model it perfectly. Perfectly means you extract every last ounce of information out of it: you can predict the dataset with 100% accuracy, and since any new observations you're given to predict are more of the same, you should be able to predict those too.

You are given a magic computer to build this model with. It's infinitely fast and has infinite memory, so you have no constraints and no limitations. You can do anything, but you must actually do it: you must write a way to build a perfect model. You can brute-force it, but it has to learn the perfect model.

What do you do? What does the simplest algorithm to perfectly model the data look like?
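One degenerate answer, as a hedged sketch: on a purely deterministic dataset with unlimited memory, the simplest "perfect" model is a lookup table that memorizes every pair. It hits 100% accuracy on the data you've seen but carries no information about unseen inputs, which is exactly where the interesting part of the question starts. The function names and toy data below are my own illustration, not anything from the thread.

```python
# Illustrative sketch: memorization as the trivial "perfect" model
# for a deterministic dataset. Names and data are hypothetical.

def fit_lookup(data):
    """data: iterable of (x, y) pairs with deterministic y = h(x)."""
    return dict(data)

def predict(table, x):
    # For an unseen x the table has no information, so we fail loudly
    # rather than guess.
    if x not in table:
        raise KeyError(f"no information about input {x!r}")
    return table[x]

train = [(0, 0), (1, 1), (2, 4), (3, 9)]  # toy data from h(x) = x**2
model = fit_lookup(train)
assert all(predict(model, x) == y for x, y in train)  # perfect on seen data
```

The brute-force requirement in the question is what rules this out as a full answer: memorization "learns" nothing that transfers to new observations.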

[–] FrostyFix4614@alien.top 1 points 1 year ago

Among equally performing models, the simplest one is the best.

If you want more theory, look at statistical learning, e.g. "Understanding Machine Learning: From Theory to Algorithms" by Shai Shalev-Shwartz and Shai Ben-David. There the idea is that we have data {(x_1, y_1), ..., (x_n, y_n)}, where y_i is given by h(x_i), and we don't know h, so we want to approximate it using the data. The approximation is selected from a family of functions (a hypothesis class) H using a learning algorithm (typically ERM, empirical risk minimization).
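The ERM setup above can be sketched in a few lines: pick the hypothesis in H with the lowest empirical (0-1) loss on the data. The threshold-classifier class and the toy labels below are my own illustrative choices, not from the book.

```python
# Minimal ERM (empirical risk minimization) sketch over a finite
# hypothesis class H. Class and data are hypothetical illustrations.

def erm(H, data):
    """Return the hypothesis in H with the lowest empirical 0-1 risk."""
    def risk(h):
        return sum(h(x) != y for x, y in data) / len(data)
    return min(H, key=risk)

# Hypothesis class: threshold classifiers h_t(x) = [x >= t] on the line.
H = [lambda x, t=t: int(x >= t) for t in range(-5, 6)]

data = [(-3, 0), (-1, 0), (1, 1), (4, 1)]  # labeled by a threshold near 0
best = erm(H, data)
assert all(best(x) == y for x, y in data)  # a zero-risk hypothesis exists in H
```

With infinite compute this exhaustive minimization is exactly the "brute force" the original question allows; the statistical-learning question is which class H makes its output generalize.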

Given infinite data, perhaps the best hypothesis class is the one with the smallest VC dimension that still contains the true function h. Then you can estimate h pretty much perfectly.

Given finite data, the best hypothesis class is perhaps the one whose complexity is just right for the amount of data you have.