this post was submitted on 10 Nov 2023
Machine Learning

I came up with this thought experiment today because I'm trying to get at the heart of how to approximate a function. TL;DR: if you know the foundational principles of function approximation, that's really my whole question.

I thought: OK, you are given a deterministic dataset and asked to model it perfectly. Perfectly means you extract every last ounce of information out of it: you can predict the dataset with 100% accuracy, and since the new observations you'll be asked to predict are more of the same, you should be able to predict those too.

You are given a magic computer to build this model with. It's infinitely fast and has infinite memory, so you have no constraints and no limitations. You can do anything, but you must actually do it: you must write a procedure that builds a perfect model. Brute force is allowed, but the procedure has to learn the perfect model.

What do you do? What does the simplest algorithm to perfectly model the data look like?

top 8 comments
[–] Phy96@alien.top 1 points 1 year ago

In simple terms, by your definition the perfect model is a database that stores your dataset and lets you query the individual values you are interested in.
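A minimal sketch of that idea in Python, assuming a toy dict-backed model (the class and method names here are mine for illustration, not any standard API):

```python
# "Perfect model as a database": memorize every (input, output) pair
# and answer queries by lookup.

class MemorizingModel:
    def __init__(self):
        self.table = {}

    def fit(self, xs, ys):
        # Store every observation; with infinite memory this never fills up.
        for x, y in zip(xs, ys):
            self.table[x] = y  # deterministic data: exactly one y per x

    def predict(self, x):
        # 100% accurate on anything seen before; raises KeyError otherwise.
        return self.table[x]

model = MemorizingModel()
model.fit([1, 2, 3], ["a", "b", "c"])
assert model.predict(2) == "b"
```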

In statistical classification this is called a Bayes classifier, and it can exist under your assumptions of infinite memory and infinite compute.

However, getting "100% accuracy" still depends on your starting dataset and/or on how you define accuracy.

[–] currentscurrents@alien.top 1 points 1 year ago

All the real datasets we care about are "special" in that they are the output of complex systems. We don't actually want to model the data; we want to model the underlying system.

Many of these systems are computationally as complex as programs, and so can only be perfectly modeled by another program. This means that modeling can be viewed as analyzing the output of a program in order to construct another program that emulates it.

Given infinite compute, I would brute force search the space of all programs, and find the shortest one that matches the original system for all inputs and outputs. Lacking infinite compute, I would use an optimization algorithm like gradient descent to find an approximate solution.
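Here is a loose sketch of that brute-force search in Python. The "programs" are tiny arithmetic expressions over a made-up alphabet rather than a real Turing-complete language, so this is only a cartoon of the (in general uncomputable) search:

```python
import itertools

ALPHABET = "x0123456789+-*() "  # toy program alphabet (an assumption)

def consistent(src, dataset):
    # Does this candidate program reproduce every (input, output) pair?
    try:
        return all(eval(src, {"x": x}) == y for x, y in dataset)
    except Exception:
        return False  # syntax errors, NameErrors, etc. just disqualify it

def shortest_program(dataset, max_len=4):
    # Enumerate candidates shortest-first, so the first match is minimal
    # (this is where the link to Kolmogorov complexity comes in).
    for length in range(1, max_len + 1):
        for chars in itertools.product(ALPHABET, repeat=length):
            src = "".join(chars)
            if consistent(src, dataset):
                return src
    return None

# Recovers "x*x" from nothing but input/output behavior.
print(shortest_program([(0, 0), (1, 1), (2, 4), (3, 9)]))
```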

You can see the link to Kolmogorov Complexity here, and why modeling is said to be equivalent to compression.

[–] FrostyFix4614@alien.top 1 points 1 year ago

Among equally performing models, the simplest one is the best.

If you want more theory, look at statistical learning, e.g. "Understanding Machine Learning" by Shalev-Shwartz and Ben-David. There the idea is that we have data {(x_1, y_1), ..., (x_n, y_n)}, where y_i is given by h(x_i) for some unknown h, and we want to approximate h using the data. The approximation is selected from a family of functions (a hypothesis class) H using a learning algorithm, typically ERM (empirical risk minimization).
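A minimal ERM sketch in Python, with a tiny hand-picked hypothesis class (the three hypotheses are my own illustration, not from the book):

```python
data = [(0, 0), (1, 1), (2, 4), (3, 9)]  # (x_i, y_i) pairs with y_i = h(x_i)

H = {
    "identity": lambda x: x,
    "square":   lambda x: x * x,
    "zero":     lambda x: 0,
}

def empirical_risk(h):
    # Fraction of training points h gets wrong (0-1 loss).
    return sum(h(x) != y for x, y in data) / len(data)

# ERM: return the hypothesis in H with the lowest empirical risk.
best = min(H, key=lambda name: empirical_risk(H[name]))
print(best, empirical_risk(H[best]))  # -> square 0.0
```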

Given infinite data, perhaps the best hypothesis class is the one that contains the true function h and has the smallest VC dimension; then you can estimate h essentially perfectly.
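For context, the realizable-case sample-complexity bound from that same book (stated from memory, so treat the constant as indicative) ties the data requirement to the VC dimension d of H:

```
m(\epsilon, \delta) \le C \, \frac{d \log(1/\epsilon) + \log(1/\delta)}{\epsilon}
```

samples suffice to reach error ε with probability at least 1 − δ, which is why a smaller VC dimension directly means less data needed.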

Given finite data, the best hypothesis class is perhaps the one whose complexity is just right for the amount of data you have.

[–] Infamous-Bank-7739@alien.top 1 points 1 year ago

The perfect model is one that knows the distribution of the test set. Can't have that? Well, you can always approximate it based on your observations...

[–] AlgorithmSimp@alien.top 1 points 1 year ago

You are looking for Solomonoff induction, the mathematical description of a perfect learning algorithm.

The TL;DR is that you do Bayesian inference over the set of all programs, with prior probability 2^(-K(p)), where K(p) is the length of program p. You can prove that this method's expected generalization error is at least as low as that of any computable predictor.
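A toy, heavily simplified sketch of that in Python: Bayesian inference with a 2^(-length) prior over a hand-listed set of "programs". Real Solomonoff induction enumerates all programs and is uncomputable; the functions and bit lengths below are invented for illustration:

```python
data = [(1, 1), (2, 4), (3, 9)]  # observed (input, output) pairs

programs = [                      # (name, predictor, source length in bits)
    ("square",   lambda x: x * x, 5),
    ("double",   lambda x: 2 * x, 5),
    ("identity", lambda x: x,     3),
]

# Prior weight 2^(-K(p)); the 0/1 likelihood keeps a program only if it
# reproduces every observation exactly.
posterior = {
    name: 2.0 ** -length
    for name, f, length in programs
    if all(f(x) == y for x, y in data)
}
total = sum(posterior.values())
for name, w in posterior.items():
    print(name, w / total)        # only "square" survives this data
```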

[–] Paluth@alien.top 1 points 1 year ago

If one had available a computer with infinite memory and instant compute, there would be no need to hand-design a perfect model. Let's call this theoretical computer the PC (Perfect Computer). With a PC available, the developer's goal changes from creating a perfect model to creating a model that can create, train, and test other models for the problem. This model would then create infinitely many models, train and test them all, select the best one, and replace itself with that best one. The new model then repeats the process, again and again, to infinity. Since our PC can compute all of this instantly, we would instantly have a perfect model: the final winner of the contest between the infinite designs created by the previous generation's winner.
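A loose sketch of that loop in Python. Every ingredient here (the candidate generator, the scoring function, the generation count) is a made-up stand-in, and the "winner designs the next generation" step is flattened into plain random search, since the comment leaves that mechanism unspecified:

```python
import random

data = [(x, x * x) for x in range(10)]

def random_candidate():
    # Stand-in for "create other models": random quadratics a*x^2 + b*x + c.
    a, b, c = (random.randint(-2, 2) for _ in range(3))
    return lambda x: a * x * x + b * x + c

def score(model):
    return sum(model(x) == y for x, y in data)  # exact matches; higher is better

best = random_candidate()
for generation in range(10_000):         # "to infinity" on a finite machine
    challenger = random_candidate()
    if score(challenger) > score(best):  # the winner replaces its predecessor
        best = challenger

print(score(best), "of", len(data), "points matched")
```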