I came up with this thought experiment today because I'm trying to get at the heart of how to approximate a function. TL;DR: the foundational principles of function approximation are really my whole question.

I thought: ok, you are given a deterministic dataset and asked to model it perfectly. Perfectly means you extract every last ounce of information out of it: you can predict the dataset with 100% accuracy, and since any new observations you're asked to predict are more of the same, you should be able to predict those too.

You are given a magic computer to build this model with. It's infinitely fast and has infinite memory, so you have no constraints and no limitations. You can do anything, but you must actually do it: you must write a procedure that builds a perfect model. You can brute-force it, but it has to learn the perfect model.

What do you do? What does the simplest algorithm to perfectly model the data look like?

currentscurrents@alien.top:

All the real datasets we care about are "special" in that they are the output of complex systems. We don't actually want to model the data; we want to model the underlying system.

Many of these systems are as computationally complex as programs themselves, and so can only be perfectly modeled by another program. This means that modeling can be viewed as the process of analyzing the output of a program to create another program that emulates it.

Given infinite compute, I would brute-force search the space of all programs and find the shortest one that matches the original system on all inputs and outputs. Lacking infinite compute, I would use an optimization algorithm like gradient descent to find an approximate solution.
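
For concreteness, here's a minimal sketch of that brute-force search in Python. Everything in it is an assumption made for illustration: the "programs" are arithmetic expressions over a single integer input x rather than a universal language, the hidden system f(x) = x*x + 1 is made up, and a real Kolmogorov-style search would also need a per-program step limit to sidestep non-halting programs.

```python
# Toy "programming language": arithmetic expressions over one integer
# input x. A real search would enumerate a universal language instead.

def programs_of_size(n):
    """Yield every expression with exactly n binary operators."""
    if n == 0:
        yield from ("x", "0", "1")
        return
    for op in ("+", "*", "-"):
        for k in range(n):  # split the remaining operators between subtrees
            for left in programs_of_size(k):
                for right in programs_of_size(n - 1 - k):
                    yield f"({left} {op} {right})"

def run(program, x):
    # Safe here because the expressions contain only x, digits, and + * -
    return eval(program, {"__builtins__": {}}, {"x": x})

def shortest_program(dataset, max_size=4):
    """Return the smallest program reproducing every (input, output) pair."""
    for size in range(max_size + 1):  # shortest first, so the first hit is minimal
        for prog in programs_of_size(size):
            if all(run(prog, x) == y for x, y in dataset):
                return prog
    return None

# The deterministic system we pretend not to know: f(x) = x**2 + 1
data = [(x, x * x + 1) for x in range(-5, 6)]
print(shortest_program(data))  # prints "(1 + (x * x))"
```

Enumerating shortest-first is the important part: the first program that fits is automatically the minimal one.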

You can see the link to Kolmogorov Complexity here, and why modeling is said to be equivalent to compression.
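
One way to make that equivalence concrete: under an ideal entropy coder, a model that assigns probability p to the data can encode it in about -log2(p) bits, so a model closer to the true system yields a shorter code. The alternating-bit dataset and the two hand-written models below are hypothetical, just to show the bookkeeping.

```python
import math

data = "0101010101010101"  # output of a trivially deterministic system

def code_length_bits(model, data):
    """Ideal code length: sum of -log2 p(symbol | context) over the data."""
    return sum(-math.log2(model(data[:i], c)) for i, c in enumerate(data))

def uniform_model(context, symbol):
    return 0.5  # knows nothing about the system: costs 1 bit per symbol

def alternating_model(context, symbol, eps=0.01):
    # Believes symbols alternate; eps hedges against a surprise symbol.
    if not context:
        return 0.5
    predicted = "1" if context[-1] == "0" else "0"
    return 1 - eps if symbol == predicted else eps

print(code_length_bits(uniform_model, data))      # 16.0 bits
print(code_length_bits(alternating_model, data))  # ~1.2 bits
```

The better model of the underlying system is exactly the one that compresses its output further.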