this post was submitted on 12 Nov 2023

Machine Learning


LLMs are very powerful, and many consider them to be quite intelligent. However, they are trained on mountains of human-generated data from the internet, which comes with problems of bias, quality, and availability. LLMs are therefore limited and biased by the human data available to them. We are essentially sidestepping the hard problem of constructing intelligence by bootstrapping an intelligent agent from the traces of a group of intelligent agents (humans, past and present). This is loosely analogous to writing a compiler for a new programming language in an existing language: the new language might look more impressive, but it relies on the capability of the existing language to function. The compiler for the first programming language did not have this luxury and had to be written directly in machine code. Machine code is of course itself an abstraction over the hardware, but my point is that we should explore a lower-level way to construct intelligence that relies on as little human-generated data as possible.

Feel free to answer this general question as posed, but I'm going to narrow it down significantly to one particular formulation of intelligence. There are many definitions of intelligence, so I'll start by putting forth the definition this question uses: intelligence is the ability to predict the outputs of a deterministic, time-bounded Turing machine given a set of demonstrations. More concretely, the tape is initialized with an input bit array of length s (and zeros everywhere else), the read/write head starts at the leftmost cell of the input array, and the output consists of exactly the bits occupying those same s cells when the machine halts. The machine's execution is bounded to m steps, so it always halts and produces an output. My question is: can we construct an intelligent agent that can predict the outputs of these time-bounded Turing machines?
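
To make the setup concrete, here is a minimal Python sketch of such a time-bounded machine. The dict-based transition encoding and the convention that a missing rule means "halt" are my own illustrative choices, not part of the formulation above:

```python
from typing import Dict, List, Tuple

# A transition maps (state, read_symbol) to (next_state, write_symbol, move),
# where move is -1 (left) or +1 (right).
Transitions = Dict[Tuple[int, int], Tuple[int, int, int]]

def run_bounded_tm(transitions: Transitions, x: List[int], m: int) -> List[int]:
    """Run a deterministic TM for at most m steps on input bits x and
    return the contents of the s input cells when it stops."""
    s = len(x)
    tape: Dict[int, int] = dict(enumerate(x))  # sparse tape, zero elsewhere
    state, head = 0, 0  # start in state 0 at the leftmost input cell
    for _ in range(m):
        key = (state, tape.get(head, 0))
        if key not in transitions:  # no applicable rule: treat as a halt
            break
        state, write, move = transitions[key]
        tape[head] = write
        head += move
    return [tape.get(i, 0) for i in range(s)]
```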

Obviously, without any opportunity to observe and learn from the Turing machine's behavior, our agent has no clue what that behavior is, and the best strategy is to guess randomly. However, we are going to give the agent some hints in the form of input-output pairs from the machine it is trying to imitate. The agent gets to observe k such pairs and is then asked to generate outputs for a set of q random inputs it has never observed. We judge the agent's intelligence by the mean squared error between its outputs and the expected outputs. Under this definition, the least intelligent agent is the one that outputs exactly the opposite bits of the machine it is trying to imitate. One could logically argue that such an agent is just as intelligent as one that perfectly predicts the machine's outputs, but we are going to reject that argument under this definition of intelligence.
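
The evaluation protocol could then look something like the sketch below; the function names and the callable interface for the agent are hypothetical framing on my part. Note that under per-bit MSE, a random-guessing agent scores 0.5 in expectation and the "exact opposite" agent scores 1.0:

```python
import random

def evaluate(agent, demos, queries, targets):
    """Score an agent: demos is a list of k (input_bits, output_bits) pairs,
    queries and targets are q bit arrays each. Returns per-bit MSE, so
    0.0 is perfect imitation, 0.5 is chance level, 1.0 is "exact opposite"."""
    preds = agent(demos, queries)
    total = sum((p - t) ** 2
                for pred, targ in zip(preds, targets)
                for p, t in zip(pred, targ))
    return total / sum(len(t) for t in targets)

def random_agent(demos, queries):
    # Baseline: ignore the demonstrations and guess every bit uniformly.
    return [[random.randint(0, 1) for _ in q] for q in queries]
```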

Since the agent only sees a very limited subset of the machine's behavior, we don't expect it to imitate the machine perfectly. However, the agent should be able to outperform random guessing, thanks to the extra bits of information in the input-output pairs and the fact that the machine is time-bounded to at most m steps. More formally, the agent is asked to predict the characteristic behavior of a class of observationally equivalent Turing machines: machines that behave identically on the same k inputs. By characteristic behavior, we mean the most likely outputs for this class of machines on the set of q new inputs.
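
As a brute-force illustration (feasible only for toy machine pools), one could enumerate some assumed-given pool of m-bounded machines, keep those that are observationally equivalent on the k demonstrations, and take a bitwise majority vote on each new query, reusing `run_bounded_tm` from the earlier sketch. The pool, the zero fallback, and the tie-breaking rule are all assumptions for illustration:

```python
def characteristic_outputs(pool, demos, queries, m):
    """Majority-vote prediction over all machines in `pool` that agree
    with the k demonstration pairs (the observationally equivalent class)."""
    consistent = [tm for tm in pool
                  if all(run_bounded_tm(tm, x, m) == y for x, y in demos)]
    if not consistent:  # nothing in the pool matches the demonstrations
        return [[0] * len(q) for q in queries]
    outputs = []
    for q in queries:
        runs = [run_bounded_tm(tm, q, m) for tm in consistent]
        # per-position majority bit across the class (ties go to 1)
        outputs.append([int(2 * sum(col) >= len(runs)) for col in zip(*runs)])
    return outputs
```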

Concretely, we can specify that the agent is itself a deterministic Turing machine, time-bounded to n steps, that takes (k*2+q)*s input bits and produces q*s output bits, where the input bits consist of the k input-output pairs followed by the q query inputs. We can now rephrase the original question as: is there a way to construct an n-bounded Turing machine that minimizes the average mean squared error between its predictions and the expected outputs of any m-bounded Turing machine?
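
For instance, with k = 2 demonstrations, q = 3 queries, and s = 4, the agent's input tape holds (2*2+3)*4 = 28 bits. A hypothetical packing helper (my naming) makes the layout explicit:

```python
def pack(demos, queries, s):
    """Flatten k demo pairs and q query inputs into the agent's input tape."""
    bits = []
    for x, y in demos:   # k input-output pairs: 2*s bits each
        bits += x + y
    for x in queries:    # q query inputs: s bits each
        bits += x
    assert len(bits) == (2 * len(demos) + len(queries)) * s
    return bits
```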

There are many possible variations of this formulation, such as allowing the machines to be nondeterministic, limiting the tape available to them, or limiting the number of states, but I believe the principle stays the same. I found some research on Neural Turing Machines and Differentiable Neural Computers. However, recent research seems to be dominated by LLMs, so I'm wondering whether there are other alternatives for constructing intelligent agents.

top 5 comments
[–] yannbouteiller@alien.top 1 points 1 year ago (1 children)

I am not sure I fully understand what your definition of "intelligence" really encompasses, but as far as I understand it, it sounds like a definition of supervised learning rather than of "intelligence"?

Where does naturally arising intelligence, e.g., from random genetic mutations or from reinforcement learning, stand here?

Natural selection via random adaptation to the current state of the universe is, I think, an example of intelligence constructed without human-generated data, but it doesn't seem to fit what you call "intelligence" here, since it is not trying to imitate anything.

[–] slashdave@alien.top 1 points 1 year ago

Natural selection selects the model. Training still happens during the lifetime of the individual.

[–] limpbizkit4prez@alien.top 1 points 1 year ago

I would consider this question to be off topic. Maybe consider posting it to r/ArtificialIntelligence, but what you're looking for are unsupervised learning techniques like contrastive learning.

[–] hunted7fold@alien.top 1 points 1 year ago (1 children)

Yes, this is why people have been very interested in reinforcement learning. It's also very sample-inefficient and expensive, and it's cheaper to use human priors, which is why people are now less interested in reinforcement learning.

[–] alienkevinkevin@alien.top 1 points 1 year ago

I see, that makes sense.