It's called a scratchpad.
And the network still uses skills that it learned in a fixed-computation-per-token regime.
Sure, future versions will lift many existing limitations, but I was talking about current LLMs.
a lot like saying “rocket ships may not be FTL yet, but…”
And the human brain is FTL then?
LLMs might still lack something that the human brain has. Internal monologue, for example, which allows us to allocate more than a fixed amount of compute per output token.
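To put the compute point in concrete terms, here's a toy sketch (plain Python, not any real LLM API; the cost constant and token counts are invented): each forward pass does the same amount of work, so the only lever a current model has for spending more compute on a hard question is emitting more intermediate tokens first, which is exactly what a scratchpad buys.

    # Toy sketch (not a real LLM API; numbers are made up): every forward pass
    # costs the same, so total compute per answer scales with the number of
    # tokens generated, not with how hard the question is.
    COST_PER_FORWARD_PASS = 1.0  # fixed compute per output token, arbitrary units

    def compute_spent(tokens_generated: int) -> float:
        # Total compute = tokens emitted x fixed per-token cost.
        return tokens_generated * COST_PER_FORWARD_PASS

    # Answering directly: the model commits after a handful of tokens.
    direct = compute_spent(5)        # 5.0

    # With a scratchpad: the model writes intermediate steps first, so it gets
    # many more forward passes before the final answer -- more total compute,
    # but still the same fixed budget per token.
    scratchpad = compute_spent(200)  # 200.0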
Engineers comprise 0.06% of the US population, for example; managers, around 20%. Also, narrow AI systems aren't so fascinating.