gebregl

joined 10 months ago
[–] gebregl@alien.top 1 points 9 months ago

The vacuous truth is the claim that AI is statistical. It certainly is, but it's also much more than that.

The fallacy is to take that fact and conclude that, because an AI algorithm is "just statistics", it cannot exhibit "true" intelligence and is merely faking or mimicking it.

[–] gebregl@alien.top 1 points 10 months ago

Increasing model size is only the most obvious way of improving LLMs. There are many ways of changing the LLM architecture and of combining LLMs with models from other fields.

I, for one, am excited to hear about lots of new LLM discoveries and applications, even if none of them are guaranteed.

[–] gebregl@alien.top 1 points 10 months ago (24 children)

We need a name for the fallacy where people call highly nonlinear algorithms with billions of parameters "just statistics", as if all they're doing is linear regression.

ChatGPT isn't AGI yet, but it is a huge leap in modeling natural language. The fact that there's some statistics involved explains neither of those two points.
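To make the contrast concrete, here's a minimal NumPy sketch (toy shapes, random weights, purely illustrative; it doesn't describe any particular LLM): a linear regression is a single affine map, while even a small network puts a nonlinearity between affine maps, and deep models stack many such layers with billions of parameters, which is a very different function class.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))        # toy inputs: 256 samples, 8 features

# Linear regression: a single affine map, y_hat = x @ w + b
w = rng.normal(size=(8, 1))
b = 0.0
linear_pred = x @ w + b

# A tiny two-layer network: the ReLU between the affine maps is what makes
# it nonlinear; large models stack many such layers, so "just statistics"
# hides a model class nothing like linear regression.
w1 = rng.normal(size=(8, 64))
w2 = rng.normal(size=(64, 1))
hidden = np.maximum(0.0, x @ w1)     # ReLU nonlinearity
nonlinear_pred = hidden @ w2

print(linear_pred.shape, nonlinear_pred.shape)   # both (256, 1)
```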

[–] gebregl@alien.top 1 points 10 months ago

Basically yes. As far as we know, human brains don't employ quantum randomness in any meaningful manner, so they're also deterministic.

What difference does it make? It doesn't say much about what AI systems or humans can or can't do.
