We're into Chinese Room territory. "Understand" isn't a well-defined or measurable thing; I don't see how it could be measured except by looking at inputs and outputs.
So why attribute it to an LLM in the first place, then? LLMs are just floating-point numbers being multiplied and added inside a digital computer; the onus is on the AI bros to show what kind of floating-point multiplication counts as real "understanding".
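(For concreteness, here's a minimal sketch of what that claim points at, with made-up sizes and random weights rather than any real model's: a transformer-style feed-forward layer is nothing but floating-point multiply-adds.)

```python
import numpy as np

# Toy sketch, not any real model: one feed-forward layer of the kind
# stacked inside a transformer. Sizes and values here are hypothetical.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)        # an 8-dimensional activation vector
W = rng.standard_normal((8, 8))   # a weight matrix
b = rng.standard_normal(8)        # a bias vector

# The whole layer is multiply-accumulate: y[i] = sum_j W[i, j] * x[j] + b[i]
y = W @ x + b

# Even the nonlinearity (ReLU here) is just a comparison against zero.
y = np.maximum(y, 0.0)
print(y)
```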
But it's inherently impossible to "show" anything except inputs and outputs (and that holds for a biological system too).
What do you mean by the word "real", and is it something separate from the measurable behaviour of the system?
You seem to be using a mental model in which there are two things:
A: the measurable inputs and outputs of the system
B: the "real understanding", which is separate from A
How can you prove B exists if it isn't measurable? You say there is an "onus" to do so; I don't agree that any such onus exists.
This is exactly the Chinese Room argument. "Understand" is usually defined in a functionalist way.
Because I've felt it: I know how understanding feels, because ultimately understanding is a conscious experience within a mind. You cannot define understanding without referencing conscious experience; you cannot possibly define it only in terms of behavior or function. So either you concede that every floating-point multiplication in a digital chip "feels like something" at some level, or you show which specific kind of floating-point multiplication does.