this post was submitted on 12 Sep 2024
Technology
While rigorously defining almost any aspect of human intelligence is impossible with our current understanding of the mind, we can still come up with "good enough" working definitions for present purposes.
At a basic level, "reasoning" is the act of drawing logical conclusions from available data. And that's not what these models do. They mimic reasoning by mimicking human communication. Humans communicate (and have developed a lot of specialized language with which to communicate) the process by which we reason, so LLMs can replicate the appearance of reasoning by replicating the language around it.
The way you can tell that they're not actually reasoning is simple: their conclusions often bear no connection to the facts. There's an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are: assemble a list of all states, then check each name for the presence of the letter W.
And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.
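The procedure the model described for itself is trivially mechanical; here's a sketch of it in a few lines of Python (state list hard-coded so the example is self-contained, checking for the letter anywhere in the name, case-insensitively):

```python
# All 50 U.S. state names.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

def states_containing(letter):
    """Return every state whose name contains `letter`, ignoring case."""
    return [s for s in STATES if letter.lower() in s.lower()]

matches = states_containing("W")
print(matches)  # 11 states; the Dakotas and Carolinas are not among them
```

A machine actually executing the stated procedure can't get this wrong: "North Dakota" contains no W, so it simply never makes the list.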
Any human being capable of reasoning would absolutely understand that that result was wrong, if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent "thinking" is smoke and mirrors. They're machines built to give the appearance of intelligence, nothing more.
When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You're not required to buy into their bullshit just to prove you're not a technophobe.