this post was submitted on 07 Mar 2024
38 points (89.6% liked)
Tech
A community for high-quality news and discussion around technological advancements and changes
Things that fit:
- New tech releases
- Major tech changes
- Major milestones for tech
- Major tech news, such as data breaches or product discontinuations
Things that don't fit:
- Minor app updates
- Government legislation
- Company news
- Opinion pieces
you are viewing a single comment's thread
view the rest of the comments
Soon kids will start talking like LLMs.
Always have, always will.
My pet hypothesis is that our brains are, in effect, LLMs trained on input from our senses and on the output of the other LLMs (brains) in our environment.
It explains why we so often get stuck in unproductive loops like flat Earth theories.
It explains why new theories are treated as "hallucinations" regardless of their veracity (cf. Copernicus, Galileo, Bruno). It explains why certain "prompts" cause mass "hallucination" (Wakefield and anti-vaxxers). It explains why the vast majority of people spend the vast majority of their time just coasting on "local inputs" to "common sense" (personal models of the world that, in their simplicity, often have substantial overlap with others).
It explains why we spend so much time on "prompt engineering" (propaganda, sound bites, just-so stories, PR "spin", etc) and so little on "model development" (education and training). (And why so much "education" is more like prompt engineering than model development.)
Finally, it explains why "scientific" methods of thinking are so rare, even among those who are actually good at it. To think scientifically requires not just the right training, but an actual change in the underlying model. One of the most egregious examples is Linus Pauling, winner of the Nobel Prize in Chemistry and vitamin C wackadoodle.
You have it backwards. It isn't that we operate like LLMs, it is that LLMs are attempts to emulate us.
That is actually my point. I may not have made it clear in this thread: my claim is not that our brains behave like LLMs, but that they are LLMs.
That is, our LLM research is not just emulating our mental processes, but showing us how they actually work.
Most people think there is something magical in our thinking, that mind is separate from brain, that thinking is, in effect, supernatural. I'm making the claim that LLMs are actual demonstrations that thinking is nothing more than the statistical rearrangement of what we have ingested through our senses, our interactions with the world, and our experience of what has and has not worked.
Searle proposed a thought experiment called the "Chinese Room" in an attempt to discredit the idea that a machine could either think or understand. My contention is that our brains, being machines, are in fact just suitably sophisticated "Chinese Rooms".
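To make "statistical rearrangement" concrete, here's a deliberately crude sketch in Python: a toy bigram model, nowhere near a real LLM, but it shows output being nothing more than a resampling of the statistics of whatever was ingested. The corpus and names here are just illustrative.

```python
import random
from collections import defaultdict

# Toy sketch, not a real LLM: a bigram model that "ingests" a corpus and
# then produces output purely by statistical rearrangement of what it saw.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which word follows which in the ingested input.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Inference": each next word is sampled from whatever statistically
# followed the current word before. The model shuffles symbols it does
# not understand, yet the output is locally coherent.
word = "the"
output = [word]
for _ in range(8):
    if word not in following:
        break
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))
```

Scale the same idea up by many orders of magnitude (and swap word counting for gradient descent over a transformer) and you get the systems we're arguing about.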