That is my experience: it's generally quite decent for small and simple stuff (as I said, a distillation of the documentation). I use it for Rust, where I'm sure the training material was much smaller than for other languages. It's not a matter of prompting, though. It's not my prompt that makes it hallucinate functions that don't exist in libraries, or makes it write code that doesn't compile; that's a feature of the technology itself.
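To make that failure mode concrete, here's a toy sketch. The hallucinated method below is invented for illustration (it does not exist on Rust's `HashMap`; it looks like the Java API), while the `entry` call underneath is the real, compiling alternative:

```rust
use std::collections::HashMap;

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();

    // A model might confidently suggest a Java-flavoured method that
    // doesn't exist in Rust's std (hypothetical hallucination):
    // let n = counts.get_or_default("hello"); // error: no method `get_or_default`

    // The real stable API goes through `entry`:
    *counts.entry("hello").or_insert(0) += 1;
    println!("{:?}", counts);
}
```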
GPTs are statistical text generators, after all; they don't "understand" the problem.
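"Statistical text generator" can be made concrete with a toy sketch: a bigram table of next-word counts plus greedy decoding. All words and counts here are invented; real models use learned probabilities over tens of thousands of tokens, but the principle of "emit a likely continuation" is the same:

```rust
use std::collections::HashMap;

fn main() {
    // Toy bigram "model": observed next-word counts (invented data).
    let mut bigrams: HashMap<&str, Vec<(&str, u32)>> = HashMap::new();
    bigrams.insert("the", vec![("cat", 3), ("dog", 1)]);
    bigrams.insert("cat", vec![("sat", 3), ("ran", 1)]);
    bigrams.insert("sat", vec![("down", 4)]);

    // Greedy decoding: always take the highest-count continuation.
    let mut word = "the";
    let mut sentence = vec![word];
    while let Some(nexts) = bigrams.get(word) {
        word = nexts.iter().max_by_key(|(_, c)| *c).unwrap().0;
        sentence.push(word);
    }
    println!("{}", sentence.join(" ")); // the cat sat down
}
```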
It's also pretty young. Human toddlers hallucinate and make things up; adults do too. Even experts are known to fall prey to bias and misconception.
I don't think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of "understanding". I think it's a bit foolish to claim with certainty that LLMs in a mixture-of-experts (MoE) framework with self-review fundamentally can't get there. Unless you can show me, materially, how human "understanding" functions, we're just speculating about an immature technology.
As much as I agree with you, humans can learn a bunch of stuff without first ingesting the content of the whole internet, and without the computing power of a datacenter or the energy consumption of Belgium. Humans learn to count at an early age, for example.
I would say the burden of proof is therefore reversed: unless you demonstrate that this technology doesn't have the natural and inherent limits of statistical text (or pixel) generators, we can assume that our minds work differently.
Also, you say it's an immature technology, but in principle it is not fundamentally different from Weizenbaum's ELIZA in the '60s (see the toy sketch below). We may have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.
So yeah, we don't understand human intelligence, but we can point to certain features that GPTs absolutely lack, like a concept of truth, which comes naturally to humans.
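For reference, ELIZA-style "NLP" was pure keyword matching with canned reflections, no statistics at all. A toy sketch, with rules invented for illustration:

```rust
// A few ELIZA-style rules: match a keyword, emit a canned reflection.
fn respond(input: &str) -> String {
    let lower = input.to_lowercase();
    if let Some(rest) = lower.strip_prefix("i feel ") {
        format!("Why do you feel {}?", rest)
    } else if lower.contains("mother") {
        "Tell me more about your family.".to_string()
    } else {
        "Please go on.".to_string()
    }
}

fn main() {
    println!("{}", respond("I feel tired"));     // Why do you feel tired?
    println!("{}", respond("My mother called")); // Tell me more about your family.
}
```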
No, actually, it has changed pretty fundamentally. These aren't simply a bunch of fully connected networks (FCNs) stitched together. Look up what a transformer is; that was one of the major breakthroughs that made modern LLMs possible.
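To give a sense of what that breakthrough actually computes, here is a compressed sketch of scaled dot-product attention, the core transformer operation, written with plain vectors. Real transformers add learned projections, multiple heads, positional information, and tensor libraries; this only shows the "every token weighs every other token" step:

```rust
// Scaled dot-product attention on toy 2-dimensional vectors.
fn softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn attention(q: &[Vec<f32>], k: &[Vec<f32>], v: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let d = q[0].len() as f32;
    q.iter()
        .map(|qi| {
            // Similarity of this query to every key, scaled by sqrt(d).
            let scores: Vec<f32> = k
                .iter()
                .map(|kj| qi.iter().zip(kj).map(|(a, b)| a * b).sum::<f32>() / d.sqrt())
                .collect();
            let weights = softmax(&scores);
            // Output is the attention-weighted mix of the values.
            (0..v[0].len())
                .map(|c| weights.iter().zip(v).map(|(w, vj)| w * vj[c]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    let q = vec![vec![1.0, 0.0]];
    let k = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let v = vec![vec![10.0, 0.0], vec![0.0, 10.0]];
    println!("{:?}", attention(&q, &k, &v)); // mostly the first value row
}
```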
That is a technical detail, not a fundamental change. By fundamental mechanism I mean what the machine is designed to do. Of course techniques and implementations evolve, refine, and improve over 60 years, but the idea behind the technology (NLP) has not evolved much.
Did backpropagation even exist in the '60s? That was a pretty fundamental change in what these systems do.
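For context: backpropagation in its modern form was popularized by Rumelhart, Hinton, and Williams in 1986, well after ELIZA. Mechanically it is just the chain rule applied backwards through the network. Here is a one-neuron sketch with invented data; real backprop automates exactly this gradient bookkeeping through millions of parameters:

```rust
// Gradient descent on a single linear neuron pred = w*x + b, fitting
// toy data generated from y = 2x + 1.
fn main() {
    let data: Vec<(f32, f32)> = (0..10).map(|i| (i as f32, 2.0 * i as f32 + 1.0)).collect();
    let (mut w, mut b) = (0.0_f32, 0.0_f32);
    let lr = 0.01;

    for _ in 0..5000 {
        let (mut dw, mut db) = (0.0, 0.0);
        for &(x, y) in &data {
            let pred = w * x + b; // forward pass
            let err = pred - y;   // dLoss/dPred for L = err^2 / 2
            dw += err * x;        // chain rule: dL/dw = err * x
            db += err;            // chain rule: dL/db = err
        }
        w -= lr * dw / data.len() as f32; // descend the gradient
        b -= lr * db / data.len() as f32;
    }
    println!("w = {:.3}, b = {:.3}", w, b); // should approach 2 and 1
}
```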
If we are arguing about truly fundamental changes, then arguably any neural network is the same, and humans are the same as ChatGPT, or a mouse, or even something simpler like a single-layer perceptron.
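And a single-layer perceptron really is that simple: Rosenblatt's 1958 learning rule fits in a dozen lines and can only separate linearly separable data (famously, it cannot learn XOR). A toy sketch learning AND:

```rust
// The classic single-layer perceptron learning AND on two inputs.
fn main() {
    let data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
                ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)];
    let (mut w, mut b) = ([0.0_f32; 2], 0.0_f32);

    for _ in 0..20 {
        for &(x, target) in &data {
            let out = if w[0] * x[0] + w[1] * x[1] + b > 0.0 { 1.0 } else { 0.0 };
            let err = target - out;
            // Rosenblatt's update: nudge weights toward misclassified points.
            w[0] += err * x[0];
            w[1] += err * x[1];
            b += err;
        }
    }
    for &(x, _) in &data {
        let out = w[0] * x[0] + w[1] * x[1] + b > 0.0;
        println!("{:?} -> {}", x, out); // true only for [1.0, 1.0]
    }
}
```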