Did he forget that GPT is a computerized AI, not a human? It doesn't grow old; it keeps getting smarter and smarter.
The magic will come from the interactions of millions of people with super-smart AI. People who couldn't program before now can, nurses can be as efficient as doctors, people who have no clue about fixing things can fix them with augmented reality glasses, etc. If GPT-4-level AI were widely adopted (which it is not), it alone could change society fundamentally. We are now in the adoption phase, and as the AIs get better they will move more and more to the center of our lives, as cellphones did. Will they get better? I think the models will get better: they will get faster, they will have memory and the ability to connect with the world through senses, and AI chips will make them vastly faster. The magic would happen even if they had plateaued, which they have not.
Gates just saying GPT-5 won't be much better, without addressing the limits of the text it's trained on, is a big miss. We need to acknowledge the role of the underlying data in shaping the capabilities of these models.
Speed, efficiency, and relevancy can all be improved.
Every time someone makes this claim, AI leaps ahead 30 years on the consensus timeline of expected capabilities. AGI will probably be here before 2030.
Yes, Bill, that's why we are now innovating around LLMs and adding other functionality to them. An LLM is just one component of what people mean when we discuss AGI, which will be a combination of hundreds of interacting systems, each of which may be extremely powerful and complex individually.
"640KB ought to be enough for anyone" - also Bill
But what about Qstar?
With all due respect to Bill, I really don't understand why we should listen to him. The guy has absolutely no vision of the future!
He was ridiculously blind to the WWW, blockchain technologies, smartphones, etc. Just let him live in peace in his mansion. He is not a visionary and never was.
Personally, if I needed a real hero, I would rather listen to Steve Wozniak.
The GPU limitations/requirements to go next level may also put a practical ceiling on things. Could you even run a model that was 10x larger than GPT4 without breaking the bank?
Who?
Tell that to Ilya Sutskever
I agree that the next generation of LLMs will not be a breakthrough, but the overall capability gains from perfecting step-by-step and decision-tree styles of prompting are very promising.
I expect GPT-5 to be a similar jump to the one from GPT-3 to GPT-4.
I don't think Bill is right on this one. LLMs may have hit a performance plateau with the current architecture, but research is all about optimisation and efficiency, not mere parameter increases. Here's a good example:
https://venturebeat.com/ai/new-technique-can-accelerate-language-models-by-300x/
Remember, GPT-5 is still at least 2-3 years away. Plenty of time.
I think that is mostly fine.
In the end, adoption of these technologies is what matters most, and there is still a long way to go.
Of course better technology would make it easier, but we can already take a lot of steps with what we have today.
Yet people at r/singularity are hyping the idea that OpenAI has secretly reached AGI.