Need to share this post with people worried about their jobs being taken over by GPT
“GPT4 is enough AI for everybody”
Guys, sell Microsoft. This is a full-court press; something is f****** wrong with OpenAI.
GPT-5, 6, and 7 aren't as interesting as putting reinforcement learning models on top of GPTs.
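To make that idea concrete, here's a minimal sketch of "RL on top of a generator", assuming a toy four-token vocabulary and a hand-written reward function standing in for a real GPT and a learned reward model:

```python
# Minimal sketch: keep the base model as a generator and use reinforcement learning
# (plain REINFORCE here) to shift it toward outputs a reward function prefers.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["bad", "ok", "good", "great"]   # toy "tokens" (assumption)
logits = np.zeros(len(vocab))            # stands in for the model's output head

def sample():
    p = np.exp(logits - logits.max()); p /= p.sum()
    tok = rng.choice(len(vocab), p=p)
    return tok, p

def reward(tok):                          # stand-in for a learned reward model (assumption)
    return {"bad": -1.0, "ok": 0.0, "good": 1.0, "great": 2.0}[vocab[tok]]

lr = 0.1
for step in range(500):
    tok, p = sample()
    r = reward(tok)
    grad = -p
    grad[tok] += 1.0                      # gradient of log softmax probability of the sampled token
    logits += lr * r * grad               # REINFORCE update

# After training, probability mass concentrates on higher-reward tokens.
print({v: round(x, 2) for v, x in zip(vocab, np.exp(logits) / np.exp(logits).sum())})
```

In a real RLHF-style setup the reward comes from a preference model and the policy update is usually PPO rather than plain REINFORCE, but the loop has the same shape.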
Not terribly surprised.
I think it's good to have a plateau for a few years. We've only just realised what LLMs can do, and we need time to learn how to get the best out of them and learn from them. With that time the technology matures, and we mature along with it.
Cos Gates is the authority.
It's true. However, the incremental improvements will still have major implications for how the technology is implemented.
I remember saying that there was no reason to believe the explosive growth we've seen from AI would continue, and no way to be sure that our current path in AI and LLM research wouldn't eventually plateau and add years to the road to AGI as we essentially go back to the drawing board to figure out a new path.
Got blasted by the GPT hype machine for even daring to suggest that such a thing could potentially happen, possibly. Feels good to have Bill Gates suggest the possibility too!
Well, if I can't see those reasons, I'm going to assume he's making stuff up to help Microsoft, which just had a panic over the OpenAI board firing the CEO because they were really worried about machine learning advancing too quickly.
I mean, everyone is just sorta ignoring the fact that no ML technique has been shown to do anything more than just mimic statistical aspects of the training set. Is statistical mimicry AGI? On some performance benchmarks, it appears better statistical mimicry does approach capabilities we associate with AGI.
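For a concrete picture of "mimicking statistical aspects of the training set", a bigram model is about the smallest possible example: it can only reproduce the co-occurrence statistics of what it saw. The toy corpus below is an assumption, just to make the point runnable:

```python
# Minimal sketch of statistical mimicry: a bigram model that reproduces only
# the next-word co-occurrence counts of its training text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start="the", length=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:          # reached a word with no observed successor
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(generate())  # e.g. something like "the cat sat on the mat the dog sat"
```

LLMs are obviously far richer models than this, but the question in the comment above is whether they differ in kind or only in how much of the training distribution they can capture.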
I personally am quite skeptical that the best lever to pull is just giving it more parameters. Our own brains have such complicated neural/psychological circuitry for executive function, long- and short-term memory, Type 1 and Type 2 thinking, "internal" dialogue and visual models, and, more importantly, the ability to few-shot learn the logical underpinnings of an example set. Without a fundamental change in how we train NNs, or even in our conception of what an effective NN is to begin with, we're not going to see the paradigm shift everyone's been waiting for.
Actually, the claim that "all ML models are doing is statistics" dominated the field of AI for a long time and has since proven to be a fallacy.
See this video for instance, where Ilya (probably the #1 AI researcher in the world currently) explains how GPT is much more than statistics; it is more akin to "compression", and that can lead to intelligence: https://www.youtube.com/watch?v=GI4Tpi48DlA (4:30 - 7:30)
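The compression framing is easy to make concrete: the number of bits needed to encode text under a model is -log2 of the probability the model assigns to each symbol, so a better next-token predictor is literally a better compressor. A toy sketch follows; the two hand-written "models" are assumptions for illustration:

```python
# Minimal sketch of prediction-as-compression: encoding cost under a model is
# -log2 p(symbol | context), so better prediction means fewer bits.
import math

text = "abababababababab"

def bits(text, model):
    """Total bits to encode `text` if each symbol costs -log2 p(symbol | context)."""
    total = 0.0
    for i, ch in enumerate(text):
        p = model(text[:i], ch)
        total += -math.log2(p)
    return total

uniform = lambda ctx, ch: 1 / 2   # knows only that the alphabet is {a, b}
alternating = lambda ctx, ch: 0.99 if (not ctx or ch != ctx[-1]) else 0.01  # has learned the pattern

print(round(bits(text, uniform), 1))      # 16.0 bits
print(round(bits(text, alternating), 1))  # ~0.2 bits: better prediction compresses more
```

Whether that kind of compression amounts to intelligence is exactly the point under debate in this thread, but the prediction-compression equivalence itself is standard information theory.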