Chaotic could be good. Right now the whole thing has descended into an arms race of brute-forcing the same Transformer architectures. That is obviously a dead end at some point, and chaos might mean that someone finally comes up with a new mechanism.
Damn, who let the r/singularity member out?
Shouldn't we be seriously considering not attempting AGI? Beyond the general philosophical and ethical considerations, achieving AGI is a near-surefire way to ensure that most of us are out of a job.
OpenAI had a nice public headline, but in terms of AI research, they were far from the only ones doing it.
Why on earth would we need a "strong leader"? That sounds like a recipe for disaster, tbh
OpenAI is not a leader in AI. Just because you know about ChatGPT doesn't make them a leader... There are tons of research labs that are clearly at the forefront of ML.
Also, what is AGI? There is no clear definition yet; everyone has their own idea of it.
I love working on AI, but can someone tell me what the perks of AGI actually are?
Controversial opinion: OpenAI never was a leader. Sure, it did some cool things, but it neither reached AGI nor became profitable. It was doomed to failure from the beginning, given the non-profit's mission.
That said, I'm still very bearish on AGI in general. I don't think we're as close as we believe, and the chaos is natural since we don't actually know how to get there. Success in AI is an illusion.
"We need a strong leader..."
No, we don't. Nobody has a real clue about how to get to AGI, and there isn't even a precise definition of it. (My personal one is just by examples, namely R2-D2, C-3PO, or Commander Data, but that's only my personal take, not an objective definition, and even those aren't "general" for 100% of problems. Then again, neither are humans.)
There are many individual leaders in AI, and whether you choose one or more of them to follow on particular efforts is up to you. Science is supposed to be democratic, not a dictatorship.
Alright, I'm tired of this AGI stuff going around. A bit of context: I have a master's in generative AI and am currently pursuing a PhD in explainable NLP.
ChatGPT, and LLMs in general, are not remotely close to being an AGI. The best they can do is construct a pseudo-representation of word meanings (which, if we consider words to be the main descriptor of our world, could pass for a world representation).
They then use this representation to find the words that are closest to one another, the ones that make sense together. It is essentially like counting from 1 to 5 by noticing that the number closest after 1 is 2, and so on.
Granted, they have a really good representation of our language, and that is what makes them so believable. But in reality they don't "think"; they just compute distances in a really smart and complex manner.
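For what it's worth, here is a toy sketch of that "computing distances" intuition. All the embedding values are made up, and a real LLM scores an entire vocabulary against a learned context vector rather than comparing raw word vectors, but the geometric idea is similar:

```python
# Toy sketch of next-word selection as a distance computation.
# Embeddings are hypothetical 2-D vectors, purely for illustration.
import numpy as np

embeddings = {
    "one":   np.array([0.10, 0.95]),
    "two":   np.array([0.15, 0.90]),
    "three": np.array([0.20, 0.85]),
    "cat":   np.array([0.95, 0.10]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 means "pointing the same way"
    # in embedding space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

context = embeddings["one"]  # stand-in for the model's context representation
# Score every other token by its similarity to the context.
scores = {w: cosine(context, v) for w, v in embeddings.items() if w != "one"}
print(max(scores, key=scores.get))  # -> "two", the nearest plausible next token
```

The point is only that "picking the next word" reduces to a similarity computation in a vector space; nothing in that loop resembles thought.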
However, one philosophical aspect that resonates with LLMs is how we represent the world around us. Is it using only words? But then our representation is tied to language, which differs from person to person. How did we represent our world before we had words?
"Is it using only words?"
Clearly not
The sad truth is that even if such a thing as AGI can ever exist, we won't see it in any of our lifetimes; it's probably a conversation for decades or centuries in the future. Where we are now is pretty much the nascency of widespread, conscious AI use among the masses. Of course, we've all been using AI for years, but with these GPT-connected chatbots, using AI has become a much more common and deliberate decision than it ever was before. ChatGPT is the tip of the iceberg; we don't even know whether the best model going forward will remain the transformer (it's not the best at everything, just the most "generalizable" at the moment, as far as I'm aware). What I'm wondering is how advanced the societies of the distant future will be, looking down on us and our relatively primitive AI the way we look at monkeys with their great stone nutcrackers.
Kye Gomez, is that you?
I am the king of China
Send a DM bro, got a few questions
yo king, when are you going to return that seal?
I don't think we need a single leader; the diversity of approaches is what will push us forward. Chaos can lead to creativity and innovation.
The main use for a leading organization in AGI is that it would hopefully do it safely, but that's not really what we have been seeing recently from OpenAI or Meta.
What we really, really want to avoid is a situation where an AI system is profitable enough by itself to pay for its own compute and starts copying itself like crazy.
That sort of system won't have a centralized plug we can pull, and the copies would mutate and evolve, which could go horribly, horribly wrong.
But other than that nightmare scenario, having a split in the industry is actually good, and the best work came from a time like that. Good research does not come from big organizations; it comes from small (relatively) independent teams.
The current "leaders" of AI pushing their specific narratives have made us miss a few things. For instance, the idea that transformers are the be-all end-all made us overestimate ViT for years. (https://arxiv.org/abs/2207.11347)
So an AGI researcher and a duck walk into a bar…
Wrong sub, dear