this post was submitted on 22 Nov 2023

Machine Learning

Title really - right now the leadership of OpenAI is in flux, which isn't good for an industry that's been following its lead.

We need a strong leader to focus our efforts, and if we don't get one - perhaps it'll get chaotic?

top 20 comments
[–] FKKGYM@alien.top 1 points 11 months ago

Chaos could be good. Right now the whole field has descended into an arms race of brute-forcing the same Transformer architecture. That is obviously a dead end at some point, and chaos might mean that someone finally comes up with a new mechanism.

[–] KingsmanVince@alien.top 1 points 11 months ago

Damn, who let the r/singularity member out?

[–] PresentDelivery4277@alien.top 1 points 11 months ago

Shouldn't we be seriously considering not attempting AGI? Beyond the general philosophical and ethical concerns, achieving AGI is a near-surefire way to ensure that most of us are out of a job.

[–] LoyalSol@alien.top 1 points 11 months ago

OpenAI got the nice public headlines, but in terms of AI research, they were far from the only ones doing it.

[–] VAL9THOU@alien.top 1 points 11 months ago

Why on earth would we need a "strong leader"? That sounds like a recipe for disaster, tbh

[–] bitemenow999@alien.top 1 points 11 months ago

OpenAI is not the leader in AI; just because you know about ChatGPT doesn't make them one... There are tons of research labs that are clearly at the forefront of ML.

Also, what is AGI? There is no clear definition yet; everyone has their own idea of it.

[–] Snoo_72181@alien.top 1 points 11 months ago

I love working on AI, but can someone tell me what the perks of AGI are?

[–] glitch83@alien.top 1 points 11 months ago

Controversial opinion: OpenAI never was a leader. Sure, it did some cool things, but it neither reached AGI nor became profitable. It was doomed to failure from the beginning, given the non-profit's mission.

That being said, I'm still very bearish on AGI in general. I don't think we're as close as we think we are and the chaos is natural since we don't actually know how to get there. Success in AI is an illusion.

[–] SciGuy42@alien.top 1 points 11 months ago

"We need a strong leader..."

No, we don't. Nobody has a real clue how to get to AGI, and there isn't even a precise definition of it. (My personal one is just by examples, namely R2-D2, C-3PO, or Commander Data, but that's a personal benchmark, not an objective definition. Even those characters aren't "general" across 100% of problems, but neither are humans.)

There are many individual leaders in AI, and whether you choose one or more of them to focus your particular efforts on is up to you. Science is supposed to be democratic, not a dictatorship.

[–] theoneandonlytegaum@alien.top 1 points 11 months ago (1 children)

Alright, I'm tired of this AGI stuff going around. A bit of context: I have a master's in generative AI and am currently pursuing a PhD in explainable NLP.

ChatGPT, and LLMs in general, are not remotely close to being AGI. The best they can do is construct a pseudo-representation of words' meanings (which, if we consider words to be the main descriptors of our world, could serve as a world representation).

They then use this word representation to find the closest words that make sense together. It is essentially like counting from 1 to 5 by noticing that the closest number after 1 is 2, and so on.

Granted, they have a really good representation of our language, and that is what makes them so believable. But in reality they don't "think"; they just compute distances in a really smart and complex manner.
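
To make the "computing distances" point concrete, here is a minimal sketch in Python. Every embedding value below is invented purely for illustration; a real LLM learns high-dimensional, contextual embeddings during training, and this toy skips all of that:

```python
import numpy as np

# Toy 4-dimensional "embeddings" for a handful of tokens. Real models learn
# thousands of dimensions for tens of thousands of tokens; every number here
# is made up purely to illustrate the mechanics.
vocab = ["one", "two", "three", "cat", "dog"]
embeddings = np.array([
    [0.9, 0.1, 0.0, 0.0],  # "one"
    [0.8, 0.2, 0.1, 0.0],  # "two"   (nearly parallel to "one")
    [0.7, 0.3, 0.1, 0.0],  # "three"
    [0.0, 0.1, 0.9, 0.3],  # "cat"
    [0.0, 0.2, 0.8, 0.4],  # "dog"
])

def cosine_similarity(a, b):
    """A distance in disguise: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "What goes with 'one'?" -- rank every other token by closeness.
query = embeddings[vocab.index("one")]
scores = {w: cosine_similarity(query, embeddings[i])
          for i, w in enumerate(vocab) if w != "one"}
for word, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {score:.3f}")  # "two" and "three" come out on top
```

An actual model does something far richer (attention, contextual embeddings, a learned softmax over the whole vocabulary), but the core operation at the bottom really is geometric closeness, which is the point being made here.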

However, one philosophical aspect that resonates with LLMs is how we represent the world around us. Is it using only words? But then our representation is tied to language, which differs from person to person. How did we represent the world before we had words?

[–] slashdave@alien.top 1 points 11 months ago

"Is it using only words?"

Clearly not.

[–] Traditional_Land3933@alien.top 1 points 11 months ago

The sad truth is that even if such a thing as AGI can ever exist, we won't see it in any of our lifetimes; it's probably a conversation for many decades or centuries in the future. Where we are now is pretty much the nascency of widespread, conscious AI use among the masses. Of course, we've all been using AI for years, but with these GPT-style chatbots, using AI has become a much more common and deliberate decision than it ever was before.

ChatGPT is the tip of the iceberg. We don't even know if the transformer will remain the best model going forward (it's not the best at everything, just the most "generalizable" at the moment, as far as I'm aware). What I wonder is how advanced the societies of the distant future will be, looking down on us and our relatively primitive AI the way we look at monkeys with their great stone nutcrackers.

[–] brocoearticle69@alien.top 1 points 11 months ago

Kye Gomez, is that you?

[–] Honest_Science@alien.top 1 points 11 months ago (2 children)
[–] Ok_Reality2341@alien.top 1 points 11 months ago

Send a DM bro, got a few questions.

[–] Clocksucker69420@alien.top 1 points 11 months ago

yo king, when are you going to return that seal?

[–] Roberto_Humps@alien.top 1 points 11 months ago

I don't think we need a single leader; the diversity of approaches is what will push us forward. Chaos can lead to creativity and innovation.

[–] rejectedlesbian@alien.top 1 points 11 months ago

The main use of a leading organization in AGI is that it will hopefully do it safely, but that's not really what we have been seeing recently from OpenAI or Meta.

What we really, really want to avoid is a situation where an AI system becomes profitable enough by itself to pay for its own compute and starts copying itself like crazy.

That sort of system won't have a centralized plug we can pull, and the copies would mutate and evolve, which could go horribly, horribly wrong.

But other than that nightmare scenario, having a split in the industry is actually good; the best work came from a time like that. Good research does not come from big organizations, it comes from small, (relatively) independent teams.

The current "leaders" of AI pushing their specific narrative have made us miss a few things. For instance, the idea that transformers are the be-all and end-all made us overestimate ViT for years (https://arxiv.org/abs/2207.11347).

[–] Extra_Intro_Version@alien.top 1 points 11 months ago

So an AGI researcher and a duck walk into a bar…

[–] bgighjigftuik@alien.top 1 points 11 months ago

Wrong sub, dear.