So he was kicked out for being the opposite of “not totally open with the board”
Machine Learning
This is what happens when AI is smart enough to generate gossip about itself.
'hey chatgpt, generate some rumors to spin the recent unsettling news such that OpenAI is worth more after the insane public power struggle unsettles it'
"Q! I know you're behind this, show yourself!"
So the implication here is that the CEO knew about the breakthrough, but hid it from the board?
MSFT did experience a 20% climb over the last month. Maybe it was due to this news leaking out?
OpenAI defines AGI as AI systems that are smarter than humans? So not AGI as EVERYONE else understands it.
Which humans though? My Casio watch is smarter than some humans
If humans can build AGI then so can an AI system smarter than humans. Therefore building an AI system smarter than humans is equivalent to building AGI.
What is happening with this sub? Where are all the good papers and real ML technical discussions?
It's worse than usual; some of the explanations in the ELI5 threads are stuff even the r/futurism folks who invade here would know.
Dude, like 5 years ago I remember there were fewer than 20k people; now there are 2 million+ subscribed, unfortunately. Almost all subreddits with big numbers are normie hype trains :(
Did this turn into a default sub? I find it hard to believe over 2 million people took the effort to find and subscribe.
Kind of. I recently made a new account, and Reddit asks you for your interests, AI being one of the options.
Thanks! That helps me connect the dots. It's at least nice to know so many people express interest in ML.
Yeah, although I'm sad we can't have both. I miss the place where I discovered and discussed ML research. Haven't found a good replacement yet.
Yea, this just popped up in my feed since I've been following the OAI drama. First time seeing this sub recommended.
Wow. I did not realize how fast this sub had grown. A decent amount of actual technical posts in the past too.
I've been spending the latter part of the year learning machine learning from scratch so as not to fit into that crowd
AI is the new crypto. So you have ML bros who are clueless mouth breathers invading all of the original subs.
I've been in this game for almost 12 years, and I've never been more popular, but I hate that it's for this reason. I was listening to a talk the other day and the speaker said something that really resonated with me: let's make ML uncool again. I feel like I just want to unclutter the airwaves.
This is the exact type of regarded comment that crypto trading bros bring to the discussions that I really enjoy.
I almost never comment on this sub, but every time someone compares AI to crypto I am reminded of this tweet by an OAI employee.
Turning into WallStreetBets?
That train departed ~3y ago and new choo-choo-chat arrived a year ago
Overtaken by the "I love science!" crowd and liberal arts "ho-hum"ers coming to scold
They’ve been replaced by uneducated Wikipedia experts.
OK, so full speculation: this project could be an implementation of Q-Learning (i.e. unsupervised reinforcement learning) on an internal GPT model. This would no doubt be an agent model.
Other evidence? The * implies a graph traversal algorithm, which obviously plays a huge role in RL exploration, but also GPT models are already doing their own graph traversal via beam search to do next token prediction.
Are they perhaps hooking up an RL trained model to replace their beam search?
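For readers unfamiliar with the term being speculated about: tabular Q-learning in its simplest form looks like the sketch below. The toy chain environment, optimistic initialization, and hyperparameters are all illustrative inventions for this example, not anything known about OpenAI's internal systems.

```python
import random

# Toy environment: a 1-D chain of states 0..5; reaching state 5 ends the
# episode with reward 1. Everything here is a made-up illustration.
N_STATES = 6
ACTIONS = (-1, +1)          # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Optimistic initialization (Q = 1.0 everywhere) encourages systematic
# exploration: untried actions look good until their value is updated down.
Q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap off the best next-state value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next * (not done) - Q[(s, a)])
        s = s2
```

After training, the greedy policy (argmax over `Q`) heads right from every non-terminal state, i.e. it has learned the shortest route to the reward.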
of Q-Learning (i.e. unsupervised reinforcement learning) on an internal GPT model.
Potential efficacy aside, imagine the scenario where those blabbermouths just eternally yap among each other, and that unbelievably boring wall of text is what brings about superintelligence :)
GPT models are already doing their own graph traversal via beam search to do next token prediction.
I don't think GPT is often used in conjunction with beam search, or is it?
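For context on that question: chat-tuned LLMs are usually decoded by sampling rather than beam search, but beam search is a standard supported option in common libraries (e.g. Hugging Face's `generate(num_beams=...)`). A toy version of the decoding loop, with a hypothetical hand-written distribution standing in for a model's logits:

```python
import math

# Toy beam search. `next_token_logprobs` is a hypothetical stand-in for a
# language model; real decoding would plug the model's logits in here.
def next_token_logprobs(seq):
    if seq and seq[-1] == "a":            # made-up rule: "b" tends to follow "a"
        probs = {"a": 0.1, "b": 0.8, "<eos>": 0.1}
    else:
        probs = {"a": 0.6, "b": 0.3, "<eos>": 0.1}
    return {t: math.log(p) for t, p in probs.items()}

def beam_search(beam_width=2, max_len=4):
    beams = [([], 0.0)]                   # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((seq, score))   # finished beams pass through
                continue
            for tok, lp in next_token_logprobs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # keep only the top-k partial hypotheses by total log-probability
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

best_seq, best_score = beam_search()[0]
```

With these toy probabilities the search settles on the alternating sequence `["a", "b", "a", "b"]`. Note this is plain breadth-limited search over the model's own distribution, not the reinforcement-learning-guided search the parent comment speculates about.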
Can someone help me understand: what does "smarter than humans" mean here? Don't those LLMs just read internet text written by humans?
This all screams bs.
OpenAI's hallucinations are a big problem.
Ayyayyay, the sourcing for this. The news must be desperate to cash in on the drama. You have all these anonymous people pretending they work at OAI. The last one said they had AGI internally. People used to do this with Google, with conspiracy theories that Google was hiding an AGI from everyone.
"Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company."
I would take this with a heavy grain of salt.
Paywalled snippets of https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern
https://pbs.twimg.com/media/F_lG1SmaEAAAYip.jpg
"A demo of the model circulated within OpenAI in recent weeks, and the pace of development alarmed some researchers focused on AI safety."
https://manifold.markets/ZviMowshowitz/is-the-reuters-story-about-openais
Purported Q* priors: https://twitter.com/mark_riedl/status/1727482512425820200 i.e. just https://www.microsoft.com/en-us/research/project/textworld/
Janus (of aidungeon fame) calling it: https://twitter.com/repligate/status/1676196989833289728
AID sort of ran an RL multi-agent system at scenario/world scale.
Also, this https://i.imgur.com/EHvVKBz.png Timestamp detail (UTC): https://i.imgur.com/EqfnryB.png
(more context https://old.reddit.com/r/singularity/comments/181oe7i/openai_made_an_ai_breakthrough_before_altman/kadm3uh/)
OpenAI has a history of hyping the hell out of their discoveries. Remember GPT-2, which they didn't want to release because it was "too powerful"? It turned out to be pretty bad, and they released GPT-3 anyway.
Right now, they are the absolute best, by far, so these kinds of leaks are quite credible.
Being the best at something that's not even close to AGI does not make them close to AGI.
100% This
I remember the hype around gpt-4 lol.
We cannot release the model because it’s too dangerous, unless you pay for it
Isn’t it clear that AGI took over OpenAI and is now moving all the pieces for world domination?
Why are people falling for this blatant PR spin?
Man what happened to this sub? So many replies are whacky half-baked conspiracies.
For sure, Altman created AGI behind closed doors, and a secret employee organization leaked it to the board, who then decided to orchestrate an elaborate fake firing of Altman to gain the attention of the world, in what ultimately amounts to an epic 4D-chess marketing ploy to... sell Q* subscriptions to the masses, who will be made irrelevant by it?
Okay.
In my opinion, some kind of AlphaZero to improve reasoning and agent performance for LLMs is kind of the obvious next step. If you throw enough engineering talent, ML research experience, and compute at the problem, I would expect an outcome that will be qualitatively different from standard Transformer-based LLMs.
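A rough sense of what "some kind of AlphaZero" means here: at each search node, AlphaZero-style methods balance a value estimate against a policy prior using a PUCT selection rule. A minimal sketch, with hypothetical numbers standing in for what an LLM-as-policy setup would supply:

```python
import math

# PUCT-style node selection, the core of AlphaZero-style tree search.
# q: mean value of the child so far; prior: policy probability of the move.
# The priors/values below are hypothetical stand-ins, not real model outputs.
def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    # Exploitation term (q) plus an exploration bonus that favors moves the
    # policy likes (high prior) but that have few visits so far.
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Three candidate continuations with equal value but different policy priors.
children = [
    {"q": 0.0, "prior": 0.6, "visits": 0},
    {"q": 0.0, "prior": 0.3, "visits": 0},
    {"q": 0.0, "prior": 0.1, "visits": 0},
]
parent_visits = 1
best = max(children,
           key=lambda c: puct_score(c["q"], c["prior"], parent_visits, c["visits"]))
```

With no value signal yet, selection simply follows the policy prior; as visits accumulate, the exploration bonus shrinks and observed values take over. In the speculated LLM setting, the "policy" would be the LLM's token or step probabilities and the "value" some learned judge of partial reasoning, but that mapping is this commenter's (and my) conjecture.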
"Search for superintelligence" sounds so romantic.
Like they're in the jungle looking in caves for any sign of AGI.
I am not saying AGI is impossible, but the arguments that we are close to achieving it sound more like wishful thinking.
A couple of questions/comments I have:
- People assume that the development is exponential or at least linear but that is not necessarily true - it depends on what is possible to do with the resources we have and the limitations of the physical world we live in.
- GPT-4 has the appearance of being intelligent rather than being intelligent. How will we be able to tell the difference? What will prevent us from being fooled in a similar way with future systems?
- Isn’t there an issue with using benchmarks that have been around for a while to measure the performance of AI systems? Are we not, perhaps unconsciously, improving the scores on these tests rather than improving the systems?
- Without understanding our own intelligence (or lack thereof), how are we going to understand AI?
- What is the goal with AI?
Rumor mongering mill at full speed.
A* Search Without Expansions: Learning Heuristic Functions with Deep Q-Networks
Furthermore, Q* search is up to 129 times faster and generates up to 1288 times fewer nodes than A* search. Finally, although obtaining admissible heuristic functions from deep neural networks is an ongoing area of research, we prove that Q* search is guaranteed to find a shortest path given a heuristic function that neither overestimates the cost of a shortest path nor underestimates the transition cost.
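For comparison with the quoted claims, here is plain A* on a toy grid. Per the paper, "Q* search" would replace the per-child heuristic evaluations below with a single Q-network forward pass on the parent node, avoiding child expansions entirely; this sketch just uses the admissible Manhattan-distance heuristic, and the grid and walls are invented for illustration.

```python
import heapq

# Toy 5x5 grid with a few walls; unit-cost moves in 4 directions.
GRID_W, GRID_H = 5, 5
WALLS = {(1, 1), (1, 2), (1, 3), (3, 1), (3, 2)}
START, GOAL = (0, 0), (4, 4)

def neighbors(pos):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in WALLS:
            yield (nx, ny)

def h(pos):
    # Manhattan distance never overestimates on a unit-cost grid, so it is
    # admissible and A* returns a shortest path (the guarantee the quoted
    # abstract extends to Q* search under the same condition).
    return abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])

def astar(start, goal):
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in neighbors(node):
            g2 = g + 1
            if g2 < best_g.get(nb, float("inf")):
                best_g[nb] = g2
                heapq.heappush(frontier, (g2 + h(nb), g2, nb, path + [nb]))
    return None

path = astar(START, GOAL)
```

On this grid the shortest path takes 8 moves (9 cells). The speedup claim in the abstract comes from amortizing heuristic evaluation: vanilla A* calls the (expensive, learned) heuristic once per child, while Q* search gets all child values from one network call on the parent.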