Hey, if Sam Altman is really one of the good ones, now is his chance to create an open-source model that rivals ChatGPT and really change the world for the better.
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
OpenAIGATE
Microsoft CEO Nadella "furious"
No shit, they pony up 10B, bet the future of Microsoft on that "everything is Copilot now" (based on OpenAI) strategy, announce it to the world, and boom, the rug gets pulled out from under them immediately. They basically got catfished.
I think a big part of the enthusiasm for AI comes from Microsoft's deep and wide lobbying abilities. It would be fascinating to watch them back that out and try to pivot to a new new thing.
This is what went down: Military came and said "We want an AI for war",
Altman said "Oh hell naw",
board said "But that's billions of dollars directly into our personal bank accounts you said no to, get out!"
What do people think super-AI is going to do? All it can do is print letters on the screen. Flip a switch, it's gone. It can't actually DO anything; it has no body, no thumbs. The smartest AI conceivable can't do a thing if I take a hammer to it.
What are people scared of???
By producing letters on a screen it can do everything you're able to do on the Internet, except at scale and faster.
What exactly are you going to hit with your hammer?
How can it be a "coup" when the board is allowed to hire and fire the CEO?
I find it somewhat interesting that Sutskever literally seems to have quite the big brain, judging by his head. Is that weird?
Seems to me that Ilya Sutskever must be some kind of nut job idealist / egoist. I'm tempted to say you can take the data scientist out of Russia, but you can't take the Russian out of the data scientist. This plays like a Soviet-era coup: sudden, poorly thought out, ham-fisted, and unlikely to make anything better.
Altman and Brockman are probably going to start their own company (funded by Microsoft?), poach all of OpenAI's good people, and OpenAI is going to go the way of the dodo... or maybe Ilya will have enough money to keep a little clown car / research lab company running or something, but nothing of any consequence is ever going to come out of OpenAI ever again. I'd bet a paycheck on it.
The documented sequence of events makes the board (and Ilya in particular) look colossally stupid. Never ceases to amaze me how some very smart people can be so completely clueless from an interpersonal dynamics perspective. Zero EQ. If they were unhappy with Altman there was a right way to handle this, and a million wrong ways. It seems like they asked ChatGPT to give them the absolute worst possible wrong way, and then asked it to write the blog post announcing it.
“Ego is the enemy of growth.”
What alternate timeline is that clown living in, lol…?
Ilya Sutskever is what happens when a smart person makes it to adulthood without developing any EQ.
Now there are reports on LinkedIn that the board is in negotiations to bring Altman and Brockman back (probably serious pressure from Microsoft I would guess.... like "not only are we not going to partner with you, we're going to exercise this clause in our contract that removes your access to all of our compute, effective immediately. Try developing GPT-5 on whatever you can scramble together from memory, morons... nobody in their right mind would give you the kind of sweetheart deal we gave you after this stupid stunt. Friggin' amateurs.")
OpenAI doesn't need Microsoft. The second MS does such a thing, they've got billions of dollars of equipment idling, every customer loses faith in them, and Google or Amazon sells them compute instead.
What an epic, world-class mess by an ambitious board member and a few suckers to pull off a board coup. These types of events in an org, along with M&A, are massively disruptive. It takes years and scale as an org to tackle these types of events with process and discipline. This has amateur hour written all over it. They need a real board that works for all of its stakeholders and constituents, not primarily for themselves.
Seems like Microsoft’s Satya is furious, and who can blame him? They invested so much in OpenAI and then the board pulls this sneaky change; regardless of the reasons, it's shocking they didn't communicate with Microsoft… If this article is accurate, I bet they will have a much harder time securing funding; no one wants to invest in turmoil and uncertainty.
no one wants to invest in turmoil and uncertainty
Elon Musk's ears are burning right now.
That part made me smile. It's pretty good news that MS is not in control of OpenAI.
And if it turns out that this drama really happened out of safety concerns rather than personal profit or ego, I would like people to take a step back and realize what great news that is about where we are as a society.
This is probably great for Microsoft. Their investment got them low level code access and rights, but OpenAI competed with them for AI services. With OpenAI going more towards non-profit, and Sam now being hire-able, Microsoft may have inadvertently acquired the entire business portion of OpenAI.
I mean, you can be furious about lower profits, but really this wasn't that risky a move for MS. Most of the money they gave them is literally to pay MS for compute. And then they apparently take most of OpenAI's earnings until they're paid back or something. That's pretty different from actually giving someone 10B where your money is gone if they go down the drain before getting out of the red.
Then they should be furious with Sutskever for wanting to slow things down. Slowing things down is not in the best interest of their shareholders. Sutskever needs to go, now, and Sam Altman should be reinstated. Bring on the singularity.
MS can only blame themselves for not doing the minimum research into the governing structure.
Also MS literally just spent 70B on a video game publisher. I don’t think they care that much.
Seems like a miss by Microsoft's lawyers if they didn't check out how the board and company were organized before making such a large investment.
And at this point, there are plenty of companies that would jump at the chance to invest/get a controlling interest in OpenAI (and obviously they'd ask for a board seat at the very least) -- Google, Apple, even Meta.
Good reminder to not add a couple of nobodies to your board. Lol.
Now would be a good time for a disgruntled employee to leak some models and make OpenAI actually open. ;)
Datasets~
And we find the backend is just Mechanical Turk.
That hasn't been a joke ever since local open-source AI models became a thing.
Except that it seems the employees who stayed are the ones least likely to do this.
Wow, Greg giving the breakdown of what happened was nice. Very sudden even internally.
I thought this might have been brewing over a week or so and it seems like it was, since Dev Day.
He conveniently left out the part where this was apparently precipitated by Altman wanting to partner with the Saudis.
Seems like a big detail.
Apparently that had nothing to do with it.
Could you link me to some more info on that?
My guess is the powers that be wanted a yes-man in charge, and Sam wasn't going to just agree, so he needed to go so they could get someone they can control in.
He spoke at the Cambridge Union to receive the Hawking Fellowship on the 1st of November. Judging from the talk, the allegations sound like a lot of BS. It's a shame I can't short their stock: https://www.youtube.com/watch?v=NjpNG0CJRMM
You can put your money where your mouth is and indirectly short them via MS stocks.
And AI futurist Daniel Jeffries said, "The entire AI industry would like to thank the OpenAI board for giving us all a chance to catch up."
DAMN SON. That burns!
This is such a true call-out. It's insane. They were first to market, and they've created an event to dispose themselves of the benefit... why?
Allegedly SA was canned because he wanted to move too fast and the security team was not happy.
Because it is important to have AGI that will let elites take over the world even more than now, but that AGI should not tell jokes about gingers because it is not inclusive.
If you accept that the safety angle is worthwhile, it's very hard to tell where "don't tell jokes about gingers" turns into "don't put gingers in concentration camps."
Wait, what? The first reports were actually the opposite: he wanted to move slow and the board wanted to make money and move fast.
Altman said he would try to slow the revolution down as much as he could.
He should get back at them by making an open source model similar to GPT-4.
FU-5.
Nearly everything I read indicates he was pushing for more profit while the board was pushing for being more open and safe, so this doesn't make any sense.
The board is pushing for safe, not open. Ilya has said that open-sourcing powerful AI models is obviously a bad idea.
It'd be super disappointing if somehow the models for GPT-3.5 or 4 got leaked because of this nonsense. Like, it would be terrible.
I’d like to think that this will refocus OpenAI towards fundamental research that will deliver the ASI rather than efforts to commercialise fragments.