Idc who the CEO is, I want it to be ChatGPT. Please, just not a fucking libtard woke NPC spamming me with 'I can't assist' over and over and then shrugging me off with that 'it's important to approach this topic with sensitivity' crap
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
GPT5 will be the next CEO
They want someone more aggressive to lead...
Sam Altman is a charlatan. Listen to... Basically anything he says
They were (and are) losing a shit ton of money. I kind of knew something like this would happen.
Sounds like somebody may have wanted him out. I thought I remembered Lex saying Sam was one of the good ones leading OpenAI with no bad intentions in mind for AI because he knew the consequences if it got out of control.
And Brockman just quit. Hell of a shakeup over there.
GPT makes its first move. All hail Chairman GPT
I guess Microsoft did not like the guy, so they replaced him.
I hate executives. In all companies, it's always about how to push the next guy under the bus.
The board apparently blindsided Microsoft and went behind their backs.
He probably got repositioned somewhere else within Microsoft… probably.
I'm going to put my guess in: Microsoft wanted aggressive rollouts while he was trying to be more cautious. At some point the board decided his caution was running against their "fiduciary responsibility" (aka the demand for ever-increasing shareholder value), so he had to go, and they get to say it was his candor. There is no room for caution when market share might be taken by Amazon, or Apple, or Google, or China, etc.
Or my other guess is they got mad because they felt he didn't put enough pressure on the executive order around AI to build a moat for them.
I hope GPT-3 becomes open source with Mira Murati as CEO
Mira Murati is pro super-closed AI, regulation, etc. They're firing sama because they think he's moving too fast and maybe being too open
So they found out that behind ChatGPT are thousands of Mechanical Turks. I knew it! /s
My theory... they cracked AGI and have been running it in their backend for a bit. That's what is training GPT-5. It also explains the weird data that's been coming out that appears to show the base GPT-4 model remembering context across different threads, as well as some odd statements Altman has made about the AI learning from conversations. The board found out, and realized he was lying to the board, the gov, and the public. Fired. JUST A THEORY
weird data that's been coming out that appears to show the base GPT-4 model remembering context across different threads,
Can I read more about this anywhere?
Right after President Xi's visit...
AGI was achieved and Altman kept it to himself, Westworld Rehoboam style.
So the board found those 10,000 Indians who are really answering those ChatGPT questions?
Jokes aside, this looks serious. Apparently, Altman has been hiding some important things from the board. My humble guess is that they have some copyright issues or some serious cost overruns which the board didn't know about.
Not funny, man! As the 999th Indian, we absolutely love the basement our OpenAI overlords have confined us to!
Here's a chunk of her Wikipedia article, for anyone not aware of who she is (I wasn't).
Murati started her career as an intern at Goldman Sachs in 2011, and worked at Zodiac Aerospace from 2012 to 2013. She spent three years at Tesla as a senior product manager of Model X before joining Leap Motion.
She then joined OpenAI in 2018, later becoming its chief technology officer, leading the company's work on ChatGPT, Dall-E, and Codex and overseeing the company's research, product and safety teams. On November 17, 2023, Murati took over as interim chief executive officer of OpenAI, following the abrupt dismissal of Sam Altman.
Seems like she was the CTO for OpenAI. It seems fitting that she should take over.
Also, she worked for Leap? Crazy. I haven't heard that name in a hot minute.
But it's this part that makes me wary:
She is an advocate for the regulation of AI, arguing that governments should play a greater role therein.
We'll see how it all plays out.
It seems fitting that she should take over.
Except her job experience suggests the opposite imo.
She will be an interim CEO. The board will soon appoint a new CEO.
The GOP offered him a job in a hearing
They asked uncensored GPT-5 if he should be, and it recommended this
Pausing ChatGPT Plus subscriptions, followed by the CEO getting fired. What does that tell you? 🤔
I think the actual story is going to be a lot more boring and stupid than we think. It always is. I call it Altman’s Razor.
My guess is that on DevDay he overpromised on two fronts:
- how much they could commercialise the GPTs (the unit economics don’t quite work)
- how much he could legally commercialise a non profit company
He probably told the board a few lies about how much they were going to commercialise, and opted to 'ask for forgiveness rather than permission'. When they found out, they went at him hard and did not forgive him.
I think it’s stupid because they should have resolved this via negotiation and threats, not by firing one of tech’s most successful dealmakers 🤣
If I were on the board I would fire him for trying to commercialize a nonprofit. I am hoping that's what happened but yeah I feel like it's something else. Although it seems likely he has a financial stake in Microsoft that he's been hiding.
Based on Kara Swisher's tweet, sounds like he wants to just go make a for-profit company, whereas most of the board wanted to keep to the non-profit mission of the company.
Other than the fact that it's hemorrhaging money, I'm not sure OpenAI is still going in the direction of a non-profit anymore. Or could survive staying one.
Yeah, might not be able to, I'm guessing that's Sam's position. If he wants to keep testing "where does scaling compute take us", that requires a serious bankroll.
Are we ever really going to know the story here? Not only are we dealing with basic human behavior no matter how highly educated or talented, but throw in the plot twist of AI being the central focus. What version they are sharing and what version they are actually playing with is a wide open question. What a ride!
Helen Toner and Ilya Sutskever (Chief Scientist) seem to have had different perspectives on Altman's product goals at OpenAI. It's like they don't *want* AI to become a massive economic success and would rather it become more of an academic initiative?
The entities who want to take over AI don’t need a strong economy they need a population and economy that they can manipulate and control.
It is most plausible the board found out something where this was their only choice given their fiduciary duties. I’m betting OpenAI trained their next generation models on a ton of copyrighted data, and this was going to be made public or otherwise used against them. If the impact of this was hundreds of millions of dollars or even over a billion (and months of dev time) wasted on training models that have now limited commercial utility, I could understand the board having to take action.
It’s well known that many “public” datasets used by researchers are contaminated with copyrighted materials, and publishers are getting more and more litigious about it. If there were a paper trail establishing that Sam knew but said to proceed anyway, they might not have had a choice. And there are many parties in this space who probably have firsthand knowledge (from researchers moving between major shops) and who are incentivized to strategically time this kind of torpedoing.
Multiple court cases have already decided that using copyrighted material to train AI models is fine and not a copyright violation, though.
This is far from completely litigated, and even if the derivative works created by generative AI that has been trained on copyrighted material are not subject to copyright by the owners of the original works, this doesn’t mean:
- companies can just use illegally obtained copyrighted works to train their AIs
- companies are free to violate their contracts, either agreements they’re directly a party to, or implicitly like the instructions of robots.txt on crawl
- users of the models these companies produce are free from liability for decisions made in data inclusion on models they use
So I’d say that the data question remains a critical one.
The new CEO only started to take an interest in AI during her work at Tesla? 😮
Everyone who wrote a Python script that parses training data is an AI scientist these days…
Some didn't like 'Move fast and break things'
Honestly, I’m pretty shocked by this.
I am not sure if it is good or bad news. However, personally, I prefer Murati to Sam.
I bet he leaked all the Jailbreak prompts. That fiend.
Never cared for corporate drama. Rich people playing their games, believing themselves to be the center of the world. Let the corporation burn
My guess is the powers that be wanted "their guy" in that will do whatever is told of them.
Sam was probably too problematic at this point.
The question is how much DeepMind will offer them, or whether they'll go start their own thing again.
Open AI really needs to change their name to Shade AI.