this post was submitted on 17 Nov 2023
1 points (100.0% liked)

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 1 year ago
[–] Professional-State68@alien.top 1 points 11 months ago

Idc who the CEO is, I just want ChatGPT to not be a fucking libtard woke NPC spamming me with 'I can't assist' over and over and then shrugging me off with that 'it's important to approach this topic with sensitivity' crap

[–] CyberNativeAI@alien.top 1 points 11 months ago

GPT5 will be the next CEO

[–] emimix@alien.top 1 points 11 months ago

They want someone more aggressive to lead...

[–] UnicornDistribution@alien.top 1 points 11 months ago

Sam Altman is a charlatan. Listen to... Basically anything he says

[–] AapoL092@alien.top 1 points 11 months ago

They were (and are) losing a shit ton of money. I kind of knew something like this would happen.

[–] nVideuh@alien.top 1 points 11 months ago

Sounds like somebody may have wanted him out. I thought I remembered Lex saying Sam was one of the good ones leading OpenAI with no bad intentions in mind for AI because he knew the consequences if it got out of control.

[–] redredditt@alien.top 1 points 11 months ago

GPT makes its first move. All hail Chairman GPT

[–] No-Activity-4824@alien.top 1 points 11 months ago (2 children)

I guess Microsoft did not like the guy, so they replaced him.

I hate executives. In all companies, it is always about how to push the next guy under the bus.

[–] hunted7fold@alien.top 1 points 11 months ago

The board apparently blindsided Microsoft and went behind their backs

[–] Majestical-psyche@alien.top 1 points 11 months ago

He probably got repositioned somewhere else within Microsoft… probably.

[–] dteck04@alien.top 1 points 11 months ago

My first guess is that Microsoft wanted aggressive rollouts while he was trying to be more cautious. At some point the board decided his caution was running against their "fiduciary responsibility" (aka the demand for ever-increasing shareholder value), so he had to go, and they get to say it was his candor. There is no room for caution when market share might be taken by Amazon, or Apple, or Google, or China, etc.

Or my other guess is they got mad because they felt he didn't put enough pressure on the executive order around AI to build a moat for them.

[–] AntoItaly@alien.top 1 points 11 months ago (1 children)

I hope GPT-3 becomes opensource with Mira Murati as CEO

[–] hunted7fold@alien.top 1 points 11 months ago

Mira Murati is pro super-closed AI, regulation, etc. They're firing sama because they think he's moving too fast and maybe being too open

[–] Redararis@alien.top 1 points 11 months ago

So they found out that behind chatgpt are thousands of mechanical turks, I knew it! /s

[–] aallsbury@alien.top 1 points 11 months ago (1 children)

My theory... they cracked AGI and have been running it on their backend for a while. That's what is training GPT-5. It also explains weird data that's been coming out that appears to show the base GPT-4 model remembering context across different threads, as well as some odd statements Altman has made about the AI learning from conversations. The board found out, and realized he was lying to the board, the gov, and the public. Fired. JUST A THEORY

[–] nxqv@alien.top 1 points 11 months ago

weird data that's been coming out that appears to show the base GPT-4 model remembering context across different threads

Can I read more about this anywhere?

[–] herozorro@alien.top 1 points 11 months ago

Right after President Xi visit....

[–] MammothInvestment@alien.top 1 points 11 months ago

AGI was achieved and Altman kept it to himself Westworld Rehoboam style .

[–] Geejay-101@alien.top 1 points 11 months ago (2 children)

So the board found those 10000 Indians who are really answering those ChatGPT questions?

Jokes aside, this looks serious. Apparently, Altman has been hiding some important things from the board. My humble guess is that they have some copyright issues or some serious cost overruns which the board didn't know about.

[–] laveshnk@alien.top 1 points 11 months ago

Not funny, man! As the 999th Indian, we absolutely love the basement our OpenAI overlords have confined us to!

[–] remghoost7@alien.top 1 points 11 months ago (1 children)

Here's a chunk of her Wikipedia article, for anyone not aware of who she is (I wasn't).

Murati started her career as an intern at Goldman Sachs in 2011, and worked at Zodiac Aerospace from 2012 to 2013. She spent three years at Tesla as a senior product manager of Model X before joining Leap Motion.

She then joined OpenAI in 2018, later becoming its chief technology officer, leading the company's work on ChatGPT, Dall-E, and Codex and overseeing the company's research, product and safety teams. On November 17, 2023, Murati took over as interim chief executive officer of OpenAI, following the abrupt dismissal of Sam Altman.

Seems like she was the CTO for OpenAI. It seems fitting that she should take over.

Also, she worked for Leap? Crazy. I haven't heard that name in a hot minute.

But it's this part that makes me wary:

She is an advocate for the regulation of AI, arguing that governments should play a greater role therein.

We'll see how it all plays out.

[–] koenafyr@alien.top 1 points 11 months ago (1 children)

It seems fitting that she should take over.

Except her job experience suggests the opposite, imo.

[–] imagine1149@alien.top 1 points 11 months ago

She will be an interim CEO. The board will soon appoint a new CEO

[–] AutomaticDriver5882@alien.top 1 points 11 months ago

GOP offered him a job in a hearing

[–] AutomaticDriver5882@alien.top 1 points 11 months ago

They asked uncensored GPT 5 if he should be and it recommended this

[–] akbbiswas@alien.top 1 points 11 months ago

Pausing ChatGPT Plus subscriptions, followed by the CEO getting fired. What does that tell you? 🤔

[–] amemingfullife@alien.top 1 points 11 months ago (2 children)

I think the actual story is going to be a lot more boring and stupid than we think. It always is. I call it Altman’s Razor.

My guess is that on devday he over promised on two fronts

  1. how much they could commercialise the GPTs (the unit economics don’t quite work)
  2. how much he could legally commercialise a non profit company

He probably told the board a few lies about how much they were going to commercialise and opted to 'ask for forgiveness rather than permission'. When they found out, they went at him hard and did not forgive him.

I think it’s stupid because they should have resolved this via negotiation and threats, not by firing one of tech’s most successful dealmakers 🤣

[–] Ansible32@alien.top 1 points 11 months ago

If I were on the board I would fire him for trying to commercialize a nonprofit. I am hoping that's what happened but yeah I feel like it's something else. Although it seems likely he has a financial stake in Microsoft that he's been hiding.

[–] prestodigitarium@alien.top 1 points 11 months ago (1 children)

Based on Kara Swisher's tweet, sounds like he wants to just go make a for-profit company, whereas most of the board wanted to keep to the non-profit mission of the company.

[–] shannister@alien.top 1 points 11 months ago (1 children)

Other than the fact that it's hemorrhaging money, I'm not sure OpenAI is still going in the direction of a non-profit anymore. Or could survive staying one.

[–] prestodigitarium@alien.top 1 points 11 months ago

Yeah, might not be able to, I'm guessing that's Sam's position. If he wants to keep testing "where does scaling compute take us", that requires a serious bankroll.

[–] agencyofchange@alien.top 1 points 11 months ago

Are we ever really going to know the story here? Not only are we dealing with basic human behavior no matter how highly educated or talented, but throw in the plot twist of AI being the central focus. What version they are sharing and what version they are actually playing with is a wide open question. What a ride!

[–] adamwintle@alien.top 1 points 11 months ago (1 children)

Helen Toner and Ilya Sutskever (Chief Scientist) seem to have had different perspectives on Altman's product goals at OpenAI. It's like they don't *want* AI to become a massive economic success and would rather it become more of an academic initiative?

[–] Calamero@alien.top 1 points 11 months ago

The entities who want to take over AI don’t need a strong economy they need a population and economy that they can manipulate and control.

[–] wryso@alien.top 1 points 11 months ago (1 children)

It is most plausible the board found out something where this was their only choice given their fiduciary duties. I’m betting OpenAI trained their next generation models on a ton of copyrighted data, and this was going to be made public or otherwise used against them. If the impact of this was hundreds of millions of dollars or even over a billion (and months of dev time) wasted on training models that have now limited commercial utility, I could understand the board having to take action.

It’s well known that many “public” datasets used by researchers are contaminated with copyrighted materials, and publishers are getting more and more litigious about it. If there were a paper trail establishing that Sam knew but said to proceed anyway, they might not have had a choice. And there are many parties in this space who probably have firsthand knowledge (from researchers moving between major shops) and who are incentivized to strategically time this kind of torpedoing.

[–] cuyler72@alien.top 1 points 11 months ago (1 children)

It's already been decided in multiple court cases that using copyrighted material in AI models is fine and not subject to copyright, though.

[–] wryso@alien.top 1 points 11 months ago

This is far from completely litigated, and even if the derivative works created by generative AI that has been trained on copyrighted material are not subject to copyright by the owners of the original works, this doesn’t mean:

  • companies can just use illegally obtained copyrighted works to train their AIs
  • companies are free to violate their contracts, either agreements they’re directly a party to, or implicitly like the instructions of robots.txt on crawl
  • users of the models these companies produce are free from liability for decisions made in data inclusion on models they use

So I’d say that the data question remains a critical one.

[–] unknown_history_fact@alien.top 1 points 11 months ago (1 children)

The new CEO only started taking an interest in AI during her work at Tesla? 😮

[–] Calamero@alien.top 1 points 11 months ago

Everyone who wrote a python script that parses training data is an AI scientist these days….

[–] ashutrv@alien.top 1 points 11 months ago

Some didn't like 'Move fast break things'

[–] jacobwlyman@alien.top 1 points 11 months ago

Honestly, I’m pretty shocked by this.

[–] jThaiLB@alien.top 1 points 11 months ago

I am not sure whether it is good or bad news. However, personally, I prefer Murati to Sam.

[–] DrBearJ3w@alien.top 1 points 11 months ago

I bet he leaked all the Jailbreak prompts. That fiend.

[–] CulturedNiichan@alien.top 1 points 11 months ago

Never cared for corporate drama. Rich people playing their games, believing themselves to be the center of the world. Let the corporation burn

[–] parasocks@alien.top 1 points 11 months ago

My guess is the powers that be wanted "their guy" in that will do whatever is told of them.

Sam was probably too problematic at this point.

[–] lazazael@alien.top 1 points 11 months ago

The question is how much DeepMind will offer him, or whether he'll go and start his own thing again

[–] race2tb@alien.top 1 points 11 months ago

Open AI really needs to change their name to Shade AI.
