this post was submitted on 20 Nov 2023

Machine Learning

1 readers
1 users here now

Community Rules:

founded 1 year ago
MODERATORS
 

Source: https://blogs.microsoft.com/blog/2023/11/19/a-statement-from-microsoft-chairman-and-ceo-satya-nadella/

We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.

News article covering the situation: https://www.theverge.com/2023/11/20/23968829/microsoft-hires-sam-altman-greg-brockman-employees-openai

Altman’s Microsoft hiring comes just hours after negotiations with OpenAI’s board failed to bring him back as OpenAI CEO. Instead, former Twitch CEO and co-founder Emmett Shear has been named as interim CEO.

Altman had been negotiating to return as OpenAI CEO, but OpenAI’s four-person board refused to step down and let him return.

[–] ReasonablyBadass@alien.top 1 points 11 months ago (2 children)

This has damaged AGI safety research massively.

If OpenAI goes too slow, others will overtake it.

Other companies will look at any safety oriented researchers as potential "traitors".

If GPT is shuttered, people will turn to more ruthless competitors.

And last, they may even turn to open source directly, massively accelerating research there.

[–] visarga@alien.top 1 points 11 months ago

On top of that, many OAI people will leave, spreading their inside knowledge to other companies. All the secrets will be out.

But the irony is that AI doom fears triggered this avalanche, hurting AI safety in the name of safety. And if it wasn't fear, it was greed, which is no better for our risk level.

[–] tripple13@alien.top 1 points 11 months ago (1 children)

AGI safety research is the root cause of this mess.

Ignorant people wanting to steer the course of progress for their own interests.

And then people like you who believe them.