this post was submitted on 23 Nov 2023
Machine Learning
I am slightly confused.
So, Sam Altman and colleagues discovered a very powerful thing called Q*. This would make OpenAI very powerful and make the board a lot of money.
So why did this cause the board to fire him?
From the article, it seemed like the board was afraid of Q* and fired him to stop it from being released without proper safety measures.
Could someone please help clarify this?
Thanks.
It’s complicated. The board at OpenAI is (or was) focused on AI safety and is not composed entirely of investors. Its goal was not to maximize profits.
I forget the exact phrasing, but the board said it fired Altman for not being completely honest with them. Based on the wording of the board’s rationale, it seems likely that Altman was not forthright about the capabilities of this breakthrough, possibly because the board would have halted its development out of safety concerns.