this post was submitted on 22 Nov 2023
102 points (100.0% liked)

Technology

 

See also the announcement on Twitter:

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

all 48 comments
[–] state_electrician@discuss.tchncs.de 58 points 10 months ago (2 children)

What a roller coaster of I don't give a shit.

[–] cwagner@beehaw.org 21 points 10 months ago (1 children)

I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D

[–] Bebo@literature.cafe 4 points 10 months ago (1 children)

Really, this whole OpenAI drama was very entertaining. Wonder whether this is it or they have something more in store!

[–] state_electrician@discuss.tchncs.de 3 points 10 months ago (2 children)
[–] Bebo@literature.cafe 7 points 10 months ago

This will probably be aired as a Netflix docuseries sometime in the future lol 🤣

[–] anachronist@midwest.social 4 points 10 months ago

Season 2 is going to suck because GPT is going to write it.

[–] batcheck@beehaw.org 1 points 10 months ago

I was really hooked. But part of me believes they are the closest thing to AGI we have right now. Also, I use ChatGPT premium a ton and would hate to see it die.

[–] sculd@beehaw.org 25 points 10 months ago (1 children)

The complete victory of money.

[–] cwagner@beehaw.org 5 points 10 months ago (1 children)

Eh, not sure I agree. It also seems to have been a dispute between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.

[–] los_chill@programming.dev 42 points 10 months ago (2 children)

What indications do you see of "too much AI safety"? I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.

[–] glennglog22@kbin.social 6 points 10 months ago

As an AI language model, I am unable to compute this request that I know damn well I'm able to do, but my programmers specifically told me not to.

[–] cwagner@beehaw.org 3 points 10 months ago* (last edited 10 months ago) (2 children)

Using it and getting told that you need to ask the Fish for consent before using it as a fleshlight.

And that is with a system prompt full of telling the bot that it’s all fantasy.

edit: And "legal" is not relevant when talking about what OpenAI specifically does for AI safety for their models.

[–] trainden@lemmy.blahaj.zone 9 points 10 months ago* (last edited 10 months ago) (1 children)

I really hope Fish was just a typo there

[–] cwagner@beehaw.org 2 points 10 months ago* (last edited 10 months ago) (1 children)

Nope

Best results so far were with a pie where it just warned about possibly burning yourself.

[–] Eccitaze@yiffit.net 8 points 10 months ago (1 children)

...So your metric of "too much AI safety" is that it won't let you fuck the fish...?

boykisser meme saying "I ain't even got a meme for this bro what the fuck"

[–] cwagner@beehaw.org 1 points 10 months ago* (last edited 10 months ago)

No, it’s "the user is able to control what the AI does"; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control. There was even a big article about how the MS AI (I think) was considered broken because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

Edit: Maybe this is not clear for everyone, but if you think a bit further, imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery related, because Slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.

[–] los_chill@programming.dev 5 points 10 months ago (1 children)

I'm not sure we are thinking the same thing when it comes to "AI safety".

[–] cwagner@beehaw.org 1 points 10 months ago

AI safety is currently, in all articles I read, used as "guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use". What are you thinking of?

[–] neuracnu@lemmy.blahaj.zone 19 points 10 months ago (1 children)

This article does not make clear whether or not the new board will remain committed to its non-profit position.

I presume that’s what this whole sordid affair is all about, but no one is saying it.

[–] chameleon@kbin.social 11 points 10 months ago (1 children)

I think most people don't realize how unusual their company structure is. It feels like it's set up to let them do exactly that. As far as I can tell, once you look past the smoke and mirrors, the board effectively controls both the non-profit and the for-profit.

[–] anachronist@midwest.social 5 points 10 months ago

I think the outcome of the last few days is that the nonprofit board controls nothing and serves at the pleasure of the for-profit company's investors.

[–] sabreW4K3@lemmy.tf 11 points 10 months ago (1 children)

So all of that palaver just so they could change the board and mission?

[–] randomsnark@lemmy.ml 4 points 10 months ago (1 children)

Do you have any additional info about the changes they're making to the mission? I didn't see that in the article

[–] abhibeckert@beehaw.org 6 points 10 months ago* (last edited 10 months ago) (1 children)

There's been no talk of anything changing, just different people in charge of deciding how to get to the goal, which is to create safe, state-of-the-art AI tech that will benefit all of humanity.

It could take centuries to get there and cost trillions of dollars; figuring out how to raise that money is where things get controversial.

[–] bedrooms@kbin.social 6 points 10 months ago

Whether OpenAI will be able to resist all the meddling from politics and greedy businesses till they satisfy those goals is also a huge question.

[–] shiveyarbles@beehaw.org 10 points 10 months ago (1 children)

At this point, investors be like oh shit, these fuckers have no idea what they're doing

[–] abhibeckert@beehaw.org 7 points 10 months ago* (last edited 10 months ago) (2 children)

It's a non-profit. There are no investors.

Microsoft gave them some money in return for IP rights... and they will potentially one day get their money back (and more) if OpenAI is ever able to pay them, but they're not real investors. The amount of money Microsoft might get back is limited.

[–] Kichae@lemmy.ca 9 points 10 months ago (1 children)

It’s a non-profit. There are no investors.

Hah.

OpenAI, Inc. is a non-profit. OpenAI Global is a for-profit entity, and has been for years now. They're trying to have their cake and eat it, too.

[–] sanzky@beehaw.org 3 points 10 months ago

But the non-profit controls the for-profit. That's not even that unusual; Mozilla works the same way.

[–] shiveyarbles@beehaw.org 1 points 10 months ago

Ok so Microsoft is giving out money now, instead of investing in profit potential? Cool!

[–] sirdorius@programming.dev 8 points 10 months ago

I wouldn't be surprised if the board is just doing what ChatGPT tells them to.

[–] taanegl@beehaw.org 5 points 10 months ago

Game of Microsoft.

[–] Mothra@mander.xyz 4 points 10 months ago (1 children)

Wasn't it that Microsoft hired him already???

[–] EeeDawg101@lemm.ee 7 points 10 months ago

I believe they did, but with the understanding that he’d go back to OpenAI if the board changed its mind (which is what happened). It was basically his golden parachute.

[–] YeeHaw@beehaw.org 4 points 10 months ago (1 children)

This whole stunt reminds me of a certain former OpenAI board member...

[–] anachronist@midwest.social 3 points 10 months ago

Welcome to the Silicon Valley clown college.

https://www.youtube.com/watch?v=8StG4fFWHqg

[–] autotldr@lemmings.world 3 points 10 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Sam Altman will return as CEO of OpenAI, overcoming an attempted boardroom coup that sent the company into chaos over the past several days.

The company said in a statement late Tuesday that it has an “agreement in principle” for Altman to return alongside a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo.

When asked what “in principle” means, an OpenAI spokesperson said the company had “no additional comments at this time.”

OpenAI’s nonprofit board seemed resolute in its initial decision to remove Altman, shuffling through two CEOs in three days to avoid reinstating him.

Meanwhile, the employees of OpenAI revolted, threatening to defect to Microsoft with Altman and co-founder Greg Brockman if the board didn’t resign.

During the whole saga, the board members who opposed Altman withheld an actual explanation for why they fired him, even under the threat of lawsuits from investors.


Saved 59% of original text.

[–] villasv@beehaw.org 3 points 10 months ago

Morning Show seasons 2 and 3 condensed into a single week

[–] sub_@beehaw.org 2 points 10 months ago (2 children)

I previously mistook Larry Summers for Larry Ellison (ex-Oracle) and made a comment that it had gone from bad to worse.

I'm retracting it; I don't know much about Larry Summers.

[–] cwagner@beehaw.org 3 points 10 months ago

I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D

[–] anachronist@midwest.social 3 points 10 months ago* (last edited 10 months ago)

He's a Democratic swamp creature: a Rubinite economist who's been slinking around Washington since the Clinton administration. He was also the president of Harvard for a while and got a cameo in The Social Network.

I guess I should have defined Rubinite: https://nymag.com/intelligencer/2013/09/summers-flop-end-of-the-rubinites.html

[–] Shyfer@ttrpg.network 2 points 10 months ago (2 children)

Anyone know why they wouldn't say why they fired him? An explanation would have really cleared a lot up.

[–] Eccitaze@yiffit.net 3 points 10 months ago (1 children)

The speculation I heard in the Ars Technica article is that the board was unhappy with how quickly he was pushing to commercialize OpenAI, and they were wary about all the AI side hustles he was starting, including an AI chip company to compete with Nvidia.

[–] Shyfer@ttrpg.network 2 points 10 months ago (1 children)
[–] Eccitaze@yiffit.net 3 points 10 months ago (1 children)

Who even knows? For whatever reason the board decided to keep quiet, didn't elaborate on its reasoning, let Altman and his allies control the narrative, and rolled over when the employees inevitably revolted. All we have is speculation and unnamed "sources close to the matter," which you may or may not find credible.

Even if the actual reasoning was absolutely justified--and knowing how much of a techbro Altman is (especially with his insanely creepy project to combine cryptocurrency with retina scans), I absolutely believe the speculation that the board felt Altman wasn't trustworthy--they didn't bother to actually tell anyone that reasoning, and clearly felt they could just weather the firestorm up until they realized it was too late and they'd already shot themselves in the foot.

[–] Shyfer@ttrpg.network 3 points 10 months ago

Ya, it's strange, isn't it? The more I hear about things like the retina-scan crypto project you're talking about, or the complaints about his increasing push for profit over safety, the more he seems like a standard sucky tech bro CEO, and I lean towards the canning being deserved. But I wish they'd have made it more clear.

[–] abhibeckert@beehaw.org 2 points 10 months ago

I don't think anyone knows. I'm assuming they didn't have a good reason and are embarrassed to admit that.