this post was submitted on 17 Nov 2023

Technology

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

That's wild. Given the abruptness and his profile, I was thinking it must be an improper conduct investigation. But either way, I hope we get more details.

[–] sculd@beehaw.org 17 points 11 months ago

Lol. They finally realized that Altman brings more bad press because of his association with crypto and his weird views on things.

I doubt the successor will be a good person, but hopefully they'll be a less creepy one.

[–] scrubbles@poptalk.scrubbles.tech 15 points 11 months ago

Damn, time for wild thoughts as to why. I wonder who the bad guy is here. Did Sam want to focus on profit? Or does the board, and they're hiding behind that? I have no idea.

Still, this is a scene straight out of Silicon Valley: the founder being voted out of their own company.

[–] brothershamus@kbin.social 13 points 11 months ago (4 children)

They found out he raped his 4 year old sister and kept doing it well into her teens?

[–] highsight@programming.dev 17 points 11 months ago

You got a source there for that, buddy?

[–] k_rol@lemmy.ca 6 points 11 months ago
[–] dannym@lemmy.escapebigtech.info 5 points 11 months ago* (last edited 11 months ago)

To be fair, the allegations haven't been proven and allegedly he was 13 at the time... not that it makes it any better, but context matters

[–] cwagner@beehaw.org 4 points 11 months ago

Maybe don’t sling mud like you are Fox News, even if you are a fan.

[–] sabreW4K3@lemmy.tf 13 points 11 months ago (2 children)

They make it sound like all the for-profit stuff was Sam Altman's doing and they just wanna make cool tech.

[–] intensely_human@lemm.ee 13 points 11 months ago

OpenAI’s original mission was extremely serious: to ensure the wide proliferation of AI so that the ecosystem would be multipolar instead of monopolar, forcing AIs to learn to play nice via parity with each other.

I was amazed that an organization existed which recognized this hard to swallow but ultra important fact.

[–] snowe@programming.dev 3 points 11 months ago

The board’s fiduciary duty was not to profit. They’re required by charter to make sure the company advances AI safely. Four of the board members have no investment in the company at all.

[–] furzegulo@lemmy.dbzer0.com 12 points 11 months ago

now he can fully focus on the magic eyeball money

[–] devz0r@kbin.social 11 points 11 months ago

He’s sneaking out with the AGI in his pocket and gonna become a supervillain.

[–] cwagner@beehaw.org 8 points 11 months ago

Crazy: the news almost took Hacker News down when it broke. MS was also taken by surprise, and today three lead researchers resigned. Right now it's all speculation; no one really knows what's going on.

[–] shiveyarbles@beehaw.org 7 points 11 months ago (1 children)

They found out that AI is a lame fad that will go the way of crypto?

[–] YeeHaw@beehaw.org 11 points 11 months ago (1 children)

Except that's definitely not the case, since unlike crypto shit, the latest wave of AI tech is already useful and found lots of applications. It may never reach AGI level, but that doesn't mean it's not immensely useful.

[–] shiveyarbles@beehaw.org 6 points 11 months ago (3 children)

Yeah, I guess. I just see a mishmash of consumed data presented where you have to tweak parameters and so forth. Some gobbledygook nonsense presented as facts, three arms and six fingers in generated art, etc. It just seems like shit to me.

[–] abhibeckert@beehaw.org 6 points 11 months ago* (last edited 11 months ago)

Yesterday I gave OpenAI's latest chatbot a photo of a challenging board game quiz card with questions that I couldn't answer.

The questions were intentionally difficult; no ordinary human is expected to be able to answer them all, at least not without spending an hour googling. Most of us could only answer a couple of the questions before the timer ran out and we all compared answers.

The new version of ChatGPT answered every question, perfectly, in two seconds. It couldn't do that a week ago; the tech is advancing incredibly fast.

There are definitely some things it's not very good at, but there are equally things it's very, very good at. The technology is useful, unlike crypto, which I see as an interesting solution to a problem that nobody has.
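For the curious, a minimal sketch of how a photo question like that gets packed into a single chat request. The data-URL message shape follows OpenAI's documented vision format, but the function name and the example question are invented for illustration, and the actual network call is left out:

```python
import base64

def image_message(question: str, image_bytes: bytes) -> list:
    """Build one chat message pairing a text question with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            # The image travels inline as a base64 data URL.
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

# This message list would then be passed as the `messages` argument of a
# multimodal chat-completion request; that part is omitted here.
```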

[–] vanderbilt@beehaw.org 3 points 11 months ago

At my company we have already used it to great effect. We had a backlog of several thousand support tickets we wanted categorized. GPT-4 did it in about 8 hours with over 80% accuracy, at a fraction of the cost (and with higher quality) of having humans do it.

We’re also rolling out a chatbot built on it, with a local model as backup, to reply to leads when our clients are busy. So far they love it.

We’re making our money back despite the costs, and we’re able to spend that money paying people to not do busy work.
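A rough sketch of what that kind of batch categorization can look like. The category labels, prompt wording, and function names here are all invented for illustration, and the per-ticket model call is left out so the sketch stands alone:

```python
from typing import Optional

# Hypothetical category set; a real deployment would use its own taxonomy.
CATEGORIES = ["billing", "bug", "feature-request", "account", "other"]

def build_prompt(ticket_text: str) -> str:
    """Ask the model to answer with exactly one category label."""
    return (
        "Classify this support ticket into exactly one of: "
        + ", ".join(CATEGORIES)
        + ". Reply with the label only.\n\nTicket:\n"
        + ticket_text
    )

def parse_category(reply: str) -> Optional[str]:
    """Normalize the model's reply; return None so a human can review misses."""
    label = reply.strip().lower().rstrip(".")
    return label if label in CATEGORIES else None

# The model call itself (one chat-completion request per ticket, feeding
# build_prompt's output in and parse_category over the reply) goes between
# these two helpers.
```

Keeping the prompt-building and reply-parsing pure like this makes the accuracy spot-checkable: anything the parser can't match to a known label falls back to human review instead of a silent misfile.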

[–] MayonnaiseArch@beehaw.org 2 points 11 months ago

I'm sure there are use cases; it's just that there's much too much hullabaloo about it all. Some people can get something out of it, they'll fire their workers, and that's about it. But I can't use it for shit, because I want to control what's being done and I don't do text processing at all. And I smell the crypto dudes clinging to the whole thing, a dripping mess of turds wanking off to fanciful stories about AI. It's language models all the way down.

[–] shnizmuffin@lemmy.inbutts.lol 7 points 11 months ago
[–] t3rmit3@beehaw.org 6 points 11 months ago* (last edited 11 months ago)

I'm gonna guess this was a security compromise that he failed to disclose to the board, and failed to report to the SEC.

It makes the most sense given the board's statement that his "repeated lack of candor" prevented the board from executing on its duties.

[–] bedrooms@kbin.social 4 points 11 months ago (1 children)

As a happy subscriber, the last thing I want is influence from the board.

Monopoly established, the max profit phase about to start...

[–] HappyFrog@lemmy.blahaj.zone 1 points 11 months ago (1 children)

The board is not technically profit-motivated. Because Microsoft is screaming at the board and threatening to take away their servers if Sam is not reinstated, I think he was the one driving profits and lied to the board.

[–] bedrooms@kbin.social 1 points 11 months ago

Makes sense.

[–] bioemerl@kbin.social 4 points 11 months ago (1 children)

I'd say good riddance, but the replacement is worse.

[–] hascat@programming.dev 4 points 11 months ago (1 children)
[–] bioemerl@kbin.social 2 points 11 months ago (2 children)
[–] Segab@beehaw.org 10 points 11 months ago (2 children)
[–] ulkesh@beehaw.org 8 points 11 months ago

Seriously. Their neocon attitude toward regulation is leaking. If anything should be regulated, it’s AI advancements. The public good is still a thing, despite what conservatives want to dismiss.

[–] bioemerl@kbin.social 3 points 11 months ago

Do you want AI to exclusively be in the hands of big companies and the government?

Do you want the future of technology locked behind paywalls and censored so that you can't use it to do anything they don't want you to do?

If you think AI regulation comes in the form of making sure big companies can't do bad things to you, you haven't been paying attention.

[–] Lockely@pawb.social 2 points 11 months ago

It should be regulated.

[–] shiveyarbles@beehaw.org 2 points 11 months ago* (last edited 11 months ago)

I mean, I get it, but you can Google for answers as well: check Stack Overflow, etc., and get answers from true industry masters. At the end of the day it seems like there's not much added value, especially if you have to vet the answers for reliability.

[–] nhgeek@beehaw.org 1 points 11 months ago

This one really surprised me.