What does OpenAI need a CEO for anyway? Just let chatGPT run the company if they are so gung-ho about it.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
Approved Bots
I've been saying this for almost a year. Not OpenAI specifically, but any company with a board of directors.
They aren't considering the shareholder value of their most expensive liability: the CEO.
He (because, let's face it, it's going to be a he in most cases) is paid millions of dollars with a golden parachute. That's literally money that could be given back to shareholders through dividends.
The fact that boards of directors aren't doing this could be evidence that they aren't looking out for shareholders' interests.
Here's why:
Boards of directors are CEOs of other companies who are buddies of the CEO of the company they direct. It's like a shitty game of musical chairs among boards of directors.
There's an important thing that the CEO provides that no AI can: the acceptance of risk.
On a day-to-day basis the CEO makes decisions, ignores expert advice, knocks off early for tee time, etc. For this work they are wildly overpaid and could easily be replaced by having their responsibilities divvied up amongst a small group of people in leadership roles.
To see the true purpose of the CEO we need to look at a bigger scale - the quarter-to-quarter scale. What could be bigger than that in the world of the MBA?
Every quarter the CEO must have the company meet the financial performance expectations of the board/owner(s)/shareholders. Failure is likely to result in them losing their job and getting a reputation as an underperformer, thus ruining their career. If the company does poorly or those expectations are unreasonably high then the CEO must cut corners in the operation. This of course hampers their ability to meet expectations later, but they'll make it through this quarter.
When (inevitably) too many corners have been cut something catastrophic will happen. Either the company's reputation will go to shit with customers slowly, or a high-profile scandal will blow up in the company's face.
This is the moment when the CEO provides their most valuable service: to fall (or be pushed) onto their sword. The CEO is fired, ousted, or resigns. This allows the board/owner/shareholders to get a new face in and demand that they fix the most egregious issues, or at least the most glaring ones that don't cost too much to fix.
This service cannot be provided by an AI. Why? Because the AI is a creation of the company. If it is used as a scapegoat it solves nothing. The company is pointing at their own creation and saying "see, that's the problem". It's much more effective to point at a human they didn't make and scream that that person made a mistake.
CEOs? Ruin their career? By what, jumping ship and taking a $100M bonus?
Lol, like the previous Boeing CEO. Kill a few hundred people, planes or plane components falling out of the sky, absolutely tank all manufacturing quality control, fail a NASA contract and strand a few astronauts...
Peace out and take a $62M bonus.
I wish I could get paid that much for failure or "accepting responsibility" lol
I can't wait for the AI bubble to burst.
We are all waiting. If they don't come up with proven revenue opportunities in the next ~18 months, it's going to be difficult to justify the astronomical capex spend.
This podcasting bro is NOT chasing revenue (yet).
He wants power.
He wants to collect 11-12 figure sums of venture capital and then build things that let him rule the world.
And afterwards, maybe revenue.
Another year of this shit? I don't think we can take it, honestly
Nah, it won't happen like that. It was similar with Facebook 10 years ago, and look at where it is now.
I've heard someone call it billionaire brain rot. I think at some point you end up with so much money, and not enough people telling you no, that it literally changes your brain.
Seems likely.
Imagine never hearing the word "No." as a complete sentence ever again in your life.
Or when you do just assuming you can override them eventually
We had a guy like that at work. He basically said, "How far above you do I have to go to get what I want?"
That's not a bad power to have if you use it for good
He wanted a second laptop dock for home when we had limited supply and not everyone had one yet
I think it's also likely that it's very hard to amass billions unless you already have some sort of brain rot.
$7 trillion is three times the GDP of Brazil. It is bigger than the US federal budget. Seriously, it is insane.
This guy is an absolute lunatic.
"Gimme all of the world's money several times over for this fancy T9 that I'm playing with."
If someone wrote a cartoon villain using his quotes, it would be dismissed as unbelievable and rubbish.
He is an empty husk of a man who has completed his transformation into a pure PR machine
His involvement in the infamous WorldCoin provides useful insight into his character.
An oligarch and a degenerate (outside the US many oligarchs have a more or less sober understanding of who they are, although degeneracy among oligarchs is a global issue).
OpenAI has a projected revenue of $3 billion this year.
It is currently projected to burn $8 billion on training costs this year.
Now it needs 5-gigawatt data centers worth over $100 billion.
And new fabs worth $7 trillion to supply all the chips.
I get that it's trying to dominate a new market, but that's ludicrous. And even with everything so far, it hasn't really pulled far ahead of competing models like Claude and Gemini, which are also training like crazy.
There is no market, or not much of one. This whole thing is a huge speculative bubble, a bit like crypto. The core idea of crypto makes some sense long term, but the speculative value does not. The core idea of LLMs (we are nowhere near true AI) makes some sense, but it is half-baked technology. It hasn't even reached maturity and enshittification has already set in.
OpenAI doesn't have a realistic business plan. It has a grifter who is riding a wave of nonsense in the tech markets.
No one is making profit because no one has found a truly profitable use with what's available now. Even places which have potential utility (like healthcare) are dominated by focused companies working in limited scenarios.
IMO it's even worse than that, at least from what I gather from the AI/singularity communities I follow. For them, AGI is the end goal: a creatively thinking AI capable of deduction far greater than humanity's. The company that owns that suddenly has the capability to solve all manner of problems that are slowing down technological advancement. Obviously, owning that would be worth trillions.
However, it's really hard to see through the smoke that the Altmans etc. are putting up: how much of it is genuine prediction, and how much is fairy tales they're telling to get more investment?
And I'd have a hard time believing it isn't mostly the latter because while LLMs have made some pretty impressive advancements, they still can't have specialized discussions about pretty much anything without hallucinating answers. I have a test I use for each new generation of LLMs where I interview them about a book I'm relatively familiar with and even with the newest ChatGPT model, it still makes up a ton of shit, even often contradicting its own answers in that thread, all the while absolutely confident that it's familiar with the source material.
Honestly, I'll believe they're capable of advancing AI when we get an AI that can say 'I actually am not sure about that, let me do a search...' or something like that.
I follow a YouTube channel, AI Explained, that has some pretty grounded analysis of the latest models and capabilities. He compared LLMs to the creative writing center of the brain: they're really nice to interact with and output things that sound correct, but they're ultimately missing the reasoning and factuality capabilities needed for AGI.
This guy is losing touch with reality.
I know Musk is bipolar. Is Altman too?
Bipolar disorder can cause this kind of request. It's called a delusion of grandeur.
Oh wow, I didn't actually know he was bipolar (checked to confirm it is the case). I knew he was on the autism spectrum.
I can't imagine those 2 play nice together.
My wife is both bipolar and autistic and Musk is just as unfathomable to her as he is to me. The two things clash in some regards but his shittiness is his own.
"But the breakthrough will come just as soon as the chips no one can make are delivered."
Probably.
The climate? What climate? Who cares about the climate?
You can buy a lot of Twitters for that money
Middle Eastern money
Something tells me the Saudis don't want AI for the betterment of all humanity.
Could be the human rights abuses, dunno.
Imagine an AI bound by Sharia law, the current ones limited by American puritanical bullshit are already bad enough. "How did prostitution during the gold rush affect the economy of mining towns?"
ERROR THIS QUESTION VIOLATES GUIDELINES
It needs lots of energy.
The Bloomberg podcast series 'Foundering – The OpenAI Story' is quite insightful with regard to Sam Altman's psyche.
There are five episodes; the first is here:
He will get that. The ultra rich ignore all healthy limits.