this post was submitted on 28 Sep 2024
431 points (98.2% liked)
Technology
OpenAI has projected revenue of $3 billion this year.
It is projected to burn $8 billion on training costs this year — a $5 billion shortfall before anything else.
Now it needs 5-gigawatt data centers worth over $100 billion.
And new fabs worth $7 trillion to supply all the chips.
I get that it’s trying to dominate a new market, but that’s ludicrous. And even with everything so far, it hasn’t really pulled far ahead of competing models like Claude and Gemini, whose makers are also training like crazy.
There is no market, or not much of one. This whole thing is a huge speculative bubble, a bit like crypto. The core idea of crypto makes some sense long term, but the speculative valuations do not. The core idea of LLMs (we are nowhere near true AI) makes some sense, but it is half-baked technology. It hasn't even reached maturity, and enshittification has already set in.
OpenAI doesn't have a realistic business plan. It has a grifter who is riding a wave of nonsense in the tech markets.
No one is making a profit because no one has found a truly profitable use for what's available now. Even fields with potential utility (like healthcare) are dominated by focused companies working on limited scenarios.
IMO it's even worse than that, at least from what I gather from the AI/Singularity communities I follow. For them, AGI is the end goal: a creative, thinking AI capable of deduction far beyond human ability. The company that owns it would suddenly have the capability to solve all manner of problems that are slowing down technological advancement. Obviously, owning that would be worth trillions.
However it's really hard to see through the smoke that the Altmans etc. are putting up - how much of it is actual genuine prediction and how much is fairy tales they're telling to get more investment?
And I'd have a hard time believing it isn't mostly the latter, because while LLMs have made some pretty impressive advancements, they still can't have specialized discussions about much of anything without hallucinating answers. I have a test I use for each new generation of LLMs: I interview them about a book I'm relatively familiar with, and even with the newest ChatGPT model, it still makes up a ton of shit, often even contradicting its own answers within the same thread, all while being absolutely confident that it's familiar with the source material.
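The "contradicting its own answers" part of that test can actually be checked mechanically. Here's a minimal sketch of the idea: ask the model the same factual question several times and flag contradictory answers. The model calls are stubbed out with example strings (the function names and sample answers are mine, not from any real API), but in practice you'd plug in whatever chat API you use.

```python
# Minimal self-consistency check: ask the same question repeatedly
# and see whether the answers agree. Model calls are stubbed here.

def normalize(answer: str) -> str:
    """Lowercase and strip punctuation so trivial rewordings compare equal."""
    return "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace()).strip()

def is_self_consistent(answers: list[str]) -> bool:
    """True if every answer agrees after normalization."""
    return len({normalize(a) for a in answers}) <= 1

# Stubbed answers a model might give to "Who narrates the book?"
samples = [
    "The narrator is Ishmael.",
    "the narrator is ishmael",
    "The narrator is Ahab.",
]
print(is_self_consistent(samples))      # False: the third answer contradicts the others
print(is_self_consistent(samples[:2]))  # True: rewordings of the same claim
```

Exact-match comparison is crude (a real version would compare extracted facts, not whole strings), but even this catches the blatant within-thread contradictions described above.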
Honestly, I'll believe they're capable of advancing AI when we get an AI that can say 'I actually am not sure about that, let me do a search...' or something like that.
Every LLM answer is a hallucination.
The hallucination is in the mind of the user – people fall for the illusion of talking to a creature.