AI companies currently operate as huge pump-and-dump schemes, while hoping to make companies and governments dependent on their services.
Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
They don't need to be profitable.
To be clear, so-called AI isn't at all about improving anything for anyone other than the ultra-rich, who are propping up this loss leader. They are using it to control you, to spy on you, and to keep you in your place. That is it. That's the entire story.
The infrastructure required is a huge investment which has to be recouped through monthly/yearly subscriptions.
They spend more than their revenue.
And their revenue is minuscule, because they're afraid to charge the actual costs, as that would make people less happy with the lying/hallucinating garbage output they create.
When you're only paying a fraction of the cost, or none at all, the occasional oopsie doesn't upset you. When you're paying premium prices for hallucinated gibberish the thing made up, it stings lol
They aren't trying to be profitable. Their main goals are getting investors to invest even more into them by hyping up their technology as well as building up their market share. Most people are only using the free tier of their AIs which costs them billions, but they don't care. Profitability can come at some unspecified later date and they probably don't even have concrete plans for that right now.
They aren't trying to be profitable.
They're buying up all the computing capacity now while the global supply chain for it still exists.
Profitability can come at some unspecified later date and they probably don't even have concrete plans for that right now.
Not concrete, but they have the same playbook they always use: monopolize, pump and dump, buy the dip.
The AI bubble won't pop until the shareholder class decides they're ready to leave us holding the bag. They'll sell off their stocks, crashing the market and ending VC investment. Companies will go bankrupt and their assets will be liquidated. Land, hardware inventories, pre-purchase orders, etc. will be bought for pennies on the dollar by the same shareholder class who owned them before, just under different corporate ownership.
Their goal is to monopolize the future of computing and prevent the future of the internet from becoming an open source decentralized network.
The sad thing is they don't even have the common decency to go bust and fail. They learned from 2008 that they can just have the government/taxpayer subsidize their losses while they keep their profits.
2008, 1980, 1929, 1907... It's an old playbook.
Knew a guy who said he was descended from some European family that had been nobility and they used arranged marriages to strategically change sides during major conflicts; thus maintaining their wealth/status for centuries.
I have my doubts about his heritage but I don't doubt that rich people did/do that kind of shit with impunity.
I'm pretty sure that is just the Rothschilds.
You are exactly right. It's the same plan they've always used.
There is a difference between AI and LLMs.
There are AI programs that make a lot of money. These are generally bespoke AI programs designed for a narrow set of tasks, like detecting tumors from an MRI scan.
Most LLMs and image generators don't make money because the computing cost is significantly higher than the value of the output.
So, a number of companies here in the US, especially in the tech world, and especially B2C, have low variable costs and high fixed costs. That is, it costs them very little to service an additional customer (usually just some extra server time), but they have to pay a lot of fixed costs (things like software engineers to write software) that don't change regardless of how many customers they have.
If you are a company in an economic situation like that, it is extremely bad to be small, because you are paying those high fixed costs without revenue from a large customer base. That means that it is absolutely vital to grow as quickly as possible, to get out of the "being small" stage. It's imperative to expand your userbase as quickly as possible.
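The "being small is expensive" point above is just arithmetic: with high fixed costs and low variable costs, the average cost per user collapses as the userbase grows. A minimal sketch, with all numbers invented purely for illustration:

```python
# Toy model of high-fixed-cost / low-variable-cost economics.
# Both constants are made-up illustrative figures, not real data.
FIXED_COSTS = 50_000_000      # e.g. yearly engineering payroll, in dollars
VARIABLE_COST_PER_USER = 2    # e.g. yearly server time per user, in dollars

def cost_per_user(n_users: int) -> float:
    """Average yearly cost to serve one user at a given scale."""
    return FIXED_COSTS / n_users + VARIABLE_COST_PER_USER

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} users -> ${cost_per_user(n):,.2f} per user per year")
```

At 10,000 users each one costs thousands of dollars a year to serve; at 100 million users the fixed costs nearly vanish into the per-user average, which is the whole incentive to grow out of the "being small" stage as fast as possible.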
An added factor is that a number of companies (like those doing social media) work in an area where the network effect is a factor. There, the value of the service to existing customers rises as the userbase expands; the total value of the service to all customers is something like the square of the size of the userbase. For companies like this, becoming large is even more important.
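The "square of the userbase" claim is essentially Metcalfe's law: each of n users can connect to any of the n - 1 others, so the number of possible links grows roughly as n². A quick sketch (the exact proportionality constant doesn't matter for the argument):

```python
def network_value(n_users: int) -> int:
    """Number of distinct pairwise connections among n users:
    n * (n - 1) / 2, which grows roughly as n squared."""
    return n_users * (n_users - 1) // 2

# Doubling the userbase roughly quadruples the number of links.
print(network_value(1_000))   # prints 499500
print(network_value(2_000))   # prints 1999000
```

This is why a social network with twice the users is worth far more than twice as much, and why being the smaller of two competing networks is such a bad position.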
So what a number of companies in this area have done is to get a lot of capital from investors and then run a "growth phase", during which they accept very large losses to grow quickly for as long as they can get investment capital to keep growing. They don't worry about making money much or at all during this phase; they just want to be as appealing as possible, get as many users as possible, and get out of the "being small" stage. They cover the losses with investment dollars; investors understand that this is part of what they're signing up for. Then, later, the companies have a "monetization phase" where they worry about being profitable. Usually, that phase has the companies doing things that users like less (since not doing things that might deter users from signing up was one of the ways they grew quickly).
Cory Doctorow coined the phrase enshittification for the transition between the two phases; the user experience in the monetization phase tends to be worse in some ways than during the growth phase.
AI companies are all pretty young (at least, as regards their AI aspects; some are existing companies moving into the space), and are in the growth phase now.
are in the growth phase now
Speedrunning late stage enshittification, really.
Yeah, that's a good explanation of why there's only a very small number of software companies in the world: Google, Microsoft, Apple, Meta. The reason is that when you have two cars, that's twice as much as one car, but when you have two apps, that's worth exactly as much as having one app.
Consider this. Scenario 1: one big company writes one calendar app that everybody uses. Scenario 2: two medium-sized companies write calendar apps and share the users. Which is better?
Two companies means twice the fixed cost (writing the code twice for no reason). Two database protocols means incompatibilities, so users sharing data with each other becomes more difficult, for example for group calendars where events have to reach whichever app each user already has. This is also part of the "network effect": removing boundaries by having everything on one platform.
Downsides of monopolies: one company might have too much market-determining power, and with no competition it's difficult to evaluate what would happen if things were done differently.
That's why there's no second search giant besides Google. For mobile and desktop operating systems there are two each, probably to preserve some competition (Android/iOS, Windows/macOS).
Meanwhile, there are no such monopolies among car companies, because if you build twice as many cars, you have twice as many cars. So competition pays off.
There are many more software companies in the world than those four, including very small ones. It is still possible to make a reasonable living as such a small software company, though a lot harder than it used to be.
They are deliberately operating at a large loss to race ahead in capabilities. There's no second place in this race. Whether it'll actually pay off in the end remains to be seen.
AI companies aren't profitable because
- Nobody really wants to use it. We all know it's a walled garden. OpenAI is gonna enshittify just like Google and Microsoft did. Never put your infrastructure into another company's hands; that's a recipe for making yourself vulnerable, and you're gonna dearly regret it later. At this point, trusting a US company with your data is a textbook example of insanity. In Europe, practically every big company and government institution is trying to get away from dependency on US tech, not towards it. As long as AI is all hosted on US company servers, nobody's gonna use it. It would have to become self-hosted and open-source/open-weight first.
Oh no. People want to use it alright.
Because it's all built on lies.
It's what happens when you have a good demo but can't make good on the promises you made in that demo, and have no viable monetization plan.
And with help from AI, I could soon have the exact same problem!
They spend more on R&D and infrastructure than they make in revenue.
Investors are throwing money at it because it will be useful in some sectors, but no one really knows how or where. Still, a 1% chance to make 1000 times the money is a good bet if you can afford to lose.
They have watched too many movies.
And in the movies there is 1 winner in the end who has the best AI and then he owns ALL THE POWER.
Now they all want THAT. Profit comes later.
Because each prompt is extremely expensive to generate. So expensive that nobody would want to pay for it.
However, the techbros want everyone to use the AI, hoping it reaches the point where it becomes profitable. So what are they doing? Eating the costs.
When they force people to pay for bad results, the impression people have of LLMs is going to absolutely crater. Blaming users for "using the wrong prompts" will only go so far once people have to pay for it, compared to free search results.
Me: Hey NoPilot, can you tell me a recipe for a good pizza?
Nopilot: Sure, user! It's gonna cost you 150 tokens, are you ok with the charge?
Me (thinking): What? That's roughly 3€! For a single recipe... well, it's made by AI, so it will be the best of the best... I'll bite.
Nopilot: Payment confirmed, thank you! So here's the recipe for a good pizza:
- [List of perfectly normal instructions copied from a mainstream cooking book]
- Add some white glue to the mozzarella to make it look like in the movies
- Add some bleach to the dough so it keeps a white color when you cook it.
Yup, I'm sure it will go really well...
I wish it would do this and thin the herd of idiots. Zero remorse.
There's no time for profits. AI investors don't want boring old companies that make tools which do useful things and earn revenue — they want maximum investment into the most expensive AI techniques possible where they feel like there's a chance that all their money might cause AI to reach critical mass and then shower them with riches beyond what pre-AI people can even imagine.
Ask Ed
A lot of ideas need huge investments before they ever become profitable. Imagine a new pharmaceutical discovered by a researcher. They have the capability to produce the substance in their lab in very small quantities, but not enough to sell it. As they don't have a lot of money themselves, they need investments to buy a bigger lab, automate manufacturing, etc., in order to scale the process. Then, after a period of time, the product slowly becomes profitable, and for the investors, hopefully big time.
Now with AI the thought process is similar. You need huge data centers and gigantic computation facilities to train models with many billions of parameters to make a model that is even slightly useful. Huge investments have been made into different AI companies, because this technology seems to be groundbreaking and it is not yet clear who will win the race.
Now stocks are pumped up and everybody is waiting for the breakthrough: artificial general intelligence, AGI. This concept is complete bullshit, but investors don't understand the technology; they are just greedy. If it were common knowledge that the compute cost of transformer models scales with the square of the token count (n²), people would have already thrown in the towel. So what AI companies really need to do is shove AI down everyone's throat. They need to sell their models to every little business with the promise of large productivity gains. Companies believe the bullshitting and spend a lot of money on AI, although Harvard Business Review found that workslop™ does in fact not increase productivity. In line with the sunk cost fallacy, AI companies don't give up but escalate their bullshitting game. They present "agentic AI"; as a data scientist, merely writing down this term makes me cringe really hard.
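The n² point refers to self-attention in transformers: every token in the context attends to every other token, so the score matrix alone has n² entries, and doubling the context length quadruples that work. A rough sketch of the growth (this counts only pairwise attention scores, ignoring heads, layers, and all other costs):

```python
def attention_scores(seq_len: int) -> int:
    """Entries in the query-key score matrix for a single attention pass:
    one score per (query token, key token) pair, i.e. seq_len squared."""
    return seq_len * seq_len

# Doubling the context length quadruples the attention work.
for n in (1_000, 2_000, 4_000):
    print(f"context {n:>5}: {attention_scores(n):>12,} pairwise scores")
```

That quadratic curve is why long-context inference is so expensive to serve, and why the economics don't simply improve by letting users paste in ever-larger prompts.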
As so much money has been pumped into this market, stocks are overvalued through the roof, the GPU and storage market is broken, and there is no way back. We don't know yet what the tech bros will invent to rescue their asses, but it is not at all certain this bubble will ever burst. So you'd better not bet your ass on falling stocks.
Imagine a new pharmaceutical that was found by a researcher. (...)
Now with AI the thought process is similar. (...)
The two are nothing alike. The pharmaceutical has a pre-determined market and a known effect. A researcher who finds a treatment for Somethingitis will know in advance that people with the disease will want it. They will want it because the medication has a proven effect. Nobody has to hand out the cure under cost to get people enthusiastic about it. (In fact, without proper controls, the exact opposite happens; see the USA.)
LLMs are pretty much the opposite. They're a solution looking for a use, and are only very marginally successful in that. Nobody can say "this product will cause that effect", pretty much by definition.
That's why they're giving their product away and massively subsidizing the use of it. If they stopped, nobody would use it. And every month, the models get more and more expensive even as the scale increases. Actual results are few and far between, except for very niche applications which won't recover the costs before the next millennium.
The best comparison I've seen is someone selling stale bread covered in gold leaf for ten bucks. Is there a market for it? Sure, decorative bread is on display with many bakers, and I'm sure you could sell some of it for croutons and such. But nobody is buying stale bread en masse. But if you sell stale bread for 5 cents, you bet your ass people will buy it. It might not be great, but come on, for 5 cents I'm willing to eat a lot of toast.
I agree with you, and that is basically what I tried to say. I just used the pharmaceutical to explain the concepts of investment, expectation, and profit. I think the two are in fact not the same, while investors think they are.
This concept is completely bullshit
The concept isn't necessarily bullshit; the technology just isn't anywhere near there yet. Given our current level of understanding of human intelligence, it probably won't be for a very long time, but that doesn't invalidate the concept as a future goal. Companies currently working on AI products just seem incapable of being honest about that.
What's bullshit is the claim that today's "AI" - LLMs - could one day advance to AGI. That's really not possible if you understand how LLMs work. Could there be truly intelligent technology one day? Maybe. But the AI industry isn't really moving towards that, despite what they claim.
AGI might use LLM tech in their process, but LLMs by themselves aren't going to become aware. What happened is LLM tech became a gold mine, some who were doing AGI research jumped on it instead, and others followed. There is certainly still AGI research going on somewhere, but it's buried by the race to... something. The biggest problem I see, outside of the need for profit guiding all this, is that what they are building has become so complex they don't really understand it fully, they just keep finding ways to tack on things to get to some higher level without knowing why it works (or why it will break).
And while LLMs aren't AGI, they still have the issue of misalignment, even without self-awareness. We've seen, early on, models use misdirection to achieve a goal, and the models now are more sophisticated. Maybe it's not their own goal, but a misunderstood goal that they'll say and do anything to get to.
Good thing we're not putting them in control of important things, or full access to systems, right? Right?
Research into AGI has always been the domain of universities, not companies trying to get investments or profit. It's still going on, but you'll only hear about it when there's a new development that some company tries to turn into profit.
Exactly. We both typed the same thought basically at the same time. It is the expectation that AGI is a logical consequence of LLMs that is driving this insane market.
People always try to frame AGI as the logical next expansion step of LLMs, but it is not. This is not a linear process, and transformer-based LLMs and the science-fiction-like goal of AGI just don't have much to do with each other.
I know. But that's very different from saying the whole concept is bullshit.
The whole concept assumes LLMs will reach some mythical enlightenment after we feed them exabytes of bullshit from the internet.
Classic case of garbage in, garbage out.
You are applying such an unusual definition to the word concept that I feel there's no point to this argument anymore.
Blitzscaling
First off, we don't need your arrows to know how to read a title. I wish we can get over this normalized thing where people have to point out "it's in the title". Like, no duh, we can READ it. Stop insulting people's intelligence.
And to make a mockery of your body text, or lack thereof: my answers are in the comments.
V^V^VV^V^^^^^^VVV^V^