I've read some of Ed Zitron's long posts on why the AI industry is a bubble that will never be profitable (and will bring down a lot of companies and investors), and one of the recurring themes is that the AI companies are trying to capture growing market share in an industry where their marginal profits are still negative, and that any increase in revenue necessarily increases their costs of providing their services.
But some of the comments in various HackerNews threads are dismissive, saying that each new generation of models makes the cost of inference lower, so that with sufficient customer volume, the companies running the models can make enough profit on inference to make up for the staggering up-front capital expenditures it took to build out the data centers, train their models, etc.
It's all pretty confusing to me. So for those of you who are familiar with the industry, I have several questions:
- Is the cost of running a specific pretrained model going down? That is, are there hardware and software improvements that make it cheaper to run a given model even though the model itself doesn't change?
- Is the cost of performing a particular task at a particular quality level going down through releases of newer models with similar performance (i.e., a smaller model of the current generation performing on par with a bigger model of the previous generation, so the same task now costs less)?
- Is the cost of running the largest flagship frontier models going down for any given task? Or does running the cutting-edge, show-off tasks keep getting more expensive, with the companies arguing that the improvement in performance is worth the cost increase?
I suspect the discussion around this is so muddled online because the answers differ depending on which of these three questions is meant by "is running an AI model getting cheaper over time?" And the data isn't easy to synthesize because each model has different token prices and uses a different number of tokens per query.
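To show what I mean about the synthesis being hard, here's a rough sketch with entirely made-up prices and token counts (not any vendor's real rates): the comparable quantity is price per token times tokens per task, not price per token alone.

```python
# Made-up prices and token counts, purely to illustrate the normalization.
def cost_per_task(input_tokens: int, output_tokens: int,
                  usd_per_mtok_in: float, usd_per_mtok_out: float) -> float:
    """Dollar cost of one query, given per-million-token prices."""
    return (input_tokens * usd_per_mtok_in
            + output_tokens * usd_per_mtok_out) / 1_000_000

# A newer, cheaper-per-token model may burn far more output tokens
# (e.g. long reasoning traces), so cheaper tokens != cheaper tasks.
old_big   = cost_per_task(2_000,   500, usd_per_mtok_in=15.0, usd_per_mtok_out=75.0)
new_small = cost_per_task(2_000, 4_000, usd_per_mtok_in=1.0,  usd_per_mtok_out=5.0)
print(f"old big model:   ${old_big:.4f} per task")    # $0.0675
print(f"new small model: ${new_small:.4f} per task")  # $0.0220
```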
But I wanted to hear from people who are knowledgeable about these topics.
Cost per quality is definitely going down at a fast rate. LLM providers are in an extremely competitive field, where open-weight models hold a huge competitive advantage at any given quality level (privacy, customizability). The competition runs on roughly two-month release cycles that essentially throw away the old version/code/weights each time. When Claude pretends its newest model is too powerful for non-oligarchs to use, it limits its token reach, and thereby raises the contribution margin each remaining token must provide.
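As a toy illustration of that last point (all numbers made up): if fixed costs have to be recovered from token volume, shrinking the volume raises the margin each token must cover.

```python
# Toy numbers only: fixed costs spread over a smaller token volume mean
# each token must contribute a bigger margin just to break even.
def required_margin_per_mtok(fixed_costs_usd: float, tokens_per_year: float) -> float:
    """Contribution margin per million tokens needed to cover fixed costs."""
    return fixed_costs_usd / (tokens_per_year / 1_000_000)

FIXED_COSTS = 5e9  # hypothetical annual fixed costs: training runs, data centers
print(required_margin_per_mtok(FIXED_COSTS, tokens_per_year=1e15))  # $5/Mtok, broad reach
print(required_margin_per_mtok(FIXED_COSTS, tokens_per_year=1e14))  # $50/Mtok, restricted reach
```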
The business-model flaw is the bet that "one day, a winner becomes a monopoly, and AGI self-improves the model at low cost (except for ultra-expensive compute)." Monopoly pricing power is very hard, likely impossible, to achieve, because if necessary, foreign governments will subsidize competition rather than let a hostile US-empire AGI monopolist take hold. And due to a corrupt energy oligarchy, it is categorically impossible for US-hosted services to ever provide competitive value against countries with rational energy policies. Distillation (teacher/student training) means that access to another AGI (or a leading LLM) will improve the models that are behind. There will always be competition along the price/quality curve that prevents even the best/most expensive model from capturing all the share. There is always free-tier LLM competition available as well.
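For concreteness, a minimal sketch of classic logit distillation, the supervised teacher/student variant (assuming PyTorch; the random tensors here are stand-ins for real model outputs): the student is trained to match the teacher's softened output distribution, which is why a leading model's outputs let a lagging model catch up cheaply.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# One fake batch: 4 positions over a 32k-token vocabulary.
student_logits = torch.randn(4, 32_000, requires_grad=True)
teacher_logits = torch.randn(4, 32_000)  # frozen teacher, no gradients
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only
```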
Finally, there are layers above the LLMs themselves. Agentic frameworks, swarms, and "deterministic program access"/validation front ends can add various levels of token burn, but they can also divert most tokens away from the expensive LLMs and iteratively improve output (see the sketch below). There isn't just a cost/quality curve; there is a cost/speed/quality/privacy curve, and non-AI coordination tools can improve the latter dimensions independently of the quality of the leading, expensive LLM/AGI.
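As a toy version of the "divert most tokens" idea, here's a hypothetical cheap-first cascade with a deterministic validator (every function name here is a stand-in, not a real API): the cheap tier handles every query, and only validation failures escalate to the frontier model.

```python
from typing import Callable

def cascade(query: str,
            cheap_model: Callable[[str], str],
            frontier_model: Callable[[str], str],
            validate: Callable[[str], bool],
            max_cheap_tries: int = 2) -> str:
    """Send every query to the cheap tier first; escalate only on failure."""
    for _ in range(max_cheap_tries):
        answer = cheap_model(query)
        if validate(answer):  # deterministic check: run tests, parse a schema, etc.
            return answer
    return frontier_model(query)  # expensive fallback for the hard tail

# Stub usage with trivial stand-ins:
result = cascade("2+2?",
                 cheap_model=lambda q: "4",
                 frontier_model=lambda q: "4 (frontier)",
                 validate=lambda a: a.strip() == "4")
```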