this post was submitted on 05 May 2026
43 points (97.8% liked)

Ask Lemmy


I've read some of Ed Zitron's long posts on why the AI industry is a bubble that will never be profitable (and will bring down a lot of companies and investors), and one of the recurring themes is that the AI companies are trying to capture growing market share in an industry where their marginal profits are still negative, and that any increase in revenue necessarily increases their costs of providing their services.

But some of the comments in various HackerNews threads are dismissive, saying that each new generation of models makes the cost of inference lower, so that with sufficient customer volume, the companies running the models can make enough profit on inference to make up for the staggering up-front capital expenditures it took to build out the data centers, train their models, etc.

It's all pretty confusing to me. So for those of you who are familiar with the industry, I have several questions:

  1. Is the cost of running any given pretrained model going down, for specific models? Are there hardware and software improvements that make it cheaper to run those models, despite the model itself not changing?
  2. Is the cost of performing a particular task at a particular quality level going down, through releases of newer models of similar performance (i.e., a smaller model of the current generation performing similarly to a bigger model of the previous generation, such that the cost is now cheaper)?
  3. Is the cost of running the largest flagship frontier models going down for any given task? Or does running the cutting edge show-off tasks keep increasing in cost, but where the companies argue that the improvement in performance is worth the cost increase?

I suspect that the reason the discussion around this is so muddled online is that the answers differ depending on which of the 3 questions is meant by "is running an AI model getting cheaper over time?" And the data isn't easy to synthesize because each model has different token prices and different numbers of tokens per query.
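As a toy illustration of that synthesis problem: comparing per-token prices alone doesn't answer "is it cheaper?", because the number of tokens a task consumes changes between models too. All prices and token counts below are invented for illustration, not real figures for any provider.

```python
# Hypothetical sketch: cost per *task*, not per token.
# Every number here is a made-up assumption.

def cost_per_task(price_in_per_mtok, price_out_per_mtok, tokens_in, tokens_out):
    """Dollar cost of one query; prices are per million tokens."""
    return (price_in_per_mtok * tokens_in + price_out_per_mtok * tokens_out) / 1_000_000

# Older model: cheap tokens, but needs verbose prompting and long outputs.
old = cost_per_task(0.50, 1.50, tokens_in=4000, tokens_out=2000)

# Newer "reasoning" model: pricier tokens AND it emits long chains of thought.
new = cost_per_task(3.00, 15.00, tokens_in=1000, tokens_out=8000)

print(f"old model: ${old:.4f}/task, new model: ${new:.4f}/task")
```

With these made-up numbers, the newer model's per-task cost is higher even though its benchmark scores would presumably be better, which is exactly the question 3 scenario.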

But I wanted to hear from people who are knowledgeable about these topics.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago) (1 children)

Okay, I fudged the part about "for free." The problem is DeepSeekv4 is literally in preview, and its architecture is so new that engine support for its weights is poor.

Right this second, you can either pay a few cents to try it from some API (there are many providers since it's open-weights), or rent a GPU (or maybe a CPU) instance if you don't trust the public tests and actually want to test resource usage yourself.

Or you can quantize it and self-host it. I plan to do so on my 128GB RAM / RTX 3090 desktop, which is an affordable config to rent if you don't have a desktop like that.

But llama.cpp support is a work-in-progress. Same with other backends like Ktransformers. Realistically your options are:

  • Wait a week, maybe a few weeks, for the llama.cpp/ik_llama.cpp developers to implement the DSV4 architecture.

  • Try one of the janky GPU/Apple forks available right now.

  • Try one of the slightly-less-janky, but slow, CPU-only Chinese forks.

But once it's implemented, I'm going to make my own personal IQ3_KS mixed quantization for 128GB desktops, and see how it compares to older architectures myself.
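Whether a quant like that fits in a desktop is easy to ballpark: weight size is roughly parameter count times bits per weight. The parameter count and bit mix below are illustrative guesses, not measured numbers for any real model.

```python
# Back-of-the-envelope: does a ~3.5-bit mixed quant of a big MoE model fit in
# 128GB system RAM + a 24GB RTX 3090? All numbers are assumptions.

def quant_size_gb(params_billions, bits_per_weight):
    """Approximate in-memory size of quantized weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

total = quant_size_gb(250, 3.5)   # assumed 250B-param model at ~3.5 bits/weight
budget = 128 + 24                 # system RAM plus 3090 VRAM, in GB
print(f"~{total:.0f}GB of weights vs {budget}GB of total memory")
```

The same arithmetic shows why the full-precision weights of a model that size are hopeless on consumer hardware: at 16 bits/weight you'd need roughly 4-5x the memory.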


Another confounding factor: if you're researching "AI farm" inference costs, that's a very different question.

Frugal providers like DeepSeek use complicated schemes to batch requests across many GPUs, with each GPU taking requests in parallel. In other words, the more GPUs they have, the more speed per GPU they can squeeze out. For DeepSeekV3, last I heard, around 300 GPUs was an ideal deployment number...
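A toy model of that scaling effect (every number here is invented; the point is the shape of the curve, not the values):

```python
# Sketch of why big batched MoE deployments squeeze more out of each GPU:
# more GPUs means fewer experts per GPU, so batched requests keep every GPU
# busy on its resident experts, until routing overhead eats the gains.
# Invented constants; not measurements of any real deployment.

def per_gpu_throughput(n_gpus, n_experts=256, per_expert_cost=1.0, comm_cost=0.02):
    experts_per_gpu = max(1, n_experts // n_gpus)
    compute = 1.0 / (experts_per_gpu * per_expert_cost)   # fewer experts -> faster
    overhead = 1.0 + comm_cost * n_gpus                   # all-to-all routing cost
    return compute / overhead

for n in (8, 64, 300, 1200):
    print(n, round(per_gpu_throughput(n), 4))
```

Under these assumptions, per-GPU throughput peaks somewhere in the hundreds of GPUs and then falls off as communication overhead dominates, which is consistent with there being an "ideal deployment number" rather than "more is always better."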

And they aren't even going to be using Nvidia GPUs anyway. I believe Deepseek is switching to Huawei for inference.

But however you slice it, they're using orders of magnitude fewer resources than Tech Bro providers like OpenAI or Grok. They have been, for over a year.

[–] scrubbles@poptalk.scrubbles.tech 2 points 3 days ago (2 children)

That all makes sense to me, and lines up with what I've been reading too. I saw the model download and I was like "guhhhh" to it because I was also excited to try it on my 3090. I'll be waiting for the quants.

Yeah I like the end there too, that OpenAI / Anthropic have been desperately trying to figure out how to do this, and a few guys with limited hardware did it. When you have unlimited resources, you end up needing unlimited resources. When you only have 300 GPUs, you make it work. It's why tech is littered with people starting in garages, they found a way to make it work.

[–] brucethemoose@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

And to be clear, you need a 3090 + at least 96GB of fast CPU RAM (really 128GB) to run DeepSeek Flash coherently. It is a big model; there's no way around it.

If you have less RAM, try Qwen 27B now (which also uses an exotic attention mechanism). It'll fit on your 3090 just fine.

For DeepSeek Pro, you'd need a Xeon or EPYC homelab.
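The reason that split works at all is the usual llama.cpp-style offload pattern for big MoE models: dense/attention layers and the KV cache live in VRAM, while the routed expert weights sit in system RAM. The sizes below are illustrative assumptions, not measured numbers for any real model.

```python
# Hedged sketch of a GPU/CPU memory split for MoE inference.
# All sizes are assumptions for illustration.

vram_gb = 24             # RTX 3090
dense_weights_gb = 10    # attention + shared layers kept on the GPU (assumed)
kv_cache_gb = 6          # context-dependent (assumed)
expert_weights_gb = 100  # routed experts held in system RAM (assumed)

gpu_used = dense_weights_gb + kv_cache_gb
ram_needed = expert_weights_gb

print(f"VRAM used: {gpu_used}/{vram_gb}GB; system RAM for experts: ~{ram_needed}GB")
```

Since only a few experts are active per token, the GPU does the dense math every step while the CPU side only has to serve a small slice of the expert weights at a time, which is why a consumer card plus lots of RAM is enough to run it "coherently" rather than not at all.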

[–] brucethemoose@lemmy.world 1 points 3 days ago* (last edited 3 days ago)

I view it differently.

In the US, there are either megacorps, or "people in garages" who honestly don't have the resources (or things like legal support) to pull off huge innovations. They publish cool papers that never get implemented, because they don't have $200k+ for a bigger test and can't work on it for a living. Any "garage devs" who get too big get smited or amalgamated into Big Tech gray goo, and whatever was interesting gets lost in oblivion.

There's no cooperation, no sharing, either.

And OpenAI/Anthropic are way more conservative than you'd think. Same with Meta; they want results next quarter. Zuckerberg literally fired the whole Llama team, the team that put Meta on the AI map and basically founded the open-weights space, after one failed experiment. In other words, I'd argue clueless billionaires and the Tech Bro acolytes surrounding them are poisoning LLM development, and it's starting to catch up.


In China, things are different. The GPU sanctions forced gigantic companies like Alibaba or Tencent to be compute-thrifty, but they all seem to have access to suspiciously good training data... I would bet the Chinese govt is helping them under the table. Chinese devs also have an interesting attitude; I would characterize them as "cooperative," with lots of private forum sharing going on, most models being open-weights, and clearly not a lot of desire to censor their models for the government. But they have their own forms of dysfunction too, sometimes copying other firms a little too closely, plus corporate/personal drama like anywhere else.