this post was submitted on 30 Oct 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


We've put together an article using some guesstimates of what it would take for an enterprise to deploy LLMs on-prem.

https://bionic-gpt.com/blog/llm-hardware/

In short, I'm estimating $20,000 in hardware costs per 1000 users, minimum.
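
As a back-of-envelope illustration, here is one set of assumptions that lands on that figure; the concurrency rate, sessions per GPU, and prices below are all guesses on my part, not measured numbers:

```python
# Rough sketch of a per-1000-user hardware estimate.
# Every input here is an assumption for illustration only.
total_users = 1000
concurrency = 0.05                              # assume 5% of users active at once
concurrent_sessions = total_users * concurrency # 50 simultaneous sessions

sessions_per_gpu = 25                           # assumed sessions one GPU can serve
gpus_needed = concurrent_sessions / sessions_per_gpu  # 2 GPUs

cost_per_gpu = 8_000        # assumed datacenter-class GPU price (USD)
server_overhead = 4_000     # assumed chassis, CPU, RAM, storage (USD)

total = gpus_needed * cost_per_gpu + server_overhead
print(f"~${total:,.0f} for {total_users} users")  # ~$20,000
```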

I'd be grateful if people could give me some feedback on the numbers and whether my assumptions look realistic.

Thanks

top 3 comments
[–] AsliReddington@alien.top

It's extremely overpriced. With INT4 quantization, llama.cpp puts up even crazier numbers. A system with 4090s can be built for $2,500 in India, and cheaper elsewhere for sure.
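
For anyone who wants to try the INT4 route, here's a minimal sketch of serving a 4-bit quantized GGUF model through llama.cpp's Python bindings (llama-cpp-python); the model file name and tuning values are illustrative assumptions, not anything from the article:

```python
# Minimal sketch: running a 4-bit quantized model with llama-cpp-python.
# The model path and parameters below are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-13b.Q4_K_M.gguf",  # hypothetical INT4 (Q4) GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU (e.g. a single 4090)
    n_ctx=4096,       # context window size
)

out = llm("Explain on-prem LLM deployment in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```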

[–] pmelendezu@alien.top

Didn't Nvidia ban the use of consumer-grade cards for professional use? You'd need A100s and whatnot for a datacenter.

[–] dahara111@alien.top

Interesting, but I think usage will vary considerably with time of day, day of the week, season, closing dates, etc.