this post was submitted on 30 Oct 2023

LocalLLaMA

Community to discuss about Llama, the family of large language models created by Meta AI.

We've put together an article with some rough estimates of what it would take for an enterprise to deploy LLMs on-prem.

https://bionic-gpt.com/blog/llm-hardware/

In short, I'm estimating $20,000 in hardware costs per 1000 users, minimum.
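For anyone who wants to sanity-check the arithmetic behind that kind of figure, here is a minimal back-of-the-envelope sizing sketch. All parameter values (concurrency ratio, users served per GPU, card price) are hypothetical placeholders, not numbers taken from the linked article:

```python
import math

# Illustrative sizing sketch: every number passed in below is an
# assumption for demonstration, not a figure from the article.

def estimate_hardware_cost(total_users: int,
                           concurrency_ratio: float,
                           users_per_gpu: int,
                           cost_per_gpu_usd: float) -> float:
    """Rough minimum GPU spend for an on-prem LLM deployment."""
    # Only a fraction of licensed users hit the model at the same time.
    concurrent_users = math.ceil(total_users * concurrency_ratio)
    # Size the GPU fleet to the concurrent load, with at least one card.
    gpus_needed = max(1, math.ceil(concurrent_users / users_per_gpu))
    return gpus_needed * cost_per_gpu_usd

# Hypothetical example: 1000 users, 10% concurrent, 20 concurrent
# requests served per GPU, $4,000 per card.
print(estimate_hardware_cost(1000, 0.10, 20, 4000))  # 20000
```

Under those (made-up) inputs, 100 concurrent users need 5 GPUs at $4,000 each, which lands at $20,000; tweaking any one assumption moves the total substantially.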

I'd be grateful if people could give me some feedback on the numbers and whether my assumptions look realistic.

Thanks

[–] pmelendezu@alien.top 1 points 1 year ago

Didn't Nvidia ban the use of consumer-grade cards for professional use? You'd need A100s and the like for a datacenter.