This post was submitted on 24 Nov 2023
1 points (100.0% liked)

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I plan to run inference on 33B models at full precision; 70B is second priority but would be a nice touch. Would I be better off getting an AMD EPYC server CPU like this or an RTX 4090? With the EPYC I can get 384GB of DDR4 RAM for ~400 USD on eBay, while the 4090 only has 24GB of VRAM. Both the 4090 and the EPYC setup + RAM cost about the same. Which would be the better buy?

multiverse_fan@alien.top 1 points 11 months ago

If I had the money, I'd go with the CPU.

Also, I'm not sure a 4090 could run 33B models at full precision. Wouldn't that require something like 70GB of VRAM? Even at fp16, 33B parameters × 2 bytes is ~66GB for the weights alone, before any overhead.
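
For a rough sanity check, here's a back-of-the-envelope sketch of the memory needed just to hold the weights. This is a minimal illustration, not a real measurement: it ignores KV cache, activations, and framework overhead (so actual usage runs higher), and the parameter counts are nominal.

```python
# Rough memory needed to store model weights alone, at various precisions.
# Ignores KV cache, activations, and runtime overhead; real usage is higher.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """GB needed just for the weights: (params_billions * 1e9) * bytes / 1e9."""
    return params_billions * bytes_per_param

for model_b in (33, 70):
    for precision, width in (("fp32", 4.0), ("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{model_b}B @ {precision}: ~{weight_memory_gb(model_b, width):.0f} GB")
```

By that arithmetic, a 33B model needs ~66GB at fp16 and ~132GB at fp32, and 70B needs ~140GB at fp16, so full precision is far beyond a single 4090's 24GB but fits in the EPYC's 384GB of system RAM.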