this post was submitted on 24 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.

 

I plan to run inference on 33B models at full precision; 70B is a second priority but would be a nice touch. Would I be better off getting an AMD EPYC server CPU like this, or an RTX 4090? With the EPYC I can get 384GB of DDR4 RAM for ~$400 on eBay, while the 4090 only has 24GB. Both the 4090 and the EPYC setup with RAM cost about the same. Which would be the better buy?
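For scale, here's the rough weight-memory math (a back-of-envelope sketch; actual usage adds KV cache, activations, and runtime overhead on top):

```python
# Back-of-envelope weight memory for a 33B-parameter model.
# Real usage is higher: KV cache, activations, and framework overhead add to this.
params = 33e9

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("8-bit", 1)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gib:.0f} GiB")

# fp32:      ~123 GiB -> far beyond a 24GB 4090, but fits in 384GB of system RAM
# fp16/bf16:  ~61 GiB -> still too big for a single 4090
# 8-bit:      ~31 GiB
```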

[–] mcmoose1900@alien.top 1 points 2 years ago

If you must run at high precision... the best system on that budget is probably a compromise.

Grab a 3090 or a 3060 and slap it in a system with the most RAM bandwidth you can get, paired with a more modest CPU. The GPU can take over prompt processing and enough offloaded layers to meaningfully speed up generation.
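A minimal sketch of that split using llama-cpp-python (this assumes a llama.cpp-style GGUF setup; the model path and layer count are placeholders you'd tune to your hardware):

```python
# Minimal sketch: partial GPU offload with llama-cpp-python
# (pip install llama-cpp-python, built with CUDA enabled).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-33b.f16.gguf",  # hypothetical path to an f16 GGUF
    n_gpu_layers=20,  # offload as many layers as fit in VRAM; the rest run from system RAM
    n_ctx=4096,       # context window
)

out = llm("Q: How much VRAM does a 33B model need at fp16? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Even a partial offload tends to help, since prompt processing runs on the GPU; generation speed then depends mostly on how many layers stay on the CPU and how much RAM bandwidth you have.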