LocalLLaMA · submitted 25 Nov 2023

Hey guys, thinking of upgrading my PC. I'm a dev and I want to run my own LLMs, mainly to run my own Copilot-style assistant locally instead of relying on outside services. This is what I have now:

Ryzen 7 3700X, 32 GB RAM, Radeon RX 5500 XT

I'm debating whether to get a 3950X or a 5800X3D so I can game a bit better as well. As for the GPU, I might just go for a 4090, but if that's overkill please let me know. What do you guys think?

FullOf_Bad_Ideas@alien.top · 1 point · 10 months ago

I upgraded from a GTX 1080 to an RTX 3090 Ti two weeks ago. I think going with an RTX 3090 / 3090 Ti / 4090 would be a good option for you. I don't know how big a difference a stronger CPU would make; ExLlamaV2 seems to have some CPU bottlenecking going on, but I have no idea what is computed on the CPU or why. There were moments during generation where it seemed to be using only one thread and maxing it out, becoming a bottleneck for the GPU. I don't think RAM matters much unless you train and merge LoRAs and models.
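
If you want to check whether you're hitting the same single-thread wall, here's a minimal sketch (assuming Python with `psutil` installed; the `generate()` call is a hypothetical stand-in for your actual inference loop, not ExLlamaV2's real API):

```python
import threading
import time

import psutil  # assumption: pip install psutil


def monitor_cores(stop_event, interval=0.5):
    """Print per-core CPU utilization until stop_event is set."""
    while not stop_event.is_set():
        loads = psutil.cpu_percent(interval=interval, percpu=True)
        # One core pinned near 100% while the rest sit idle is the
        # single-thread bottleneck pattern described above.
        print(" ".join(f"{load:5.1f}" for load in loads))


def generate():
    # Hypothetical stand-in for your real inference call
    # (e.g. an ExLlamaV2 generation loop).
    time.sleep(5)


stop = threading.Event()
watcher = threading.Thread(target=monitor_cores, args=(stop,), daemon=True)
watcher.start()
generate()
stop.set()
watcher.join()
```

Watching that output alongside GPU utilization (e.g. `nvidia-smi`) makes it fairly obvious whether the CPU or the GPU is the limiter during generation.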