this post was submitted on 13 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Hmm. I'm not a fan of the 7900X. It has two CCDs like the 7950X, but only 6 cores per CCD. That layout ends up being pretty awkward in practice on my 7950X3D. I'd recommend going all in on a 7950X, or going with a 7800X3D. I'm not sure whether the extra cache benefits LLM inference, but it's a fantastic value for the money.
The lopsided CCDs in the X3D parts are not the same situation as the two CCDs on the 7900X/7950X. The extra cache means you need a scheduler that can put the loads that benefit from it on the cache-enabled CCD, and that's asking a lot from a scheduler. The AMD parts without extra cache don't suffer from this issue... it's why I got a 7950X, but the 7900X is also fine, and all three of these CPUs will be entirely limited by memory bandwidth if used for CPU inference.
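To put a rough number on that memory-bandwidth ceiling, here's a back-of-envelope sketch. The bandwidth and model-size figures are assumptions for illustration, not measurements of any particular setup:

```python
# Back-of-envelope: generation speed when RAM bandwidth is the bottleneck.
# Assumptions (illustrative, not measured): ~60 GB/s effective dual-channel
# DDR5 on AM5, and ~4 GB of weights (e.g. a 7B model at 4-bit quantization)
# streamed from RAM for every generated token.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/s if each token reads the full weight set once."""
    return bandwidth_gb_s / model_size_gb

print(f"~{tokens_per_second(60.0, 4.0):.0f} tokens/s ceiling")
# Adding cores doesn't raise this ceiling; the 7900X, 7950X and 7950X3D all
# hit the same RAM-bandwidth wall for CPU inference.
```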
Information moving between CCDs is troublesome with certain workloads. I'm not referring to the lopsided cache, but rather to the limitations of what is basically two CPUs merged together and the complications that adds to their shared I/O.
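If cross-CCD traffic is a concern, one common workaround is to pin the inference process to a single CCD so threads don't shuttle data over the link between the dies. A minimal, Linux-only sketch; the core numbering is an assumption (verify your actual CCD layout with `lscpu -e` or /proc/cpuinfo):

```python
# Pin the current process (run this before spawning inference threads) to the
# cores of one CCD. Core numbering is assumed: on many two-CCD Ryzen parts,
# CCD0 is cores 0-5 (7900X) plus their SMT siblings; check on your own machine.
import os

CCD0 = set(range(0, 6)) | set(range(12, 18))   # assumed 7900X layout: cores + SMT siblings

# Intersect with what the OS actually exposes so this doesn't error on other CPUs.
target = CCD0 & os.sched_getaffinity(0)
os.sched_setaffinity(0, target)
print("pinned to logical CPUs:", sorted(os.sched_getaffinity(0)))
```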
I don't recommend the 7900X because it is effectively a failed 7950X. I recommend the 7800X3D and the 7950X, as long as the prices are within reach. The 7950X3D fills a very niche role as well.