this post was submitted on 13 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
founded 1 year ago
Information moving between CCDs is troublesome with certain workloads. I’m not referring to the lopsided cache, but rather the limitations of what is essentially two CPUs merged together and the complications that adds to their shared I/O.
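The cross-CCD penalty described above can be seen with a simple thread ping-pong microbenchmark: pin two threads to specific cores and time round trips between them. Pairs on different CCDs typically show noticeably higher latency than pairs on the same CCD. A minimal sketch (assumes Linux, where `os.sched_setaffinity` is available; the core numbers passed in are illustrative, and which core IDs land on which CCD varies by CPU and firmware):

```python
import os
import threading
import time

def pin(core):
    # Pin the calling thread to one core. sched_setaffinity is
    # Linux-only, so fall back to no pinning elsewhere (the numbers
    # then just measure generic inter-thread latency).
    try:
        os.sched_setaffinity(0, {core})
    except (AttributeError, OSError):
        pass

def pingpong(core_a, core_b, iters=10000):
    """Return mean round-trip latency in microseconds between two
    threads pinned to core_a and core_b."""
    ev_a, ev_b = threading.Event(), threading.Event()

    def side_b():
        pin(core_b)
        for _ in range(iters):
            ev_a.wait()   # wait for ping from A
            ev_a.clear()
            ev_b.set()    # pong back

    t = threading.Thread(target=side_b)
    t.start()
    pin(core_a)
    start = time.perf_counter()
    for _ in range(iters):
        ev_a.set()        # ping
        ev_b.wait()       # wait for pong
        ev_b.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iters * 1e6

if __name__ == "__main__":
    n = os.cpu_count() or 1
    # Compare core 0 against the highest-numbered core; on a
    # dual-CCD part those often sit on different dies (assumption).
    near = pingpong(0, 1 % n)
    far = pingpong(0, n - 1)
    print(f"core 0 <-> core {1 % n}: {near:.1f} us/round trip")
    print(f"core 0 <-> core {n - 1}: {far:.1f} us/round trip")
```

Event-based ping-pong goes through the scheduler, so the absolute numbers are pessimistic; the useful signal is the *relative* jump when the two cores sit on different CCDs.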
I don’t recommend the 7900X because it is a failed 7950X. I recommend the 7800X3D, and the 7950X as long as the price is within reach. The 7950X3D fits a very niche role as well.