So that really depends. You're talking about running a multi-GPU setup. If the entire model fits in GPU memory, your processor will not be a bottleneck at all. The clock speed of the PCIe bus is independent of the CPU cores, unless you're messing with overclocking. That's why boards advertise PCIe 3.0, 4.0, 5.0, etc.: the PCIe version dictates the bandwidth per lane.
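To put rough numbers on that, here's a minimal sketch (plain Python; the figures are the standard theoretical per-lane rates, not something I benchmarked) of how the PCIe generation maps to bandwidth:

```python
# Theoretical per-lane bandwidth by PCIe generation.
# Gen 1/2 use 8b/10b encoding; gen 3+ use 128b/130b.
PCIE_GT_PER_SEC = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_bandwidth_gb_s(gen: int) -> float:
    """Usable GB/s per lane after encoding overhead."""
    overhead = 8 / 10 if gen <= 2 else 128 / 130
    return PCIE_GT_PER_SEC[gen] / 8 * overhead  # GT/s -> GB/s

for gen in (3, 4, 5):
    x1 = lane_bandwidth_gb_s(gen)
    print(f"PCIe {gen}.0: ~{x1:.2f} GB/s per lane, ~{16 * x1:.1f} GB/s at x16")
# PCIe 3.0: ~0.98 GB/s per lane, ~15.8 GB/s at x16
# PCIe 4.0: ~1.97 GB/s per lane, ~31.5 GB/s at x16
# PCIe 5.0: ~3.94 GB/s per lane, ~63.0 GB/s at x16
```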
That being said, multi-GPU setups do introduce some overhead. If a model is split between GPUs, the PCIe interface becomes a modest bottleneck as activations get passed back and forth. The more GPUs the model is split across, the greater the bottleneck (a rough estimate of the traffic is sketched below).
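For a feel for the size of that overhead, here's a back-of-the-envelope sketch. The model dimensions are assumptions (roughly Llama-2-70B-shaped in fp16), and the generation speed is hypothetical; the point is the formula, not the exact numbers. With a simple layer split, only the hidden state crosses PCIe at each GPU boundary per generated token:

```python
# Rough per-token PCIe traffic when a model is split by layers
# across GPUs (only the hidden state crosses each boundary).
hidden_size = 8192        # hidden dimension (assumed, ~70B-class model)
bytes_per_value = 2       # fp16 activations
tokens_per_sec = 15       # hypothetical generation speed

def boundary_traffic_mb_s(num_gpus: int) -> float:
    """MB/s of hidden-state transfers summed across all GPU boundaries."""
    boundaries = num_gpus - 1
    per_token = hidden_size * bytes_per_value * boundaries  # bytes
    return per_token * tokens_per_sec / 1e6

for n in (2, 4, 8):
    print(f"{n} GPUs: ~{boundary_traffic_mb_s(n):.2f} MB/s")
# 2 GPUs: ~0.25 MB/s
# 4 GPUs: ~0.74 MB/s
# 8 GPUs: ~1.72 MB/s
```

Under these assumptions the per-token payload is tiny next to x16 bandwidth, so for a plain layer split the overhead is mostly the latency of each extra hop rather than raw throughput; either way it grows with the number of boundaries, which is the scaling described above. Tensor-parallel setups move far more data per token and lean on PCIe much harder.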