this post was submitted on 30 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 10 months ago

Running multiple GPUs requires PCIe lanes. Consumer platforms have too few of them to run even 2x GPUs at full bandwidth (x16 each).

Threadrippers are prohibitively expensive for many.

AMD announced the EPYC 8004 "Siena" series in September. These low-power server CPUs start at 8 cores for ~$400 and offer 96 PCIe 5.0 lanes. The catch is that clock speeds are pretty low.

So, the question is: How bottlenecked are LLMs by CPU clock?

I.e., would it make much of a difference if you run 4x 3090s on a $2000+ Threadripper vs. a $400 EPYC 8004?
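For a rough sense of the lane math, here's a back-of-envelope sketch. The per-lane figures are approximate and ignore protocol overhead, and the platform lane counts are illustrative assumptions, not exact specs:

```python
# Approximate usable bandwidth per PCIe lane, in GB/s per direction
# (gen3 ~1, gen4 ~2, gen5 ~4; real figures are slightly lower).
GBPS_PER_LANE = {3: 1.0, 4: 2.0, 5: 4.0}

def gpu_bandwidth(gen, total_lanes, num_gpus):
    """GB/s available to each GPU when lanes are split evenly."""
    lanes_each = min(16, total_lanes // num_gpus)  # a GPU uses at most x16
    return lanes_each * GBPS_PER_LANE[gen]

# Consumer platform, assuming ~24 usable lanes: 4 GPUs get x4-ish each
print(gpu_bandwidth(4, 24, 4))   # 12.0 GB/s per GPU
# EPYC 8004: 96 lanes leave room for full x16 per GPU
# (a 3090 would still run them at gen4 speeds)
print(gpu_bandwidth(4, 96, 4))   # 32.0 GB/s per GPU
```

So the Siena platform's advantage is not clock speed but that every card keeps a full x16 link.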

top 7 comments
Worldly-Mistake-8147@alien.top · 1 point · 9 months ago

Holy... 4x 3090s! No wonder it was hard to find my third one for a reasonable price.

JustOneAvailableName@alien.top · 1 point · 9 months ago

I don’t even think lanes really matter when you’re not training.

ThisGonBHard@alien.top · 1 point · 9 months ago

Pretty much not at all. The main bottleneck is memory speed.

I barely see a difference between 4 and 12 cores on a 5900X when running on CPU.

When running multi-GPU, the lanes are the biggest bottleneck.

On a single GPU, the CPU does not matter.
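The "memory speed is the bottleneck" point can be made concrete with a rough upper bound: each generated token has to stream all the (quantized) weights through memory once, so tokens/sec is at most bandwidth divided by model size. A sketch with illustrative numbers (the bandwidth figures are ballpark assumptions):

```python
def max_tokens_per_sec(model_size_gb, mem_bandwidth_gbs):
    """Rough ceiling on generation speed: every token reads all weights once."""
    return mem_bandwidth_gbs / model_size_gb

# Illustrative: a 70B model at ~4-bit quantization is roughly 40 GB of weights
print(max_tokens_per_sec(40, 80))    # dual-channel DDR5, ~80 GB/s  -> 2.0 tok/s
print(max_tokens_per_sec(40, 240))   # six channels, ~240 GB/s      -> 6.0 tok/s
print(max_tokens_per_sec(40, 936))   # 3090 GDDR6X, ~936 GB/s       -> ~23 tok/s
```

The bound ignores KV-cache reads and compute, but it explains why core count barely moves the needle: the cores spend their time waiting on memory.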

_Erilaz@alien.top · 1 point · 9 months ago

The 8004 has six DDR5 channels, AFAIK. That takes care of the memory bandwidth. The only issue would be finding an SP6 motherboard.

0xd00d@alien.top · 1 point · 9 months ago

I would imagine that this new option you're talking about will be a good budget inference workhorse paired with multiple cards such as 3090s. 96 lanes of gen 5 will be a real enabler. That said, I think Zen 2 EPYCs providing gen 4 lanes are cheaper still, so there are good options available.

_Erilaz@alien.top · 1 point · 9 months ago

The 3090 doesn't support PCIe 5.0, only 4.0.

The 4090 does, and it can make sense to run them in an x8 PCIe 5.0 configuration, but only if you have a pallet of these GPUs.

Imaginary_Bench_7294@alien.top · 1 point · 9 months ago

So that really depends. You're talking about running a multi-GPU setup. If the whole model fits in GPU memory, the processor will not be a bottleneck at all. The clock speed of the PCIe bus is independent of the CPU cores unless you're messing with overclocking. That's why boards advertise PCIe 3.0, 4.0, 5.0, and so on: the PCIe version dictates the bandwidth per lane.

That being said, multi-GPU setups do introduce some overhead. If a model is split between GPUs, the PCIe interface becomes a modest bottleneck as they pass data back and forth. The greater the number of GPUs the model is split across, the greater the bottleneck.
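For a layer-wise (pipeline) split, the data crossing each GPU boundary per generated token is just one hidden-state vector, which is tiny compared to PCIe bandwidth. A hedged sketch (hidden size and dtype are illustrative assumptions for a 70B-class model; tensor-parallel splits transfer far more):

```python
def handoff_bytes_per_token(hidden_dim, dtype_bytes=2):
    """Bytes crossing one GPU boundary per token with layer-wise splitting:
    a single hidden-state activation vector (fp16 assumed)."""
    return hidden_dim * dtype_bytes

# Illustrative: hidden size 8192, fp16 -> 16 KiB per boundary crossing
print(handoff_bytes_per_token(8192))  # 16384
```

At 16 KiB per hop, even a gen3 x4 link moves that in microseconds, which is why the inter-GPU overhead during inference is modest rather than crippling.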