this post was submitted on 22 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
It is very important if you care about performance. During inference, a lot of data has to move from one card to the other. I was using 1x risers and it was painfully slow. If you have two similar NVIDIA cards, you can work around this with an NVLink bridge.
Otherwise, aim for at least PCIe 4.0 x8 when looking for a motherboard. I sniped an EPYC system off eBay for €1000 that has six PCIe 4.0 x16 slots, and it handles all four 3090s without issue.
https://preview.redd.it/u0bvy2kkzw1c1.jpeg?width=4032&format=pjpg&auto=webp&s=ecb164bbf59504e590c19403554e24df8f9236c8
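If you want to sanity-check what link each card actually negotiated (a 1x riser shows up immediately), here is a minimal sketch using pynvml; the output format is mine, and note that an idle card may report a downclocked link, so check while the GPUs are under load:

```python
# Minimal sketch (assumes the pynvml package, `pip install nvidia-ml-py`):
# print the PCIe link each GPU has actually negotiated. A card stuck on a
# mining-style 1x riser will report something like Gen1 x1 here.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        cur_w = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        max_w = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
        print(f"GPU {i} ({name}): Gen{cur_gen} x{cur_w} "
              f"(max Gen{max_gen} x{max_w})")
finally:
    pynvml.nvmlShutdown()
```

`nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv` gives the same numbers from the shell.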
Was the CPU from eBay too? Any reliability issues? It seems a lot of the cheap ones on eBay are gray market / production candidates.