this post was submitted on 29 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


Continuing my quest to choose a rig with lots of memory, one possibility is a dual-socket motherboard. Gen 1 to 3 EPYC chips each have 8 channels of DDR4, so a dual-socket board gives 16 memory channels in total: good bandwidth, even if it doesn't beat a GPU, and far more capacity (up to 1024 GB). Builds with 64+ threads can be pretty cheap.
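As a back-of-envelope sanity check on the bandwidth claim, here is a quick calculation (the DDR4-3200 speed and the 40 GB model size are illustrative assumptions, not figures from the post):

```python
# Rough theoretical peak bandwidth of a dual-socket, 16-channel DDR4 system.
# Assumes DDR4-3200; Gen 1/2 EPYC actually top out lower (2666/3200 MT/s).
CHANNELS = 16       # 8 channels per socket x 2 sockets
MT_PER_S = 3200e6   # DDR4-3200: 3200 mega-transfers per second
BUS_BYTES = 8       # each channel is 64 bits wide = 8 bytes per transfer

bw_gb_s = CHANNELS * MT_PER_S * BUS_BYTES / 1e9
print(f"theoretical peak: {bw_gb_s:.1f} GB/s")  # 409.6 GB/s

# Since token generation is roughly bandwidth-bound (every weight is read
# once per token), bandwidth / model size gives a loose upper bound:
model_gb = 40  # hypothetical ~70B model at 4-bit quantization
print(f"rough upper bound: {bw_gb_s / model_gb:.1f} tok/s")  # 10.2 tok/s
```

In practice sustained bandwidth and tokens/s land well below these peaks, especially across NUMA nodes.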

My questions are:

  • Does the dual CPU setup cause trouble with running LLM software?
  • Is it reasonably possible to get Windows, drivers, etc. working on 'server' hardware?
  • Is there anything else I should consider vs going for a single EPYC or Threadripper Pro?
vikarti_anatra@alien.top · 11 months ago

That's why llama.cpp has a `--numa` option.

From my experience, the number of memory channels matters a lot, so it's best to have every memory slot populated.
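For reference, a sketch of the kind of invocations involved (a CLI fragment, not tested here; the `--numa` modes vary between llama.cpp builds, so check `--help` on yours, and the model path is a placeholder):

```shell
# Spread model weights across both sockets' memory (newer llama.cpp builds
# accept a mode argument: distribute / isolate / numactl):
./llama-cli -m model.gguf --numa distribute -t 64

# Alternatively, use numactl directly to interleave allocations across nodes:
numactl --interleave=all ./llama-cli -m model.gguf -t 64

# Or pin to a single socket to avoid cross-socket memory traffic entirely:
numactl --cpunodebind=0 --membind=0 ./llama-cli -m model.gguf -t 32
```

Whether interleaving or pinning wins depends on the model size and the cross-socket link, so it's worth benchmarking both on your hardware.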