Your 512GB of RAM is overkill. Those Xeons are probably pretty mediocre for this sort of thing, unfortunately: token generation is mostly memory-bandwidth-bound, and older Xeon DDR4 is slow.
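Rough back-of-envelope on why, in Python (the bandwidth and model-size figures below are illustrative assumptions, not measurements):

```python
# Generation speed is roughly memory bandwidth / bytes streamed per token,
# since each token reads (approximately) the whole model once.
# Figures are assumptions for illustration.

model_size_gb = 40    # ~size of a 70B GGUF at q4_k_m (assumed)
xeon_bw_gbps = 85     # quad-channel DDR4-2666 theoretical peak (assumed)

tokens_per_sec = xeon_bw_gbps / model_size_gb
print(f"~{tokens_per_sec:.1f} tokens/s upper bound on CPU")  # ~2.1
```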
With a 4090 or 3090 taking as many layers as fit in VRAM, you should get about 2 tokens per second with 70B GGUF q4_k_m inference. That's what I do and find it tolerable, but it depends on your use case.
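In llama-cpp-python that setup looks something like this (the path and layer count are placeholders; a 70B q4_k_m won't fit in 24 GB, so you tune `n_gpu_layers` until VRAM is full):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with CUDA)

llm = Llama(
    model_path="./models/llama-2-70b.Q4_K_M.gguf",  # hypothetical filename
    n_gpu_layers=45,  # assumption: ~45 of 80 layers fit in 24 GB; adjust for your card
    n_ctx=4096,
)

out = llm("Q: Why is CPU inference slow?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```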
You'd need a 48GB GPU or fast DDR5 RAM to get faster generation than that.
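For scale, the same estimate with the whole model resident in VRAM (the bandwidth figure is a rough assumption for a 48 GB workstation-class card):

```python
# Same back-of-envelope, model fully in VRAM (figures assumed).
model_size_gb = 40    # ~70B at q4_k_m
gpu_bw_gbps = 800     # ballpark for a 48 GB workstation card
print(f"~{gpu_bw_gbps / model_size_gb:.0f} tokens/s upper bound")  # ~20
```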
OP seems to want 5-10 T/s on a budget with a 70B... not going to happen, I think.