this post was submitted on 17 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
From what I see, the RTX 8000 is a bit slower than the P40 at inference and a bit faster at training. The only speed-up would come from running 2 cards instead of 6. Out of curiosity, what speeds did you get with the P40s?
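
If it helps to compare numbers: here's a minimal sketch of how I'd measure generation speed (tokens/s) with llama-cpp-python. The model path, prompt, and token count are just placeholders; adjust n_gpu_layers for your setup.

```python
# Minimal tokens/s timing sketch using llama-cpp-python.
# Assumes a local GGUF model file; the path below is hypothetical.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,  # offload all layers to the GPU
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain how GPU memory bandwidth affects inference speed.",
          max_tokens=256)
elapsed = time.perf_counter() - start

# The completion dict reports how many tokens were actually generated.
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```

Running the same script on both cards with the same model and quant would give a fair apples-to-apples comparison.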