this post was submitted on 25 Nov 2023
LocalLLaMA
Community for discussing Llama, the family of large language models created by Meta AI.
I'm running a Tesla M40 12GB and I'm really close to pulling the trigger on a 24GB one. I also have one of the Tesla P4s in my server. With the M40 I can fully offload 13B models to VRAM.
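For anyone curious what "fully offload" looks like in practice, here's a minimal llama-cpp-python sketch. The model path is just an example, and I'm assuming a Q4-quantized 13B GGUF, which at roughly 8 GB fits comfortably in 12 GB of VRAM; `n_gpu_layers=-1` asks llama.cpp to push every layer to the GPU:

```python
from llama_cpp import Llama

# Hypothetical model path; any Q4-quantized 13B GGUF should behave similarly.
llm = Llama(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # -1 = offload all layers to VRAM
    n_ctx=2048,
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```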
How does an M40 compare with an A4000?