3090s are faster. P40s are mostly stuck with GGUF. P100s are decent for FP16 ops but you will need twice as many.
All depends on what you want to do. 8 cards are going to use a lot of electricity and make a lot of noise.
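For reference, running GGUF quants on P40-class cards usually goes through llama.cpp. Below is a minimal sketch using llama-cpp-python; the model path and settings are assumptions for illustration, not anything from the comment above.

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python (assumed setup).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-70b.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=-1,   # offload all layers to the GPU(s)
    n_ctx=4096,        # context window
)

out = llm("Q: What are P40s good for?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```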
Have a look at this post, it might help you make a decision - https://www.reddit.com/r/LocalLLaMA/comments/17phkwi/powerful_budget_aiworkstation_build_guide_48_gb/
Don’t get those; the token rate will be crazy slow.
I like my 4x3090 Frankenstein server. A 70B AWQ model runs at 27 t/s. Cost was below 4k€.
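For anyone curious how a setup like that is typically driven, here is a minimal sketch of serving a 70B AWQ checkpoint across 4 GPUs with vLLM; the model name and parameters are assumptions, not the poster's actual config.

```python
# Minimal sketch: tensor-parallel serving of a 70B AWQ model with vLLM (assumed setup).
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-70B-AWQ",  # hypothetical AWQ checkpoint
    quantization="awq",
    tensor_parallel_size=4,            # split the model across the 4x3090s
    gpu_memory_utilization=0.90,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```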