I've got a 3060 Ti (8 GB VRAM) and 16 GB of system RAM, and I can run 13B GGUFs with 30 layers offloaded to the GPU at 8-12 t/s, no problem. I can't run a 20B GGUF at all, though.
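For reference, here's a minimal sketch of what that partial-offload setup looks like with llama-cpp-python, assuming a CUDA-enabled build; the model filename and generation settings below are my assumptions, not something from the comment:

```python
# Partial GPU offload: put 30 transformer layers on the GPU,
# keep the rest in system RAM (llama.cpp handles the split).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b.Q4_K_M.gguf",  # hypothetical path/quant
    n_gpu_layers=30,  # number of layers offloaded to the GPU
    n_ctx=2048,       # context window; larger values use more VRAM/RAM
)

output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```

The same split is available from the llama.cpp CLI via the `-ngl` / `--n-gpu-layers` flag.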
If you want to run GPU-only inference, though, you'll need 16+ GB (more likely 20+ GB) of VRAM.
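A rough back-of-the-envelope estimate is consistent with those numbers. The bits-per-weight and overhead figures below are my assumptions for a Q4_K_M-style quant, not measurements:

```python
# Rough VRAM estimate for fully GPU-resident inference:
# weights at ~4.85 bits/weight (Q4_K_M ballpark, an assumption)
# plus ~2 GB for KV cache and runtime overhead.

def estimate_vram_gb(n_params_billion: float,
                     bits_per_weight: float = 4.85,
                     overhead_gb: float = 2.0) -> float:
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

for size in (13, 20):
    print(f"{size}B model: ~{estimate_vram_gb(size):.1f} GB VRAM")
# 13B: ~9.9 GB  -> over an 8 GB card's budget, hence the partial offload
# 20B: ~14.1 GB -> roughly why 16+ GB is the floor for GPU-only inference
```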