this post was submitted on 14 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
What about Cat 13b 1.0? It slipped through here without much attention, but it looks really good. With 16 GB you could run Q8.
Not really, though. Any kind of context will push you over 16 GB. Or I'm doing something wrong.
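For what it's worth, some back-of-the-envelope arithmetic supports that. This is a rough sketch assuming Llama-2 13B dimensions (40 layers, hidden size 5120, fp16 KV cache), not measured numbers:

```python
# Rough VRAM estimate for a Llama-architecture 13B at Q8.
# Layer/hidden sizes are the Llama-2 13B values; everything is an estimate.
params = 13e9
weights_gb = params * 1 / 1e9                    # 8 bits = 1 byte/param -> ~13 GB
n_layers, d_model = 40, 5120                     # assumed Llama-2 13B dimensions
kv_bytes_per_token = 2 * n_layers * d_model * 2  # K and V tensors, fp16
ctx = 4096
kv_gb = kv_bytes_per_token * ctx / 1e9           # ~3.4 GB at full context
print(f"{weights_gb + kv_gb:.1f} GB")            # ~16.4 GB -- over a 16 GB card
```

So the weights alone roughly fit, but a full context window tips it past 16 GB.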
GGUF? Even on a GTX 1080 you get around 4 t/s with Q8, which is almost as fast as the average person's reading speed. With 16 GB it should be 4-5x faster.
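A minimal sketch of that setup with llama-cpp-python, assuming a hypothetical Q8_0 GGUF file; `n_gpu_layers` is whatever fits in your VRAM, and the remaining layers run on CPU:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./cat-13b.Q8_0.gguf",  # hypothetical filename
    n_gpu_layers=20,  # offload what fits in VRAM; the rest runs on CPU
    n_ctx=4096,
)
out = llm("Hello, ", max_tokens=64)
print(out["choices"][0]["text"])
```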
Hadn't thought of that. I have 24 GB, so I've always used GPTQ, and with that you really need more than 16 GB.
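For comparison, a minimal sketch of loading a GPTQ checkpoint through transformers (the repo id is hypothetical, and it needs the auto-gptq/optimum backend plus accelerate installed). Unlike GGUF's partial offload, GPTQ generally wants the whole model resident in VRAM, which is why 16 GB gets tight:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/SomeModel-13B-GPTQ"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

ids = tok("Hello, ", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=64)[0]))
```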