this post was submitted on 18 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
GGUF Goliath will give you the best answers but will be very slow. You can offload around 40 layers to VRAM, and your RAM will still be the speed bottleneck, but I think 2 t/s is possible with a 2-bit quant.
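As a rough sketch of the setup described above, a llama.cpp invocation with partial GPU offload could look like this. The model filename is an assumption (any 2-bit Goliath GGUF quant would do), and the exact flag for the main binary may differ across llama.cpp versions:

```shell
# Run a 2-bit GGUF quant of Goliath with 40 layers offloaded to VRAM.
# goliath-120b.Q2_K.gguf is a placeholder filename -- substitute your own quant.
# -ngl / --n-gpu-layers controls how many transformer layers go to the GPU;
# the remaining layers stay in system RAM, which is why RAM bandwidth
# still bottlenecks generation speed.
./main -m goliath-120b.Q2_K.gguf \
  --n-gpu-layers 40 \
  -p "Hello" \
  -n 128
```

Raising `--n-gpu-layers` further (if VRAM allows) shifts more work onto the GPU and reduces the RAM bottleneck.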