From my understanding, if you want to run models without quality loss, then quantized models are not exactly what you are looking for, at least not below a certain threshold. With your setup you should be able to run 7B models in 8-bit.
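If you go the 8-bit route, a minimal sketch with transformers + bitsandbytes could look like this (the model id below is just a placeholder, swap in whichever 7B checkpoint you actually use):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder, use your own 7B checkpoint

# 8-bit weight quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers on GPU/CPU automatically
)
```

`device_map="auto"` will spill layers to CPU RAM if the GPU runs out, which is slower but keeps things working.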
For everything beyond that you'll need more aggressively quantized models (e.g., 4-bit), which also introduce more quality loss; see the sketch below.
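For 4-bit, bitsandbytes handles that too; a rough sketch, again with a placeholder model id and NF4 assumed as the quantization type:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-13b-hf"  # placeholder for a larger checkpoint

# 4-bit NF4 quantization; compute still happens in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```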
There was a post a while back that laid out the hardware requirements for 8-bit and 4-bit, for both GPU and CPU setups. Of course you can push quantization even further and run even larger models, but it introduces more loss as well.
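As a back-of-the-envelope estimate for the weights alone (ignoring KV cache and activations, and with an assumed ~20% overhead factor):

```python
def estimate_weight_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough memory needed for model weights only, in GB."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# A 7B model: ~8.4 GB at 8-bit, ~4.2 GB at 4-bit (with the assumed overhead)
print(estimate_weight_memory_gb(7, 8))  # ~8.4
print(estimate_weight_memory_gb(7, 4))  # ~4.2
```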