this post was submitted on 26 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
Q4_0 and Q4_1 would both be legacy.
The K_M is the new "k-quant" (I guess it's not that new anymore; it's been around for months now).
The idea is that the more important layers are quantized at a higher precision, while the less important layers are quantized at a lower precision.
It seems to work well, which is why it has become the new standard for the most part.
Q4_K_M does the most important layers at 5 bit and the less important ones at 4 bit.
It's closer in quality/perplexity to Q5_0, while being closer in size to Q4_0.
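To see why the mixed quant lands between Q4_0 and Q5_0 in size, here's a rough back-of-the-envelope sketch. It is not llama.cpp's actual tensor layout; the layer split and bit widths are illustrative assumptions.

```python
# Illustrative only: estimate the effective bits-per-weight of a mixed
# quant where "important" tensors use ~5 bits and the rest ~4 bits.
# The 1/3 vs 2/3 split below is an assumption, not llama.cpp's real split.

def effective_bpw(n_hi, n_lo, bpw_hi=5.0, bpw_lo=4.0):
    """Weighted average bits-per-weight across the two precision tiers."""
    return (n_hi * bpw_hi + n_lo * bpw_lo) / (n_hi + n_lo)

# Suppose roughly a third of a 7B model's weights get the 5-bit treatment:
bpw = effective_bpw(n_hi=2.3e9, n_lo=4.7e9)
size_gb = 7e9 * bpw / 8 / 1e9  # params * bits -> bytes -> GB
print(f"~{bpw:.2f} bits/weight, ~{size_gb:.1f} GB")
```

The average always falls strictly between the two tiers' bit widths, i.e. between pure 4-bit and pure 5-bit, which matches the size-vs-quality tradeoff described above.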