It was bothering me a bit that the only metric people really had for objectively understanding the 'loss' from quantization was perplexity.
So, after hacking on koboldcpp's sampler code to force it to output probabilities for a predetermined sequence (so that I could make a fair comparison)...
Mistral 7b Avg Quantization Differences
Ta-da!
This is Mistral 7b's various popular GGUF quantizations compared to the fp16 base model, as measured by KL divergence. What I'm measuring is how similar each quant's next-token probability distributions are to the fp16 model's, over a predetermined sequence of roughly 350 tokens of Wikipedia text.
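(If you want to reproduce the probability dumps without touching koboldcpp, here's a rough sketch of the same idea using llama-cpp-python — the model paths, context size, and the `logits_all=True` / `.scores` approach are my assumptions for illustration, not the exact code behind these graphs.)

```python
# Rough sketch: dump full-vocab next-token probabilities for a fixed text.
# Assumes llama-cpp-python, where logits_all=True exposes per-position logits
# via llm.scores (NOT the koboldcpp hack used for the actual graphs).
import numpy as np
from llama_cpp import Llama

def dump_probs(model_path, text, out_path, n_ctx=512):
    llm = Llama(model_path=model_path, n_ctx=n_ctx, logits_all=True, verbose=False)
    tokens = llm.tokenize(text.encode("utf-8"))
    llm.eval(tokens)
    logits = np.array(llm.scores[: len(tokens)])   # (n_tokens, n_vocab)
    logits -= logits.max(axis=-1, keepdims=True)   # softmax each row
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    np.save(out_path, probs)

# Same text, one dump per quant, e.g.:
# dump_probs("mistral-7b.fp16.gguf", wiki_text, "fp16.npy")
# dump_probs("mistral-7b.Q4_K_M.gguf", wiki_text, "q4_k_m.npy")
```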
This means (if we adapt the scale for readability):
- fp16 = ~0 measured KL change from original probabilities (cause it's the original)
- Q8_0 = ~0.06 avg. measured KL change from original probabilities
- Q6_K = ~0.1 avg. measured KL change from original probabilities
- Q5_K_M = ~0.3 avg. measured KL change from original probabilities
- Q4_K_M = ~1.0 avg. measured KL change from original probabilities
- Q3_K_M = ~3.7 avg. measured KL change from original probabilities
- Q2_K = ~8.2 avg. measured KL change from original probabilities
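(For anyone reproducing this: given two of those probability dumps, the per-token KL divergence is just the following — a simplified numpy version with assumed file names; the numbers above are rescaled for readability.)

```python
import numpy as np

def kl_per_token(p_ref, p_quant, eps=1e-10):
    """KL(ref || quant) at each token position; both arrays are (n_tokens, n_vocab)."""
    p = np.clip(p_ref, eps, 1.0)
    q = np.clip(p_quant, eps, 1.0)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

kl = kl_per_token(np.load("fp16.npy"), np.load("q4_k_m.npy"))
print("avg KL vs fp16:", kl.mean())
```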
"Average difference" obscures the bigger problem with low quantization, though. Technically, if many tokens are easily predictable or predetermined no matter what quant, this will contribute to the average. So what happens if, out of the 300+ tokens of text I tested on, we specifically pick the highest reported difference in KL divergence for each respective quantization and graph that?
Now it becomes clear how big the gap can be for 'difficult' tokens!
To make the differences less aggressive, let's take the top ~5% of tokens most affected by quantization for each quant, and graph that out.
So, if we solely average over the top 5% of tokens that were 'most affected' by quantization (which excludes the 'obvious' tokens), the scale is significantly more dramatic.
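(The 'worst token' and 'top ~5%' graphs are just simple stats over that same per-token KL array — reusing `kl_per_token` from the snippet above:)

```python
import numpy as np

kl = kl_per_token(np.load("fp16.npy"), np.load("q4_k_m.npy"))

print("mean KL:       ", kl.mean())                  # the 'average difference' graph
print("max KL:        ", kl.max())                   # single worst-hit token
top_n = max(1, int(len(kl) * 0.05))                  # top ~5% most-affected tokens
print("top 5% mean KL:", np.sort(kl)[-top_n:].mean())
```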
I'll be updating this post with 13b soon enough. I'd also do it for 70b, but since I'm on 12GB VRAM, measuring would be extremely slow as it'd go into the pagefile for every single quant. ~~is this the part where I should shill a kofi or something?~~
I hope this helps the sub understand how much quantization really impacts models in a somewhat more objective sense.
You could also use this to measure different models against each other, right? And just in general, use it as a model benchmark.
-Separate Idea- Also, isn't getting the true probabilities useful anyway? Then the training process could target those probability distributions directly, instead of training twice (sequence to probabilities). So you'd be training on less data, which would reduce training costs and whatnot.
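(For what it's worth, that's essentially distillation on soft targets — training against the teacher's full probability distribution instead of one-hot next tokens. A rough PyTorch-style sketch, with made-up names and shapes:)

```python
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_probs):
    # student_logits: (seq_len, vocab) raw logits from the model being trained
    # teacher_probs:  (seq_len, vocab) "true" probabilities dumped from the reference model
    log_q = F.log_softmax(student_logits, dim=-1)
    # cross-entropy against the full distribution; equivalent to KL(teacher || student)
    # up to the teacher's entropy, which is constant w.r.t. the student
    return -(teacher_probs * log_q).sum(dim=-1).mean()
```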