this post was submitted on 27 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


The title, pretty much.

I'm wondering whether a 70B model quantized to 4-bit would perform better than a 7B/13B/34B model at fp16. It would be great to get some insights from the community.
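
Not directly about output quality, but a rough memory comparison helps frame the trade-off. A minimal back-of-envelope sketch (weights only, ignoring KV cache, activations, and quantization-format overhead; parameter counts are nominal):

```python
# Rough VRAM estimate for model weights only (no KV cache, activations,
# or quantization overhead) -- parameter counts are nominal.
def weight_memory_gb(num_params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB for a given parameter count and precision."""
    return num_params_billion * 1e9 * bits_per_weight / 8 / 1e9

configs = [
    ("7B fp16", 7, 16),
    ("13B fp16", 13, 16),
    ("34B fp16", 34, 16),
    ("70B q4", 70, 4),
]

for name, params_b, bits in configs:
    print(f"{name:>9}: ~{weight_memory_gb(params_b, bits):.0f} GB")
```

So a 4-bit 70B sits at roughly 35 GB of weights, between 13B and 34B at fp16, which is why the "big model, low precision vs. small model, full precision" question comes up at all.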

[–] Secret_Joke_2262@alien.top 1 points 11 months ago (1 children)

A friend told me that for 70B, performance drops by about 10% when using q4. The larger the model, the less it suffers from weight quantization.
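
If you'd rather measure that drop yourself than take the 10% figure on faith, one way is to load the same checkpoint in fp16 and in 4-bit and run the same eval (e.g. perplexity on a held-out set) on both. A minimal sketch using transformers + bitsandbytes; the model id and config values here are just illustrative assumptions:

```python
# Sketch: load a Llama checkpoint quantized to 4-bit NF4 on the fly,
# so it can be compared against the fp16 variant on the same benchmark.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-hf"  # placeholder; any Llama checkpoint works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard across available GPUs
)
```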

[–] Nkingsy@alien.top 1 points 11 months ago

Or the more undertrained it is, the more fat can be trimmed.