this post was submitted on 20 Nov 2023
LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

[–] mpasila@alien.top 1 points 11 months ago (2 children)

Did anyone manage to get them working? I tried GGUF/GPTQ and running them unquantized with trust-remote-code, and they just produced garbage. (I also tried removing BOS tokens; same result.)

[–] watkykjynaaier@alien.top 1 points 11 months ago (1 children)

I've completely fixed gibberish output on Yi-based and other models by setting the RoPE Frequency Scale to a value below 1 (1 seems to be the default). I have no idea why that works, but it does.

What I find even stranger is that the models often keep working even after setting the frequency scale back to 1.
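For context on what that knob does: RoPE frequency scaling (exposed in llama.cpp as `--rope-freq-scale`) linearly rescales the position index before the rotary angles are computed, so a scale below 1 makes long positions look like shorter ones to the model. A minimal NumPy sketch of that idea (function and parameter names here are illustrative, not any loader's actual API):

```python
import numpy as np

def rope_angles(pos, dim=64, base=10000.0, freq_scale=1.0):
    """Rotary-embedding angles for a single token position.

    freq_scale < 1 compresses positions: position `pos` is
    embedded as if it were `pos * freq_scale`.
    """
    # Standard RoPE inverse frequencies: base^(-2i/dim) for each pair of dims
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return (pos * freq_scale) * inv_freq

# With freq_scale = 0.5, position 4096 gets the same angles
# as position 2048 would at the default scale of 1.0.
a = rope_angles(4096, freq_scale=0.5)
b = rope_angles(2048, freq_scale=1.0)
assert np.allclose(a, b)
```

This is why the trick is usually used for context extension; why it would also fix outright gibberish at short contexts is unclear, as the comment above says.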

[–] Aaaaaaaaaeeeee@alien.top 1 points 11 months ago

What value specifically worked?

[–] Jelegend@alien.top 1 points 11 months ago

Yeah, exactly the same thing here. Absolute rubbish no matter what I tried. I tried the 8B, 15B, and 23B.