this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

Somehow I keep coming back to MythoMax. I dunno if I'm prompting newer models wrong or what but in the 13B space MythoMax just keeps giving me the best results.

Anyone have something else they like and can recommend? Maybe something with a longer context? I feel like I must be screwing something up, which is why newer models aren't performing as well for me, but I'd also kind of like a head nod confirming that's the case and that there's better stuff out there.

Edit: Sorry for typo in the title but I can't fix it. T_T

Tacx79@alien.top 1 points 10 months ago

What about Cat 13b 1.0? It slipped through here without much attention, but it looks really good; with 16 GB you could run q8.

Herr_Drosselmeyer@alien.top 1 points 10 months ago

with 16gb you could run q8

Not really, though. Any kind of context will push you over 16 GB. Or I'm doing something wrong.

Tacx79@alien.top 1 points 10 months ago

GGUF? Even on a GTX 1080 you get like 4 t/s with q8, which is almost as fast as the average person's reading speed; with 16 GB it should be 4-5x faster.

Herr_Drosselmeyer@alien.top 1 points 10 months ago

Hadn't thought of that. I have 24 GB, so I've always used GPTQ, and with that you really need more than 16 GB.
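The disagreement above largely comes down to weight-size arithmetic. A rough back-of-envelope sketch in Python, assuming a generic 13B-parameter model and round bits-per-weight figures (these are approximations, not exact numbers for any specific MythoMax or Cat quant):

```python
# Back-of-envelope VRAM estimate for a quantized 13B model.
# Assumptions: 13.0e9 parameters, flat bits-per-weight; real quant
# formats (q8_0, GPTQ 4-bit) carry extra per-block scale overhead,
# and the KV cache for a few thousand tokens of context adds
# roughly 1-3 GiB on top of the weights.

def model_vram_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate GiB needed just to hold the weights."""
    return n_params * bits_per_weight / 8 / 1024**3

print(f"13B @ 8-bit: ~{model_vram_gib(13.0e9, 8):.1f} GiB")  # ~12.1 GiB
print(f"13B @ 4-bit: ~{model_vram_gib(13.0e9, 4):.1f} GiB")  # ~6.1 GiB
```

At 8 bits the weights alone sit around 12 GiB, so once context and overhead are added a 16 GiB card is borderline, which matches both comments: GGUF can offload layers to CPU and still run, while a fully-on-GPU GPTQ load spills over.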