USM-Valor

[–] USM-Valor@alien.top 1 points 9 months ago

13B and 20B Noromaid for RP/ERP.

I am experimenting with comparing GGUF to EXL2, as well as with stretching context. So far, Noromaid 13B at GGUF Q5_K_M stretches to 12k context on a 3090 without issues. Noromaid 20B at Q3_K_M stretches to 8k without issues and is, in my opinion, superior to the 13B. I have recently stretched Noromaid 20B to 10k using 4bpw EXL2 and it is giving coherent responses, but I haven't used it enough to assess the quality.

All this is to say, if you enjoy roleplay you should be giving Noromaid a look.
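For anyone who wants to try the same setup, here is a minimal sketch of a KoboldCPP launch with a stretched context window. The model path and `--gpulayers` count are illustrative assumptions for a 24 GB 3090, not the exact command used above:

```shell
# Sketch: run a GGUF model in KoboldCPP with context stretched to 12k.
# Filename and layer count are assumptions; adjust for your download/VRAM.
python koboldcpp.py \
  --model ./models/noromaid-13b.Q5_K_M.gguf \
  --contextsize 12288 \
  --usecublas \
  --gpulayers 43
```

KoboldCPP handles the RoPE scaling for the larger context automatically based on `--contextsize`; point SillyTavern at the resulting local API endpoint as usual.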

[–] USM-Valor@alien.top 1 points 9 months ago

I've had the same experiences with the Yi finetunes. I tried them on single-turn generations and they were very promising. However, when starting a chat from scratch, I ran into a ton of repetition and looping. Some models need a very tight set of parameters to perform well, whereas others will function well under almost any sane set of settings. I'm thinking Yi leans towards the former, which will have users thinking these models are inferior to simpler but more flexible ones.

[–] USM-Valor@alien.top 1 points 10 months ago

Backend: KoboldCPP 99% of the time; the other 1% (testing EXL2, etc.) is Ooba.

Front End: SillyTavern

Why: GGUF is my preferred model format, even with a 3090, and KoboldCPP is the best backend I have seen for running it. SillyTavern should be obvious: it is updated multiple times a day and is amazingly feature-rich and modular.