Haiart

joined 11 months ago
[–] Haiart@alien.top 1 points 10 months ago

Great. Also, remember to keep an eye on the KoboldCPP GitHub for updates; I noticed that when you said two days ago that you were using 1.48, they were already on version 1.50 there.

[–] Haiart@alien.top 1 points 10 months ago (3 children)

It's working perfectly fine for me in KoboldCPP.

Check whether you forgot to disable any other sampling methods; you have to disable everything and leave ONLY Min-p enabled:

- Top-p at 1
- Top-K at 0
- Top-A at 0
- Typical at 1
- TFS at 1
- Seed at -1
- Mirostat Mode OFF

If you NEED it, you can also enable Repetition Penalty at 1.05~1.20 at most (I personally use Rep. Pen. Range 2048 and Slope 0.9, but don't bother with those unless you enable Repetition Penalty).

Also, with Min-p you should use a higher Temperature: start with Temperature at 1.5 and Min-p at 0.05, then fine-tune those two numbers at will; read the post to understand why.
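For anyone curious what Min-p actually does: it keeps only the tokens whose probability is at least min_p times the probability of the single most likely token, after temperature scaling, then samples from the survivors. That's why a higher Temperature is safe, the cutoff scales with the top token. A minimal sketch in Python (not KoboldCPP's actual code; function and variable names are mine):

```python
import math
import random

def min_p_sample(logits, temperature=1.5, min_p=0.05):
    """Sample a token index using Min-p filtering.

    Tokens whose probability falls below min_p * (top token's
    probability) are discarded; the rest are renormalized and
    sampled from.
    """
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Min-p cutoff: the threshold scales with the top probability,
    # so raising Temperature never lets pure noise through.
    threshold = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= threshold]

    # Renormalize over the surviving tokens and draw one.
    kept_total = sum(p for _, p in kept)
    r = random.random() * kept_total
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With min_p=0.05, a confident distribution collapses to a handful of candidates, while a flat distribution keeps many, which is the whole appeal over a fixed Top-K.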

 

Hi. Before anyone asks: I am not behind this model in any capacity, nor did anyone involved with it ask me to do this.
I am just a normal LLM enjoyer who wants better 13B models in the near future, because at the moment they're being run into the ground by the many Mistral 7B finetunes, and since we don't have any Mistral 13B base model...

The model in question is this one, which seems to be flying under the radar for some reason:
https://huggingface.co/sequelbox/DaringFortitude
TheBloke has already done his magic on it; just search his profile on Hugging Face with Ctrl+F.

The reason I am doing this is that I honestly think this is a really, really good and useful base model for further finetuning/merging, etc. (I did a little testing, but my machine is too weak to test any further).