The first thing I looked for was the number of training tokens. I think Yi-34B got a lot of benefit from its 3 trillion, so this model also having 3 trillion bodes well.
ambient_temp_xeno
I agree. Here's what I use for Yi-34B-Chat: `--top-k 0 --min-p 0.05 --top-p 1.0 --color -t 5 --temp 3 --repeat_penalty 1 -c 4096 -i -n -1`
I think the --min-p I have is a bit low, so maybe you have min-p back to front? Higher is the more restrictive/precise setting, I think.
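For anyone unsure which way min-p cuts, here's a toy sketch of the usual formulation (keep only tokens whose probability is at least min_p times the top token's). It's not llama.cpp's actual sampler chain (the ordering of temperature vs. min-p varies by implementation), and the logits are made up purely for illustration:

```python
import numpy as np

def min_p_filter(logits, min_p, temp):
    """Temperature-scale the logits, then zero out any token whose
    probability falls below min_p times the top token's probability."""
    scaled = logits / temp
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()   # higher min_p => stricter cutoff
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

# toy logits, purely illustrative
logits = np.array([4.0, 3.5, 2.0, 1.0, 0.0])
print(min_p_filter(logits, min_p=0.05, temp=3.0))  # keeps all 5 tokens
print(min_p_filter(logits, min_p=0.5,  temp=3.0))  # keeps only the top 3
```

So at --temp 3, a tiny min-p like 0.05 barely prunes anything; raising it is what tightens the distribution.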
Orca still memeing strong.
Does it have min-p sampling?
I'm not sure where this chart is from, but I remember it was made before QLoRA even existed.
I still have this feeling in my gut that closedai have been doing this for a while. It seems like a free lunch.
Seems amazingly good. I might get some real use out of a Raspberry Pi after all.
Fully open source?
I had Nous-Capybara-34B get it right a couple of days ago with these settings.
Which is interesting. I want to test this new Yi chat model because apparently it can do decent ASCII art?! Need a GGUF though.
Somebody wake up ~~Hicks~~ TheBloke
Apparently the chat version scores about 64 on HumanEval.