nice job!
It works really well. It would be great to get it on the 16k version: https://huggingface.co/NurtureAI/OpenHermes-2.5-Mistral-7B-16k
would it have to be a different dataset?
It's a good question, I can give it a try. Ideally, you'd want a 16k version of the preference dataset to make sure that DPO doesn't ruin it. But considering the low number of training samples, it probably works fine.
What is the difference between the normal and 16k versions?
New favorite model!
what does it feel like to generate tokens?
Wow
really cool! what do you think about using GPT-3.5 as the rejected output, in the hope of squeezing out some extra edge?
Yes, I'd say it'd probably work better than the current approach. If you look at the reward plots on wandb, it feels like the problem is too easy for the model, hence the slight improvement.
I find it odd that your chosen rewards went negative... Doesn't this imply that the chosen samples became less likely than they were under the base model? You still get model improvements, since the rejected samples became even less likely, but it still feels odd. Any thoughts there?
The improvement is so small it could be within the margin of error.
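For context on why both rewards can go negative: DPO's "rewards" are just the β-scaled log-probability ratios between the policy and the reference model, and the loss only cares about the chosen-minus-rejected margin. So the chosen completions can become less likely than under the base model and training still improves, as long as the rejected ones drop even more. A minimal sketch with made-up log-probabilities (all numbers here are illustrative, not from the actual run):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_rewards_and_loss(logp_chosen, logp_rejected,
                         ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit rewards: beta-scaled log-prob ratio vs. the reference model.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    # DPO loss: -log sigmoid(chosen_reward - rejected_reward)
    loss = -math.log(sigmoid(chosen_reward - rejected_reward))
    return chosen_reward, rejected_reward, loss

# Both rewards are negative (the policy assigns *less* probability to
# both completions than the reference does), yet the margin is positive,
# so the loss is still below log(2) -- exactly the situation above.
cr, rr, loss = dpo_rewards_and_loss(
    logp_chosen=-52.0, logp_rejected=-60.0,
    ref_logp_chosen=-50.0, ref_logp_rejected=-48.0,
)
```

With these toy numbers the chosen reward is -0.2 and the rejected reward is -1.2, so the margin the loss optimizes is +1.0 even though both sequences got less likely.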
It holds up pretty decently! What Mirostat Tau value would you recommend with it?
Would be cool to see this in a 34b and 70b.