It's a good question; I can give it a try. Ideally, you'd want a 16k version of the preference dataset to make sure that DPO doesn't ruin it, but given the low number of training samples, it probably works fine as-is.
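For context, DPO optimizes a contrastive loss over (chosen, rejected) preference pairs: it pushes the policy's log-probability margin over the reference model's margin through a sigmoid. A minimal sketch of that loss on dummy per-sequence log-probabilities (the function name and scalar inputs are illustrative, not any library's API):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # Sigmoid of the margin, then negative log-likelihood.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference, the margin is zero and the loss sits at log 2; it shrinks as the policy learns to prefer the chosen completion.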
mlabonne
Thanks for your excellent library! That makes sense: I started writing this article about two months ago (chatcode.py is still mentioned in the README.md, by the way). I got very low throughput using ExLlamaV2 without flash-attn-2; do you know if that's still the case? I've updated these two points, thanks for your feedback.
I'm the author of this article, thank you for posting it! If you don't want to use Medium, here's the link to the article on my blog: https://mlabonne.github.io/blog/posts/ExLlamaV2_The_Fastest_Library_to_Run%C2%A0LLMs.html
Yes, I'd say it would probably work better than the current approach. If you look at the reward plots on wandb, it looks like the problem is too easy for the model, hence the slight improvement.
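The "too easy" signal is visible in the reward margin and accuracy curves that DPO training logs (trl's trainer reports these as rewards/margins and rewards/accuracies on wandb). A minimal sketch of how those two numbers are computed from per-pair log-probabilities; the function name and the input format (lists of (chosen_logp, rejected_logp) pairs) are assumptions for illustration:

```python
def batch_reward_stats(policy_logps, ref_logps, beta=0.1):
    """Mean reward margin and preference accuracy over a batch.

    policy_logps / ref_logps: lists of (chosen_logp, rejected_logp)
    pairs from the policy and the frozen reference model.
    """
    margins, correct = [], 0
    for (pc, pr), (rc, rr) in zip(policy_logps, ref_logps):
        chosen_r = beta * (pc - rc)      # implicit reward of chosen answer
        rejected_r = beta * (pr - rr)    # implicit reward of rejected answer
        margins.append(chosen_r - rejected_r)
        if chosen_r > rejected_r:
            correct += 1
    return sum(margins) / len(margins), correct / len(policy_logps)
```

If accuracy saturates at 1.0 almost immediately while the margin keeps growing, the pairs are trivially separable, which would explain the small downstream improvement.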
https://preview.redd.it/xhuyiquojg3c1.png?width=2398&format=png&auto=webp&s=67725747e6cd9254e38728149fb6cea3ba85d71e