This project has gained popularity in the LLM training community; check it out.
Thanks, Boris!
Thank you!
Would it be possible to fine-tune Mistral on the free tier of Google Colab like this? Even if it takes, let's say, 2x longer?
Yeah, sure! That’s really easy. Just check this tutorial: https://colab.research.google.com/drive/1CNNB_HPhQ8g7piosdehqWlgA30xoLauP
It covers data preparation, training, and saving the trained model to the Hugging Face Hub.
Then you will be able to load your model as follows:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("WrapKey69/MySupaDupaMistral")
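If the tokenizer was pushed to the same Hub repo (an assumption here, not something the tutorial guarantees), generating text with the loaded model would look roughly like this, continuing from the line above:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WrapKey69/MySupaDupaMistral")  # same placeholder repo as above
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # `model` loaded in the line above
print(tokenizer.decode(outputs[0], skip_special_tokens=True))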
I really wish MPS were more widely adopted by now… I hate seeing just CUDA or CPU in all these new libraries.
You think people should prefer Macs over more general hardware?
Not prefer it, but recognize its user base. Metal and the unified memory have a lot to offer, and the compute is there. There's just really no adoption other than a few select projects like llama.cpp and some of the other text-inference engines.
Well, small projects are always going to support one before the other. Do you have any good experience with the Apple hardware, then? I can see the benefits of faster memory, but it needs to prove its worth for people to give it any actual attention.
Any idea what the VRAM requirements are for locally training a 7B QLoRA?
I strongly recommend training on a GPU, as it speeds up the training process by an order of magnitude and has become the standard. I can recommend services that offer GPU rentals at the lowest prices.
https://vast.ai
https://www.runpod.io
https://www.tensordock.com
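As for the original VRAM question: a 7B QLoRA is normally trained by loading the base model in 4-bit and attaching small LoRA adapters, which usually fits comfortably on a single 24 GB card and often in considerably less, depending on sequence length and batch size. A minimal sketch with the standard transformers + peft APIs (not this library's own interface; the base model name and hyperparameters are just illustrative placeholders, and bitsandbytes plus a CUDA GPU are assumed):
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit (NF4) quantization of the frozen base model; requires bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapters on top of the quantized weights
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable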
Ah, OK, but what about a setup with dual local 3090s?
What kind of GPU rental would you recommend? An A100 80GB?
I apologize, I got mixed up. At first, I read RAM and thought you wanted to train on the CPU.
Of course, 2 x 3090 would be more than enough for training. I believe even a 13B model with a large context length could be trained.
If you have 2 GPUs, I suggest training through the command line and utilizing DeepSpeed or FSDP (which has been tested less).
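Not this library's own CLI, just to show the general shape: with the plain Hugging Face Trainer you point TrainingArguments at a DeepSpeed config (or switch on FSDP) and launch the script across both GPUs. A rough sketch, assuming deepspeed is installed and a standard ZeRO-2 JSON config named ds_zero2.json exists:
# Launch across both GPUs with e.g.:  deepspeed --num_gpus 2 train.py
from datasets import Dataset
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # illustrative base model

# Tiny dummy dataset just so the sketch is self-contained; use your real tokenized data here
train_dataset = Dataset.from_dict({"input_ids": [[1, 2, 3, 4]], "labels": [[1, 2, 3, 4]]})

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    deepspeed="ds_zero2.json",      # hand the ZeRO config to the Trainer
    # fsdp="full_shard auto_wrap",  # alternative: FSDP instead of DeepSpeed
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()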
Here are examples of projects where it's explained in detail how you can train:
https://github.com/BobaZooba/xllm-demo
https://github.com/BobaZooba/wgpt
On Twitter, someone I don't know posted about how they easily managed to train on multi-GPU (a super simple and short example):
Awesome thank you.
Last question! Would it be reasonable to train on a single 3090 following that guide as well?
Edit: train a 7B on a single 3090.
And feel free to ask! I'm just here to help you
It depends on how deeply you want to immerse yourself. The library is intended for both rapid prototyping and production-ready development. I would recommend starting with the former; it's very simple and takes about 10-15 minutes to get started, not including training time.
Here is a notebook that allows you to train models on a single GPU:
https://colab.research.google.com/drive/1CNNB_HPhQ8g7piosdehqWlgA30xoLauP
You can download it and train your model locally on your computer.
Thank you so much, this is awesome.
Is this something that could be trained on a laptop without a GPU, or would it be better to use cloud-based GPU services?
Also can you or anyone else recommend any other libraries which simplify LLM training? I've done some ML projects but I'd like to do something a bit deeper and this looks perfect.
I tried out Talequest by the way. Very impressive.
I strongly recommend training on a GPU, as it speeds up the training process by an order of magnitude and has become the standard. I can recommend services that offer GPU rentals at the lowest prices.
Regarding competitor libraries, I can't really recommend anything specific. I created this particular library to simplify multi-GPU training and prototyping, as well as to provide extensive customization options, including modifying the architecture, as is done in LoRA.
Thank you very much for your feedback on Tale Quest. It is very valuable to me, and I plan to further develop it someday. I would appreciate it if you continue to share your feedback. And I wanted to ask right away: is Telegram a popular app where you live? I am very concerned that Telegram might not be widespread enough for a full-fledged launch.
Wow thank you for the detailed reply. Your library looks fantastic. I'm definitely going to give it a go. I'm going to try fine-tuning it on music theory. Is that a crazy idea? Training on a GPU sounds much better. I looked more thoroughly through the repo and found it's all explained in there.
Telegram is a popular app here in the UK. Seems to me like an excellent way to launch it as there's no need for the user to download an app. WhatsApp is much more popular here but maybe it's harder to deploy a bot like this on WhatsApp?
Does this support training models from scratch assuming you can provide a tokenizer and a model configuration?