this post was submitted on 02 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I wanted to try to fine-tune a model in Swedish, since the availability of Swedish models is so lacking. Here is my first attempt, bellman-7b. The name comes from a famous Swedish singer and poet who lived in the 1700s: https://huggingface.co/neph1/bellman-7b-1k

So far it's been tuned for one epoch, on a Google Colab V100, on this dataset: https://huggingface.co/datasets/jeremyc/Alpaca-Lora-GPT4-Swedish

The dataset is machine translated and, as you might expect, not perfect.

The model has picked up Swedish really well, though; I didn't expect one epoch to make it that good. It's based on NousResearch/Llama-2-7b-chat-hf, mainly because that let me try out fine-tuning on the free tier of Colab. The factual quality of the model is so-so, however: it usually gets the first sentence right and then starts to hallucinate wildly. I expect more training would help, but I'm not sure whether to continue or to start over with a Mistral base instead.
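
For anyone curious about the setup, this is roughly what the fine-tuning looks like. It's a minimal QLoRA sketch in the spirit of the Colab template linked in the comments, not my exact notebook; the hyperparameters, prompt format and dataset column names are assumptions:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"

# Load the base model in 4-bit so a 7B fits on a single Colab GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("jeremyc/Alpaca-Lora-GPT4-Swedish", split="train")

# Turn each Alpaca-style row into a single prompt/response string
# (assumes the usual instruction/input/output columns)
def to_text(example):
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": f"<s>[INST] {prompt} [/INST] {example['output']}</s>"}

dataset = dataset.map(to_text)

peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="bellman-7b",
        num_train_epochs=1,            # the one epoch mentioned above
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=25,
    ),
)
trainer.train()
```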

The repetition bug is also prevalent, to the point where it would be hilarious if I hadn't spent time and money on this. :) I don't see anyone talking about it anymore, so I assume it has been solved in more recent models?
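
If anyone wants to play with it in the meantime, the usual band-aid on the inference side is a repetition penalty. A sketch, assuming the uploaded repo loads as a regular checkpoint and with sampler values that are just a starting point:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the repo holds a full merged checkpoint; adjust if it's adapter-only
tokenizer = AutoTokenizer.from_pretrained("neph1/bellman-7b-1k")
model = AutoModelForCausalLM.from_pretrained("neph1/bellman-7b-1k", device_map="auto")

inputs = tokenizer("[INST] Vem var Carl Michael Bellman? [/INST]",
                   return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,   # penalize tokens that have already appeared
    no_repeat_ngram_size=3,    # forbid verbatim 3-gram repeats
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```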

For future fine-tuning, I've made a number of fixes to the dataset: removing some obvious mistakes, pruning some odd generations, and hand-refining the first 100 rows (out of 52,000).
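
The cleanup itself is mostly mechanical filtering, roughly along these lines (the rules shown are only illustrative and assume the standard Alpaca columns):

```python
from datasets import load_dataset

dataset = load_dataset("jeremyc/Alpaca-Lora-GPT4-Swedish", split="train")

def looks_ok(example):
    out = example["output"].strip()
    if not out:                 # drop empty generations
        return False
    if len(out) > 4000:         # drop runaway generations
        return False
    # drop rows where the answer still looks like untranslated English
    english_markers = (" the ", " and ", " is ")
    return sum(m in out.lower() for m in english_markers) < 2

cleaned = dataset.filter(looks_ok)
print(len(dataset), "->", len(cleaned), "rows")
```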

I think I'll also try to produce an additional small dataset (let's call it 'alignment') to apply afterwards. This would include some more knowledge in the Swedish language, and so on, plus some RLHF. So if anyone tries the model out, feel free to send me your chat logs. If they're corrected, all the better, but anything would help.
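
If you do send logs, anything in the same Alpaca-style shape as the training rows would be easiest to fold in. Purely as an illustration (a made-up record, field names assumed):

```python
# One corrected exchange per record, same shape as the training data
example_record = {
    "instruction": "Vem skrev Fredmans epistlar?",
    "input": "",
    "output": "Fredmans epistlar skrevs av Carl Michael Bellman.",
}
```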

Overall, it's been a fun learning experience so far, since this was the first time I'd used Google Colab for anything and the first time I'd quantized anything.

Would you advise me to start over with a better base and a better dataset, or continue for more epochs with what I have?

top 5 comments
[–] ZookeepergameCool173@alien.top 1 points 1 year ago (1 children)

Try to fine-tune a 13B model instead, which has a way better command of Swedish than the 7B and, in my experience, tends to have fewer issues with becoming repetitive.

[–] neph1010@alien.top 1 points 1 year ago

I will. I also like 13B models; they seem like the perfect balance for us GPU-starved people. But I'd rather fail a few times on 7B models first, since it's quicker to iterate on them.

[–] fetballe@alien.top 1 points 1 year ago

Thanks! Can you also make a 13B version?

[–] Acceptable_Can5509@alien.top 1 points 1 year ago (1 children)

Can you share the Colab so others can look at how it was done?

[–] neph1010@alien.top 1 points 1 year ago

I used the Colab template from this post: https://maximelabonne.substack.com/p/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32

https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing

Specifically because it could be run on the free tier. But that's not possible for all Llama 2 models, just some.
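
One follow-up step worth mentioning is merging the LoRA adapter back into full-precision base weights before uploading. Roughly like this; the paths are placeholders and it's a sketch rather than my exact code:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "NousResearch/Llama-2-7b-chat-hf"

# Reload the base in fp16 (you can't merge into 4-bit weights), then fold in the adapter
base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "bellman-7b-adapter")  # placeholder path to the trained LoRA adapter
model = model.merge_and_unload()

model.save_pretrained("bellman-7b-merged")
AutoTokenizer.from_pretrained(base_name).save_pretrained("bellman-7b-merged")
```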