LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Posted 23 Nov 2023

Hey everyone,

I came across a post recently where someone mentioned how hard it is to find simple scripts for fine-tuning LLMs on your own data. So I put together a repo where you just describe your requirements in a config.yaml file and training runs based on that.

Here's the repo - LLM-Trainer
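Roughly, the script loads the YAML, wraps the base model with LoRA adapters via peft, tokenizes your data, and hands everything to a Trainer. Here's a simplified sketch of that flow (the key names and defaults below are illustrative; check lora_config.yaml in the repo for the actual schema):

```python
# Rough sketch of config-driven LoRA fine-tuning with transformers + peft.
# All YAML keys and defaults here are illustrative guesses, not the repo's
# actual schema -- see its lora_config.yaml for the real one.
import yaml
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

with open("lora_config.yaml") as f:
    cfg = yaml.safe_load(f)

tokenizer = AutoTokenizer.from_pretrained(cfg["base_model"])
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many Llama tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(cfg["base_model"])

# Wrap the base model with LoRA adapters; only the small adapter matrices train.
model = get_peft_model(model, LoraConfig(
    r=cfg.get("lora_r", 8),
    lora_alpha=cfg.get("lora_alpha", 16),
    lora_dropout=cfg.get("lora_dropout", 0.05),
    task_type="CAUSAL_LM",
))

# Tokenize the user's data (assumed here to be a plain-text file named in the config).
dataset = load_dataset("text", data_files=cfg["train_file"])["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir=cfg.get("output_dir", "lora-out"),
        num_train_epochs=cfg.get("epochs", 3),
        per_device_train_batch_size=cfg.get("batch_size", 4),
    ),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained(cfg.get("output_dir", "lora-out"))  # saves the adapter weights
```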

It's still a WIP, so let me know if you guys want other features added.

TIA.

top 3 comments
[–] kaszebe@alien.top 1 points 11 months ago (1 children)

Is there a guide for dummies (read: me) to get this to work on oobabooga?

[–] Dry_Long3157@alien.top 1 points 11 months ago

Hey, you could just download the config file and lora_train.py and run them as explained in the README!

To simplify it further: open both files in any editor and load the same environment you use for oobabooga. Then make your changes in the lora_config.yaml file based on your requirements. Once you're done, just run "python lora_train.py".
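
If you want to double-check that your edits took before launching a run, a quick check from the same environment works (the key names below are just examples; use whatever your lora_config.yaml actually has):

```python
# Quick sanity check before launching a run: load the edited config and
# confirm the keys you care about are present. Key names are illustrative;
# match them to whatever the repo's lora_config.yaml actually uses.
import yaml

with open("lora_config.yaml") as f:
    cfg = yaml.safe_load(f)

required = ["base_model", "train_file"]  # hypothetical key names
missing = [k for k in required if k not in cfg]
if missing:
    raise SystemExit(f"lora_config.yaml is missing: {missing}")

for key, value in cfg.items():
    print(f"{key}: {value}")
print('All set -- now run "python lora_train.py" in the same environment.')
```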

If you need further help, feel free to ask!

[–] uhuge@alien.top 1 points 11 months ago

I assume auth_token is for pushing the merged model to the HF Hub? Seems worth noting/clarifying in the README.
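
If so, I'd expect the flow to be something like this (model id, adapter path, and repo name are placeholders on my end):

```python
# Sketch of merging a trained LoRA adapter into its base model and pushing
# the result to the Hugging Face Hub. Model id, adapter path, and repo name
# are placeholders; the token would be a Hugging Face access token.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "lora-out")  # trained adapter dir

merged = model.merge_and_unload()  # fold LoRA weights into the base weights

# This is presumably where auth_token comes in: authenticating the upload.
merged.push_to_hub("your-username/llama-merged", token="hf_...")
AutoTokenizer.from_pretrained(base_id).push_to_hub(
    "your-username/llama-merged", token="hf_..."
)
```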

I'll get back with more feedback when I get to test it.