Hey everyone,

I came across a post recently where someone found it hard to find simple scripts to fine-tune LLMs on their own data. So I put together a repo where you just describe your requirements in a config.yaml file and the training runs based on that.

Here's the repo - LLM-Trainer
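
To give a rough idea of what goes in the config (the exact schema is in the repo, so treat this as an illustrative sketch with hypothetical field names): you basically point at a base model, your dataset, and the LoRA hyperparameters.

```yaml
# illustrative sketch only -- field names here are hypothetical;
# check the repo's lora_config.yaml for the real schema
base_model: meta-llama/Llama-2-7b-hf   # any Hugging Face causal LM
dataset_path: data/train.json          # your own data
output_dir: lora-out
max_seq_len: 2048
batch_size: 4
epochs: 3
learning_rate: 2.0e-4
lora:
  r: 16
  alpha: 32
  dropout: 0.05
  target_modules: [q_proj, v_proj]
```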

It's still a WIP, so let me know if you guys want any other features added.


TIA.

[–] Dry_Long3157@alien.top 1 points 10 months ago

Hey, you could just download the config file and the lora_train.py file and run them as I've explained in the readme!

To simplify it further: open both files in any editor and load up the same environment you use for oobabooga. Then make all the changes you need in the lora_config.yaml file based on your requirements. Once you're done, just run "python lora_train.py".
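
Roughly, what a script like lora_train.py does with that config boils down to the sketch below. This is a simplified illustration only, assuming a Hugging Face transformers + peft + datasets stack and the hypothetical config keys from above, not the actual script in the repo:

```python
# simplified sketch of a lora_train.py-style script -- assumes a
# transformers + peft + datasets stack and hypothetical config keys;
# the real script in the repo may differ.
import yaml
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# read the user's requirements from the YAML config
with open("lora_config.yaml") as f:
    cfg = yaml.safe_load(f)

tokenizer = AutoTokenizer.from_pretrained(cfg["base_model"])
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # Llama has no pad token
model = AutoModelForCausalLM.from_pretrained(cfg["base_model"], torch_dtype=torch.float16)

# wrap the base model with LoRA adapters so only the adapter weights are trained
model = get_peft_model(
    model,
    LoraConfig(
        r=cfg["lora"]["r"],
        lora_alpha=cfg["lora"]["alpha"],
        lora_dropout=cfg["lora"]["dropout"],
        target_modules=cfg["lora"]["target_modules"],
        task_type="CAUSAL_LM",
    ),
)

# load and tokenize the dataset named in the config
dataset = load_dataset("json", data_files=cfg["dataset_path"])["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=cfg["max_seq_len"]),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir=cfg["output_dir"],
        per_device_train_batch_size=cfg["batch_size"],
        num_train_epochs=cfg["epochs"],
        learning_rate=cfg["learning_rate"],
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained(cfg["output_dir"])  # writes just the LoRA adapter
```

The whole point of the repo is that you don't write any of this yourself; you only edit the YAML and run the script.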

If you need further help, feel free to ask!