
LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Is there a good way (or rule of thumb) to decide, when looking at a problem, whether PEFT/LoRA fine-tuning is likely to work or whether it only makes sense to do a complete fine-tune of all weights? Given the big difference in cost, knowing whether PEFT/LoRA might work for a problem feels pretty essential.

top 3 comments
[–] Exotic-Estimate8355@alien.top 1 points 11 months ago (1 children)

Unless it’s something really heavy like teaching it a new language, you should be fine with LoRA.

[–] trollbrot@alien.top 1 points 11 months ago

OK, interesting. One obvious use case I can see is training it on our internal documents so we can interact with them in a more dynamic way. That should be easier than learning a new language.

[–] sshh12@alien.top 1 points 11 months ago

My rule of thumb has been to use LoRA (r between 4 and 16) until I'm unsatisfied with the results. It of course depends on the data/task, but IMO most cases don't require a full fine-tune, and the performance-per-compute ROI of going full is low.
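
A minimal sketch of what that starting point could look like with the Hugging Face peft library (the base model name, target modules, and exact hyperparameters below are illustrative assumptions, not something from the thread):

```python
# Minimal LoRA setup sketch with Hugging Face transformers + peft.
# Model name, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Start small (r=8, inside the suggested 4-16 range); only raise r or
# consider a full fine-tune if results are unsatisfactory.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for Llama-style models
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how small the trained fraction is
```

From there the PEFT-wrapped model drops into a regular transformers Trainer; only if results stay unsatisfactory even at higher r does a full fine-tune of all weights start to justify its cost.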