[–] trollbrot@alien.top 1 points 9 months ago

Ok, interesting. One obvious use case I could see is that we want to train it on internal documents, so we can interact with them in a more dynamic way. That should be easier than teaching it a new language.


Is there a good way (or rule of thumb) to decide, when looking at a problem, whether PEFT/LoRA fine-tuning is likely to succeed, or whether only a full fine-tune of all weights makes sense? Given the big difference in cost, knowing up front whether PEFT/LoRA might work for a problem feels pretty essential.
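
For context on where that cost gap comes from: LoRA only trains small low-rank adapter matrices on top of frozen weights. A minimal sketch with the Hugging Face peft library (the model name and target modules here are just illustrative placeholders, not from any specific setup):

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
# Model name and target_modules are placeholders; adjust for your architecture.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

With a config like this, only a tiny fraction of the parameters get gradients, which is exactly why LoRA is so much cheaper than updating all weights.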