this post was submitted on 17 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Hmm, one of the really interesting details here: a plain LoRA at rank 8 tested better than at rank 128. Genuine question: how is that possible? Mediocre data used for the LoRA? I have done a few finetunes recently and see a similar situation between rank 128 and 256.
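For anyone following along, the "rank" being compared is just the `r` parameter of the adapter config. A minimal sketch of the two setups, assuming the Hugging Face peft library; the model name and target modules are illustrative, not from the thread:

```python
# Minimal sketch, assuming Hugging Face transformers + peft.
# Model name and target_modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Only the rank (and matching alpha) differs between the two runs.
config_r8 = LoraConfig(
    r=8,                # rank of the low-rank A/B adapter matrices
    lora_alpha=16,      # scaling factor; effective scale is alpha / r
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config_r8)
model.print_trainable_parameters()  # rank 8 trains ~16x fewer params than rank 128
```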
There are tests in the original LoRA paper showing that the gain is very small once the rank goes above 8.
https://preview.redd.it/ii53qcx8031c1.png?width=1080&format=png&auto=webp&s=821bac1232255bf791120afde7d9e9f3506a89f5
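For reference, the update LoRA learns, as defined in the paper; increasing the rank r only widens the factor matrices:

```latex
% LoRA update from Hu et al. 2021: the full-rank update \Delta W is
% constrained to a rank-r factorization, scaled by \alpha / r.
W = W_0 + \Delta W = W_0 + \frac{\alpha}{r} B A,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
```

The paper's rank-ablation results support the idea that the useful part of ΔW has low intrinsic rank; once r covers it, extra rank mostly gives the adapter more capacity to fit noise in the training data, which lines up with the "mediocre data" guess above.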