this post was submitted on 17 Nov 2023
1 point (100.0% liked)
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
you are viewing a single comment's thread
I'm getting tired of all those merges, as if this was the magical solution to everything
At a high level, merging lets you get the benefit of a lot of training cheaply.
If you have one model finetuned on dataset A and another finetuned on dataset B, merging them lets you very cheaply produce a model that behaves roughly as if it had been trained on both A and B.
It is the magical solution to "I can't afford to finetune a model".
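For concreteness, here is a minimal sketch of the simplest kind of merge, a plain linear weight average of two finetuned checkpoints. It assumes both models share the same base architecture and parameter names; the file paths and the 0.5 blend ratio are made-up examples, not anyone's actual recipe.

```python
# Minimal sketch of a linear weight merge ("model soup" style):
# average the parameters of two finetuned checkpoints that share the
# same base architecture. File names and the 0.5 ratio are hypothetical.
import torch

alpha = 0.5  # blend ratio: 1.0 keeps only model A, 0.0 keeps only model B

state_a = torch.load("finetuned_on_set_a.bin", map_location="cpu")
state_b = torch.load("finetuned_on_set_b.bin", map_location="cpu")

merged = {}
for name, tensor_a in state_a.items():
    tensor_b = state_b[name]  # assumes identical parameter names and shapes
    if tensor_a.is_floating_point():
        merged[name] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    else:
        # non-float buffers (e.g. integer position ids) are copied from model A
        merged[name] = tensor_a.clone()

torch.save(merged, "merged_model.bin")
```

In practice people reach for dedicated tooling and fancier methods (SLERP, TIES, and so on), but the core idea is the same: combine existing finetunes without paying for more training.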