this post was submitted on 22 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
founded 1 year ago
you are viewing a single comment's thread
I'm kind of cautious about how random merging affects overall quality, since many of these merged models were trained with different prompt formats. In my experience that inevitably leads to outputs that mix in bits of the other prompt formats (e.g. "### Response:" being printed while using the ChatML template). To my surprise I even witnessed this with OpenHermes 2.5 in some edge cases. But I'd be eager to hear other people's experience with this.
+1 — I have occasionally seen my own merge experiments spit out "### Response:" while using the standard Alpaca format. So far it hasn't happened often enough to be a concern, and overall I've seen more benefits than deficits from merging models. I'm coming at this primarily from a creative writing angle, so I'll usually find models that have characteristics I enjoy, merge them together, and then keep merging models down the line to experiment with output quality.
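Since it only happens occasionally, one workaround is to just post-process the generation and cut it off at the first stray template marker. Here's a minimal sketch of what I mean — the marker list is illustrative, not exhaustive, and you'd adjust it for whatever formats the parent models were trained on:

```python
# Sketch: truncate a merged model's output at the first stray
# prompt-format marker (e.g. Alpaca's "### Response:" leaking into
# a ChatML conversation). The marker list below is an assumption —
# extend it with whatever templates your parent models used.
STRAY_MARKERS = ["### Response:", "### Instruction:", "<|im_start|>", "<|im_end|>"]

def truncate_at_stray_marker(text: str, markers=STRAY_MARKERS) -> str:
    """Return text cut off just before the earliest stray marker, if any."""
    cut = len(text)
    for marker in markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest occurrence
    return text[:cut].rstrip()

print(truncate_at_stray_marker("The knight rode on.### Response: And then"))
```

Most inference frontends let you do the same thing with custom stop strings, which avoids wasting tokens on the gibberish in the first place.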