LocalLLaMA · submitted on 22 Nov 2023
Gryphe, creator of MythoMax, has basically merged the best Mistral models together. This should be a really fantastic model!

https://huggingface.co/Gryphe/MythoMist-7b (links to quantized models by TheBloke can be found there)

Edit: oof, messed up the title. It's MythoMist, not Mix.
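
For anyone unfamiliar with what "merging" means here, below is a minimal sketch of the simplest variant, a uniform linear average of two same-architecture checkpoints. This is not necessarily Gryphe's actual recipe (the model card suggests a more unusual process), and the second repo ID and the 50/50 weight are placeholder assumptions for illustration:

```python
# Minimal sketch: uniform linear merge of two same-architecture models.
# NOT Gryphe's actual recipe; the second repo ID and the 50/50 weighting
# are placeholder assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16
)
model_b = AutoModelForCausalLM.from_pretrained(
    "some-org/other-mistral-7b-finetune",  # placeholder repo ID
    torch_dtype=torch.float16,
)

alpha = 0.5  # blend weight for model_a; per-layer weights are also common
state_b = model_b.state_dict()
merged_state = {}
for name, param_a in model_a.state_dict().items():
    # Interpolate each weight tensor between the two checkpoints
    merged_state[name] = alpha * param_a + (1.0 - alpha) * state_b[name]

model_a.load_state_dict(merged_state)
model_a.save_pretrained("./merged-model")
```

In practice most community merges are produced with tools like mergekit, which automate this with per-layer weighting schedules and fancier methods (SLERP, TIES, etc.) rather than a flat average.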

Robot1me@alien.top · 11 months ago

> There is no real logic in how these models were divided throughout the merge

I'm kind of cautious about how random merging affects overall quality, since many of these merged models were trained with different prompt formats. In my experience that inevitably leads to AI outputs that append bits of the other prompt formats as gibberish (e.g. "### Response:" being printed while using the ChatML template). To my surprise, I witnessed that with OpenHermes 2.5 in some edge cases. But I would be eager to hear other people's experiences with this.
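
To make the format clash concrete, here is a rough sketch. The two templates are the standard ChatML and Alpaca layouts; the trimming function is just a hypothetical workaround of my own, not something any particular frontend is known to do:

```python
# Two prompt formats that often coexist in a merged model's training data.
CHATML = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
ALPACA = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

# Hypothetical workaround: cut the generation at the first marker that
# belongs to a different template, so leaked fragments like
# "### Response:" don't end up in the visible output.
LEAKED_MARKERS = ["### Response:", "### Instruction:", "<|im_start|>", "<|im_end|>"]

def trim_leaked_markers(text: str) -> str:
    """Truncate generated text at the first foreign prompt-format marker."""
    cut = len(text)
    for marker in LEAKED_MARKERS:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

print(trim_leaked_markers("Sure, here you go!\n### Response: extra junk"))
# -> "Sure, here you go!"
```

Most inference frontends let you configure similar stop strings per prompt template, which is usually the cleaner fix.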

Zetsumeii@alien.top · 11 months ago

+1, I have occasionally seen my own merge experiments spit out "### Response:" while using the standard Alpaca format. So far it hasn't happened often enough to be a concern, and overall I've seen more benefits than deficits from merging models. I'm coming at this primarily from a creative writing angle, so I'll usually find models with characteristics I enjoy, merge them together, and then keep merging further models down the line to experiment with output quality.