I self-host it on my homelab; it works very well.
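For anyone who wants to try the same setup, here's a minimal local-inference sketch using Hugging Face transformers. It assumes the repo id Intel/neural-chat-7b-v3-1 and the "### System / ### User / ### Assistant" template; check the model card for the exact template, since that's easy to get wrong:

```python
# Minimal sketch, assuming the HF repo id "Intel/neural-chat-7b-v3-1" and enough
# VRAM for half precision; quantize on smaller cards. The prompt template below
# is an assumption taken from the model card style, not verified here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/neural-chat-7b-v3-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nWhy are 7B models popular for homelab setups?\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```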
It’s very good
I find it slightly better than Mistral, and a system prompt does wonders for aligning it to my needs.
Its data is public, but the OpenHermes-2.5 dataset is gated and not accessible.
I feel like I woke up one day and "open" meant "closed".
I feel it has some formulaic writing, but overall it feels fine. Here's its response to your post:
In the realm of AI models, Intel has indeed introduced a new 7B neural chatbot named "Intel neural-chat-7b-v3-1." This model was trained on Slim Orca, which is currently leading the HF open LLM leaderboard for 7B language models.
As you've experienced with other 7B models, they can be quite helpful in various tasks such as annotation and data extraction. The Intel neural-chat-7b-v3-1 model might offer similar benefits or even surpass them due to its advanced training on Slim Orca.
Comparing it to teknium/OpenHermes-2.5-Mistral-7B, you may find that the new Intel model could potentially provide better performance in certain tasks. However, it's essential to test both models and evaluate their capabilities based on your specific requirements before making a decision.
In conclusion, exploring the Intel neural-chat-7b-v3-1 model is definitely worth considering if you are looking for an advanced 7B language model that can assist with various tasks.
Actually, there was an even higher-ranked model, now deleted, by the same account that made SlimOrca. It had very high scores in one of the two new benchmarks on the leaderboard. I think that was a test model, so expect a new model coming soon that will be even better.
I could only get pretty muddled responses from the model.
Despite it seemingly having a simple prompt template, I suspect I didn't enter everything correctly into SillyTavern, since the outputs I was getting looked like what I see when the wrong template is selected for a model.
Shrugs
If a model wants to be successful, its creators should really pick a standard template (preferably ChatML) and clearly state that that's what they are using.
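For reference, ChatML wraps every turn in explicit role markers, which is exactly what removes the guesswork about where a message starts and ends. A minimal sketch of building a single-turn prompt by hand (the example messages are just placeholders):

```python
# Sketch of what a ChatML prompt looks like; the <|im_start|>/<|im_end|> markers
# must match special tokens the model was actually trained with.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Compare neural-chat-7b-v3-1 with OpenHermes-2.5."},
]
prompt = ""
for m in messages:
    prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
prompt += "<|im_start|>assistant\n"  # the model generates from here
print(prompt)
```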
IMPORTANT!
This isn't trained from scratch; it's another Mistral fine-tune done with DPO, but on SlimOrca rather than UltraChat.
I would stick with OpenHermes; it's much more battle-tested and has proven solid.
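For context on the DPO part: DPO skips training a separate reward model and optimizes the policy directly on chosen/rejected preference pairs. A toy sketch of the per-pair loss (the log-probabilities below are made-up numbers, not taken from SlimOrca or this model):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # DPO per-pair loss: -log sigmoid(beta * ((logp_c - ref_c) - (logp_r - ref_r)))
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Made-up log-probs: the policy prefers the chosen answer a bit more than the reference does.
print(dpo_loss(-42.0, -55.0, -44.0, -53.0))  # ~0.51
```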
Can you explain your thinking about OpenHermes, and what examples do you have?
OpenHermes 2.5 is the latest version, but the OpenHermes series has a track record of being good. I used it for some function calling and it's really good.
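To give a concrete idea of that function-calling setup: OpenHermes has no built-in tool API, so the pattern is to describe the tools in the system prompt and ask the model to reply with a JSON call. The tool schema and instructions below are just how I'd wire it up, not an official format:

```python
import json

# Hypothetical tool schema placed in the system prompt; the model is simply
# instructed to answer with a JSON function call and nothing else.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}]

system = (
    "You can call these functions by replying with JSON of the form "
    '{"name": ..., "arguments": {...}} and nothing else.\n'
    f"Functions: {json.dumps(tools)}"
)
user = "What's the weather in Lisbon?"

# OpenHermes-2.5-Mistral-7B uses ChatML, so the prompt is assembled accordingly.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# After generation, the caller parses and dispatches the JSON reply, e.g.:
reply = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'
call = json.loads(reply)
print(call["name"], call["arguments"])
```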
Just tried it; can confirm this guy knows what he's talking about ^. Pretty great model, tbh.
OpenHermes 2.5 still feels significantly better, imo.
I find it extremely slow on 2x A6000 GPUs.