Ah, is this a model based on Intel's newly released model? Also, I don't like it because when I get it to help me create malware as a test, it lobotomizes itself. I haven't tried this one though; let me know if it's uncensored.
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Can it summarize documents (say, around 5k words)? Is there anything adapted to that?
Can any of the small ones do this well?
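In case it's useful, here is a rough sketch of one way to summarize long documents locally, assuming a GGUF build loaded through llama-cpp-python; the model path, chunk size, and prompt wording are placeholders, not anything from a model card. With the 16k-context variant mentioned below, a ~5k-word document should usually fit in a single pass, so the chunked map-reduce path is only a fallback for longer inputs.

```python
# Rough chunked-summarization sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path, chunk size, and prompt wording are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf", n_ctx=16384)

def summarize(text: str, max_tokens: int = 256) -> str:
    prompt = f"USER: Summarize the following text in a few sentences:\n\n{text}\nASSISTANT:"
    out = llm(prompt, max_tokens=max_tokens, temperature=0.2)
    return out["choices"][0]["text"].strip()

def summarize_long(document: str, chunk_chars: int = 8000) -> str:
    # Map-reduce: summarize fixed-size chunks, then summarize the partial summaries.
    chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
    partials = [summarize(c) for c in chunks]
    return partials[0] if len(partials) == 1 else summarize("\n".join(partials))
```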
TheBloke's OpenHermes 2.5 Mistral 16k 7B Q5_K_M GGUF. What about this? Anyone using this model?
I am using it; it's my favorite finetune so far. However, ignore the instructions to use the ChatML prompt format and use Vicuna instead, with USER: and ASSISTANT:. Possibly one of the best models I've seen for long conversation.
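For anyone wanting to try that, here is a minimal sketch of a Vicuna-style USER:/ASSISTANT: prompt builder; the system line and the newline separators are my own assumptions, so adjust them to whatever works for you.

```python
# Minimal Vicuna-style (USER:/ASSISTANT:) prompt builder, as suggested above.
# The system line and separators are assumptions; tweak to taste.
def build_vicuna_prompt(turns, system="A chat between a curious user and a helpful assistant."):
    parts = [system]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is not None:
            parts.append(f"ASSISTANT: {assistant_msg}")
    parts.append("ASSISTANT:")  # left open for the model to complete
    return "\n".join(parts)

# Example: one prior exchange plus a new question.
print(build_vicuna_prompt([
    ("Name a 7B model with a 16k context.", "OpenHermes 2.5 Mistral 16k is one option."),
    ("What prompt format does it work well with?", None),
]))
```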
I've tried 10 coding models ranging from 7B to 13B, and OpenHermes is by far the best. All the other models struggled with anything that isn't Python; they couldn't even print "hello world" backwards in C++. If anyone has proven suggestions for a 13B model that's better, please do share.
That's v3-1; there's already v3-2:
https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF
https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ
They were added 11 hours ago
Yeah, possibly one of the best for RAG at this size. It sticks to facts extremely well, but it's hard to get it to do any form of creative interpretation of the context.
Curious to know if you have built a RAG application with it? Any specific embedding models you used?
Can you please tell us how and with what tools you are using it? With as much detail as you can, because I would like to set it up today and work with it.
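For reference, here is a minimal local RAG sketch along those lines: sentence-transformers for embeddings and llama-cpp-python for generation. The model choices (all-MiniLM-L6-v2 and the OpenHermes GGUF), the prompt, and the toy corpus are assumptions for illustration, not what anyone in this thread reported using.

```python
# Minimal local RAG sketch: sentence-transformers for embeddings, llama-cpp-python
# for generation. Model names, the prompt, and the toy corpus are illustrative only.
# pip install sentence-transformers llama-cpp-python numpy
import numpy as np
from sentence_transformers import SentenceTransformer
from llama_cpp import Llama

embedder = SentenceTransformer("all-MiniLM-L6-v2")
llm = Llama(model_path="openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf", n_ctx=16384)

docs = [
    "Llama is a family of large language models created by Meta AI.",
    "GGUF is the quantized model format used by llama.cpp.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity, since the vectors are normalized
    context = "\n".join(docs[i] for i in np.argsort(scores)[::-1][:top_k])
    prompt = f"USER: Answer using only this context:\n{context}\n\nQuestion: {question}\nASSISTANT:"
    out = llm(prompt, max_tokens=256, temperature=0.1)
    return out["choices"][0]["text"].strip()

print(answer("Who created the Llama models?"))
```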
Does the model have a clean open-source dataset (free from OpenAI or other proprietary model-generated data)?
How long does it take to give you an answer to your prompt?
How many tokens is its limit? And how do you change its token parameters in Python?
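The limit depends on the build; the "16k" OpenHermes GGUF mentioned earlier implies roughly a 16,384-token context window. As a rough sketch (assuming llama-cpp-python; the path and numbers are placeholders), the context window and reply length are set like this:

```python
# Sketch of setting token-related parameters with llama-cpp-python; the path and
# numbers are placeholders. n_ctx caps the context window (prompt + generation),
# max_tokens caps how much the model may generate for one reply.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf",  # placeholder path
    n_ctx=16384,  # context window in tokens
)

out = llm(
    "USER: Explain what a context window is in one sentence.\nASSISTANT:",
    max_tokens=128,   # upper bound on generated tokens for this reply
    temperature=0.7,
)
print(out["choices"][0]["text"].strip())
```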