this post was submitted on 23 Nov 2023
1 points (100.0% liked)

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

  1. for coding
  2. for generating stories, writing email, poems etc.
  3. good overall
  4. etc.
[–] Inevitable-Highway85@alien.top 1 points 11 months ago

Mistral Instruct for overall

[–] vasileer@alien.top 1 points 11 months ago (3 children)
[–] Mescallan@alien.top 1 points 11 months ago

Seconded. I haven't tested it for code yet, but it's very enjoyable to converse with. I find it does summaries quite well. I've asked it about a wide range of topics and it has been ~90% correct on the first response, though it can kind of fall apart after going back and forth a few times. But it's only 7B.

[–] smile_e_face@alien.top 1 points 11 months ago

Could you share your temperature and sampler settings for OpenHermes? I see it recommended all over the place, but I get only mediocre results with it in SillyTavern.

[–] shivam2979@alien.top 1 points 11 months ago

Loved the responses from OpenHermes 2.5, but found inference on the slower side, especially when comparing it to other 7B models like Zephyr 7B or Vicuna 1.5 7B.

[–] Savings_Scholar@alien.top 1 points 11 months ago

Although I'm not a fan myself, I think it's Falcon…

[–] ProperShape5918@alien.top 1 points 11 months ago

I could just upvote the top comment, but yeah OpenHermes 2.5 is probably the best for now.

[–] Amgadoz@alien.top 1 points 11 months ago

In no specific order:

Zephyr 7B
OpenHermes 2.5
OpenChat 3.5

[–] GasBond@alien.top 1 points 11 months ago

are these uncensored?

[–] GasBond@alien.top 1 points 11 months ago (1 children)

How is Neural Chat 7B v3-1? I will try out OpenHermes.

[–] JohnExile@alien.top 1 points 11 months ago

Neural Chat 7B works fine with normal instructions for assistant use, but when I tried to give it custom instructions for things like summarization, code blocks, or formatting, it completely broke. The same instructions worked fine with the other models I use. YMMV.

[–] Revolutionalredstone@alien.top 1 points 11 months ago

Orca 2 7B (released just the other day) DEFINITELY competes with OpenHermes 2.5, but it's hard to pick a clear winner (though I would lean toward Orca 2 myself).

Synthia Mistral 7B was pretty glorious, but OpenHermes 2.5 is just better.

https://huggingface.co/microsoft/Orca-2-7b

[–] Illustrious-Lake2603@alien.top 1 points 11 months ago (3 children)

For coding, DeepSeek Coder 6.7B is exceptional.

[–] davew111@alien.top 1 points 11 months ago (1 children)

Is it exceptional in any language other than Python?

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago

I'd like to know how it does in Java, Go, Rust, and Zig, as well as whether it can handle SQL well.

[–] ModsAndAdminsEatAss@alien.top 1 points 11 months ago (2 children)

I haven't had a chance to get hands on with DeepSeek yet. How does it compare to Code Llama?

[–] Illustrious-Lake2603@alien.top 1 points 11 months ago (1 children)

In my opinion it's amazing; it's close to GPT-4.

[–] Dry-Vermicelli-682@alien.top 1 points 11 months ago

What hardware are you running it on? CPU/GPU, RAM, etc.? Trying to figure out what I need. My old gen 1 16-core Threadripper with 64GB RAM doesn't seem to work very well: multiple minutes for a simple hello response. No GPU though, but I'm looking to put in a 6700 XT. Not sure if that GPU will help a lot or what.

[–] danigoncalves@alien.top 1 points 11 months ago

I was actually comparing both today (CodeLlama 7B), and man, CodeLlama just gave crap; DeepSeek was very accurate.

[–] Sufficient-Math3178@alien.top 1 points 11 months ago (3 children)

Models requiring remote code without any explanation are shady imo

[–] Illustrious-Lake2603@alien.top 1 points 11 months ago

Shady maybe, but it can code decently without depending on the internet. So there's that.

[–] valdev@alien.top 1 points 11 months ago

I'm a little new here: does DeepSeek Coder 6.7B somehow phone home?

[–] Knaledge@alien.top 1 points 11 months ago (2 children)
[–] Illustrious-Lake2603@alien.top 1 points 11 months ago

I for one just don't trust these Chinese models at all. Not saying there's anything wrong with this one, but it's clearly aligned with the Chinese agenda when I try to ask it anything about Taiwan. For coding it works well, though, and you can run it offline.

[–] Sufficient-Math3178@alien.top 1 points 11 months ago

AFAIK, model files used to be just pickled code: when you load one, for example, it would call a method pickled inside the model file. The uploader could set up this method to do practically anything they want, and it doesn't need to be obviously malicious, since the code runs just like a normal Python script. For example, it could simply load/render a WebP image designed to hit the recent libwebp vulnerability.

They changed this a while back, so now you need to pass an argument when loading the model to allow this behavior, and this model requires it.
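
To make that concrete, here's a minimal sketch of what opting into remote code looks like with Hugging Face transformers (the repo id and prompt below are assumptions; substitute whichever model you're actually loading):

```python
# Minimal sketch: loading a model whose repo ships custom code.
# The repo id is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/deepseek-coder-6.7b-instruct"  # assumed repo id

# Without trust_remote_code=True, transformers will refuse to run any
# custom modeling/tokenizer code bundled inside the repo.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That flag is the opt-in described above: you're agreeing to execute whatever Python the uploader put in the repo, so it's worth skimming that code first.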

[–] Nebbit123@alien.top 1 points 11 months ago

For coding probably DeepSeek Coder Instruct 6.7B

For more general stuff I've found Zephyr 7B to be really good, but I still need to try OpenHermes 2.5

[–] themiro@alien.top 1 points 11 months ago

openchat 3.5 by pretty much all metrics

[–] WAHNFRIEDEN@alien.top 1 points 11 months ago

Nous Capybara

[–] Kriima@alien.top 1 points 11 months ago (1 children)
[–] Tupletcat@alien.top 1 points 11 months ago

Which settings do you use for it? Like context, prompt, etc.? People swear by Toppy, but I'm not really seeing it, and I wonder if it's my configuration.

[–] Gmroo@alien.top 1 points 11 months ago (1 children)
[–] Feztopia@alien.top 1 points 11 months ago

Bro, that's the link to the dataset, not the model.

[–] Mbando@alien.top 1 points 11 months ago (1 children)

We've been fine-tuning models for specific applications like RAG and structured data extraction. Falcon-7B has been really good for training: it both shifts toward the target domain's use of language from the training data and picks up instructions really well. Going to try Mistral-7B soon for a comparison.
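
For anyone curious what that kind of domain fine-tune can look like, here's a rough sketch using PEFT/LoRA on Falcon-7B; the hyperparameters and trainer choice are illustrative assumptions, not the setup described above:

```python
# Rough sketch: attach LoRA adapters to Falcon-7B for a domain fine-tune.
# Hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Train small adapter matrices instead of updating all 7B weights.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, train on instruction-formatted domain data with
# transformers.Trainer or trl's SFTTrainer.
```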

[–] ___defn@alien.top 1 points 11 months ago

Same here. Switched to Mistral a few weeks ago. The results will blow you away; the difference is remarkable.

[–] DontPlanToEnd@alien.top 1 points 11 months ago

For generating full and uncensored stories (I provide a starting paragraph), collectivecognition-v1.1-mistral-7b has been by far the most creative and well written in my testing.

[–] danigoncalves@alien.top 1 points 11 months ago

Best ones I tried were zephyr, dolphin, openorca, synthia and naberius.

[–] Disgruntled-Cacti@alien.top 1 points 11 months ago

Why does no one talk about the Yi models?

[–] ntn8888@alien.top 1 points 11 months ago

Oh god 🤦 But seriously we need a wiki with a leader board with votes😁

[–] akumaburn@alien.top 1 points 11 months ago

For all of the above, Tess-XS-v1.0 (here's the updated GGUF: https://huggingface.co/TheBloke/Tess-XS-v1.1-GGUF). Nothing else I've tested at the same parameter size is quite as good, though Intel's neural-chat (https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF) comes close. Yi-6B is unimpressive and consistently outperformed by Mistral-based fine-tunes in my actual testing (despite performing extremely well on the benchmarks). Yi-34B is in a class of its own, but you asked for a 7B-size model.

For stories, thespis-mistral-7b (https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF) can be better if you're looking for NSFW.

If you're willing to step up a bit, the newly released Orca 2 13B (https://huggingface.co/TheBloke/Orca-2-13B-GGUF) drastically outperforms the above in all but NSFW content (and even then it punches well). The license isn't great, however.
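
Since those links are all GGUF quantizations, here's a minimal sketch of running one locally with llama-cpp-python; the filename and sampling settings are assumptions, so use whichever quant you actually downloaded:

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# The filename is an assumption; point it at the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./tess-xs-v1.1.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short story about a lighthouse."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```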

[–] LeanderGem@alien.top 1 points 11 months ago

for 2.) I like dolphin 2.1 and mythomist-7b