this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


openchat 3.5 16k

top 20 comments
[–] hibbity@alien.top 1 points 1 year ago (1 children)

I would, but anyone who puts that much effort into a model release and doesn't include the trained prompt format just seems like they must not want me to use it.

[–] perlthoughts@alien.top 1 points 1 year ago (1 children)

Yeah, I agree, it's kind of weird. You don't have to use GPT4 Correct User: etc.; GPT4 User: works better imo. However, the former is the prompt they used when training the model, so it's best to follow it.
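
For reference, a minimal sketch of the template as documented on the OpenChat 3.5 model card (the <|end_of_turn|> separator is part of the format):

```python
# OpenChat 3.5 prompt template as shown on the model card; the
# "GPT4 Correct" prefix is what the model was trained with.
prompt = (
    "GPT4 Correct User: How are you today?<|end_of_turn|>"
    "GPT4 Correct Assistant:"
)
```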

[–] hibbity@alien.top 1 points 1 year ago

I would be stoked and would actually mess with it if it had a proper instruct or system tag. The results from models trained like that are just easier to tune.

[–] paryska99@alien.top 1 points 1 year ago (1 children)

I know these benchmarks are a tough topic, but on paper this looks really impressive. It claims to be better than Mistral, and I loved the progress Mistral brought. If someone tries this model out, could you give feedback under this post? Much appreciated.

[–] _HAV0X_@alien.top 1 points 11 months ago

From my experience, it's significantly better than Mistral. Its training method REALLY shows, and it makes responses noticeably better.

[–] rkzed@alien.top 1 points 1 year ago (2 children)

I'm confused by their prompt format; do we really need to use their library to try the model?

[–] perlthoughts@alien.top 1 points 1 year ago (1 children)

Nah, you can use llama.cpp or whatever you like; TheBloke already has multiple GGUF versions up.
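
For example, a minimal sketch of loading one of those GGUF quants with llama-cpp-python (the file name is illustrative; use whichever quant you actually downloaded):

```python
from llama_cpp import Llama

# Load a quantized OpenChat GGUF with the extended 16k context window.
llm = Llama(model_path="openchat_3.5-16k.Q4_K_M.gguf", n_ctx=16384)

out = llm(
    "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:",
    max_tokens=256,
    stop=["<|end_of_turn|>"],  # stop at the model's turn separator
)
print(out["choices"][0]["text"])
```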

[–] involviert@alien.top 1 points 1 year ago

They were talking about the prompt format. Their library is obviously translating that OpenAI API style into the actual proper prompt format internally, and that translation is not documented at all.
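
Something like this hypothetical helper is presumably what the library does under the hood (a guess at the mapping, not their actual code):

```python
# Hypothetical: flatten OpenAI-style chat messages into OpenChat's
# trained prompt format. Illustrates the idea, not their implementation.
def to_openchat_prompt(messages: list[dict]) -> str:
    roles = {"user": "GPT4 Correct User", "assistant": "GPT4 Correct Assistant"}
    parts = [f"{roles[m['role']]}: {m['content']}<|end_of_turn|>" for m in messages]
    # Leave the assistant turn open so the model completes it.
    return "".join(parts) + "GPT4 Correct Assistant:"

print(to_openchat_prompt([{"role": "user", "content": "Hi!"}]))
# GPT4 Correct User: Hi!<|end_of_turn|>GPT4 Correct Assistant:
```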

[–] Dear_noobs@alien.top 1 points 1 year ago

I came across this yesterday: one interface that lets you jump between all the things.

Find what you want to try, click Download, then chat with it.

[–] fish312@alien.top 1 points 1 year ago (3 children)

New drinking challenge: take one shot every time a new 7B claims to outperform ChatGPT/Llama 70B (difficulty: impossible).

[–] perlthoughts@alien.top 1 points 1 year ago (1 children)

lol, I hope you're not driving...

[–] Danny_Davitoe@alien.top 1 points 1 year ago

Yeah, don't want to spill your drink

[–] Herr_Drosselmeyer@alien.top 1 points 1 year ago

My poor liver!

[–] ReMeDyIII@alien.top 1 points 11 months ago

Plus, isn't GPT-3.5-Turbo multimodal? There's no way a 7B can outperform that.

[–] benados@alien.top 1 points 1 year ago

Does the increased context increase the memory requirements, even if they are the same 7B models?
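
The weights are the same size, but the KV cache grows linearly with context. A back-of-the-envelope estimate, assuming Mistral-style 7B dimensions (32 layers, 8 KV heads via GQA, head dim 128) and an fp16 cache:

```python
# Rough KV-cache size for a Mistral-architecture 7B; the dimensions are
# assumptions, check the model's config for the actual values.
n_layers, n_kv_heads, head_dim, bytes_per_elem = 32, 8, 128, 2  # fp16

def kv_cache_gib(n_ctx: int) -> float:
    # 2x for keys and values, one cache entry per layer/head/position.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem / 2**30

print(kv_cache_gib(8192))   # ~1.0 GiB at the original 8k context
print(kv_cache_gib(16384))  # ~2.0 GiB at 16k
```

So filling the full 16k window costs roughly an extra gigabyte of cache over the 8k baseline, under those assumptions.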

[–] Darlanio@alien.top 1 points 1 year ago

Worth testing... probably not this weekend though...

[–] WolframRavenwolf@alien.top 1 points 1 year ago (1 children)

This isn't an official release by the OpenChat team, though, right? Is NurtureAI affiliated or what's the background here?

[–] perlthoughts@alien.top 1 points 1 year ago

No, NurtureAI and OpenChat are not affiliated. NurtureAI just extended the context; it looks like another person did an OpenChat 16k merge of some models as well.

[–] luncheroo@alien.top 1 points 1 year ago

Just a quick note for anyone using LM Studio who doesn't want to fiddle too much: the CodeLlama OpenAssistant preset works fine without ask/answer loops.

[–] pseudonerv@alien.top 1 points 1 year ago

I don't get it. What did they do to extend the context from the original OpenChat 3.5?
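
Context extensions like this are commonly done with RoPE scaling. A hedged sketch of the general technique using llama-cpp-python's RoPE parameters (an illustration only, not a claim about what NurtureAI actually did):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="openchat_3.5-16k.Q4_K_M.gguf",  # illustrative file name
    n_ctx=16384,
    rope_freq_scale=0.5,  # linear RoPE scaling: 8k trained -> 16k used
)
```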