this post was submitted on 13 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

Eric Hartford, the author of dolphin models, released dolphin-2.2-yi-34b.

This is one of the earliest community finetunes of Yi-34B.

Yi-34B was developed by a Chinese company that claims state-of-the-art performance on par with GPT-3.5.

HF: https://huggingface.co/ehartford/dolphin-2_2-yi-34b

Announcement: https://x.com/erhartford/status/1723940171991663088?s=20

[–] WolframRavenwolf@alien.top 1 points 10 months ago (9 children)

I took a short break from my 70B tests (still working on that!) and tried TheBloke/dolphin-2_2-yi-34b-GGUF Q4_0. It instantly claimed 4th place on my list.

A 34B taking 4th place among the 13 best 70Bs! A 34B model that beats 9 70Bs (including dolphin-2.2-70B, Samantha-1.11-70B, StellarBright, Airoboros-L2-70B-3.1.2 and many others). A 34B with 16K native context!

Yeah, I'm just a little excited. I see a lot of potential with the Yi series of models and proper finetunes like Eric's.

Haven't done the RP tests yet, so back to testing. Will report back once I'm done with the current batch (70Bs take so damn long, and 120B even more so).

[–] denru01@alien.top 1 points 10 months ago (3 children)

Which is the best 70B on your list?

[–] WolframRavenwolf@alien.top 1 points 10 months ago (2 children)

I'm still working on the updated 70B comparisons/tests, but right now, the top three models are still the same as in the first part of my Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4): lzlv_70B, SynthIA-70B-v1.5, chronos007-70B. Followed by dolphin-2_2-yi-34b.

[–] Healthy_Cry_4861@alien.top 1 points 10 months ago (1 children)

SynthIA-70B-v1.5 seems to have the same 2K context length as SynthIA-70B-v1.2, not the 4K context length of SynthIA-70B-v1.2b.

[–] WolframRavenwolf@alien.top 1 points 10 months ago

You're right: when I load the GGUF, KoboldCpp says "n_ctx_train: 2048". Could that be an erroneous display? I've always used v1.5 with 4K context, did all my tests that way, and it's done so well. If the trained context really is 2K, it might be even better at its native context! Still, 2K just doesn't cut it anymore.
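(For anyone wanting to double-check this themselves: the "n_ctx_train" value KoboldCpp reports comes from the model's GGUF metadata, stored under a key like `llama.context_length`. Below is a minimal, simplified sketch of how that key-value section can be parsed; it builds a tiny synthetic in-memory GGUF header rather than opening a real multi-gigabyte model file, and it only handles uint32 and string values, not the full set of GGUF value types.)

```python
import struct

# GGUF metadata value type codes (subset of the llama.cpp GGUF spec)
GGUF_TYPE_UINT32 = 4
GGUF_TYPE_STRING = 8

def read_kv_uint32(buf: bytes, wanted_key: str):
    """Scan the metadata KV section of a GGUF header for a uint32 key.

    Simplified: only uint32 and string values are handled, which is
    enough for keys like 'llama.context_length' and
    'general.architecture'. Returns None if the key is absent.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    assert magic == b"GGUF"
    off = struct.calcsize("<4sIQQ")
    for _ in range(n_kv):
        (klen,) = struct.unpack_from("<Q", buf, off); off += 8
        key = buf[off:off + klen].decode("utf-8"); off += klen
        (vtype,) = struct.unpack_from("<I", buf, off); off += 4
        if vtype == GGUF_TYPE_UINT32:
            (val,) = struct.unpack_from("<I", buf, off); off += 4
            if key == wanted_key:
                return val
        elif vtype == GGUF_TYPE_STRING:
            (slen,) = struct.unpack_from("<Q", buf, off); off += 8 + slen
        else:
            raise ValueError(f"unhandled GGUF value type {vtype}")
    return None

# Build a tiny synthetic header: 0 tensors, 2 metadata entries.
def kv_string(key: str, val: str) -> bytes:
    k, v = key.encode(), val.encode()
    return (struct.pack("<Q", len(k)) + k
            + struct.pack("<I", GGUF_TYPE_STRING)
            + struct.pack("<Q", len(v)) + v)

def kv_uint32(key: str, val: int) -> bytes:
    k = key.encode()
    return (struct.pack("<Q", len(k)) + k
            + struct.pack("<I", GGUF_TYPE_UINT32)
            + struct.pack("<I", val))

header = struct.pack("<4sIQQ", b"GGUF", 3, 0, 2)
header += kv_string("general.architecture", "llama")
header += kv_uint32("llama.context_length", 2048)

print(read_kv_uint32(header, "llama.context_length"))  # prints 2048
```

Against a real file, you would pass the first few kilobytes of the .gguf to the same reader; in practice the `gguf` Python package from the llama.cpp project does this parsing for you.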
