this post was submitted on 13 Nov 2023
LocalLLaMA
I took a short break from my 70B tests (still working on that!) and tried TheBloke/dolphin-2_2-yi-34b-GGUF Q4_0. It instantly claimed 4th place on my list.
A 34B taking 4th place on a list of the 13 best 70Bs! A 34B that beats nine 70Bs (including dolphin-2.2-70B, Samantha-1.11-70B, StellarBright, Airoboros-L2-70B-3.1.2, and many others). A 34B with 16K native context!
Yeah, I'm just a little excited. I see a lot of potential with the Yi series of models and proper finetunes like Eric's.
Haven't done the RP tests yet, so back to testing. Will report back once I'm done with the current batch (70Bs take so damn long, and 120B even more so).
Agreed, this is the best conversational model I have tried yet.
34B is the largest model size I prefer running on my GPU, and this one, along with Nous-Capybara, is fantastic.