this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 10 months ago
[–] mcmoose1900@alien.top 1 points 10 months ago (1 children)
[–] AdOne8437@alien.top 1 points 10 months ago (1 children)

if it is based on Yi, should it not have the Yi license instead of MIT?

[–] mcmoose1900@alien.top 1 points 10 months ago (1 children)

Yes.

But it's ML land! Everyone violates licenses anyway :P

[–] metalman123@alien.top 1 points 10 months ago

Can't wait to see the benchmarks on these things.

[–] toothpastespiders@alien.top 1 points 10 months ago

Dang, after that 34B drought it's like suddenly stumbling onto the Great Lakes right now.

[–] vasileer@alien.top 1 points 10 months ago (1 children)
[–] mcmoose1900@alien.top 1 points 10 months ago

Precisely 47K tokens of context fits in 24GB at 4bpw.

I have not tried 3.5bpw, but I think it could be much more.
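The context-in-VRAM figure above can be approximated with back-of-the-envelope arithmetic: quantized weight size plus KV cache per token against the card's capacity. A minimal sketch, assuming Yi-34B's published shape (60 layers, 8 KV heads, head dim 128 — check the model's config.json) and an fp16 KV cache; real backends add overhead and may quantize the cache, so this won't exactly reproduce the 47K number:

```python
# Rough VRAM budget for a 34B model at a given quantization bit-width.
# Shape numbers are assumptions based on Yi-34B's published config.

GIB = 1024**3

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate bytes needed for the quantized weights."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes_per_token(n_layers: int = 60, n_kv_heads: int = 8,
                             head_dim: int = 128,
                             bytes_per_elem: int = 2) -> int:
    """Bytes of K and V cached per token (fp16 cache by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

weights = weight_bytes(34e9, 4.0)        # 17e9 bytes at 4 bpw
per_tok = kv_cache_bytes_per_token()     # 240 KiB/token at fp16
budget = 24 * GIB - weights              # what's left of a 24 GB card
max_ctx = int(budget // per_tok)         # ignores activations/overhead
print(f"weights ≈ {weights / GIB:.1f} GiB, "
      f"KV ≈ {per_tok / 1024:.0f} KiB/token, "
      f"max context ≈ {max_ctx} tokens")
```

Quantizing the KV cache to 8-bit (halving `bytes_per_elem`) roughly doubles the token budget, which is one way the higher figure becomes plausible.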

[–] Combinatorilliance@alien.top 1 points 10 months ago (1 children)

I believe these are TheBloke's GGUF quants if anyone's interested: https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF

[–] WolframRavenwolf@alien.top 1 points 10 months ago (1 children)

Also note this important issue that affects this and all other Yi-based models:

BOS token as 1 seriously hurts these GGUF Yi models

[–] a_beautiful_rhind@alien.top 1 points 10 months ago (1 children)

So we can just skip BOS token on all these models?

[–] ambient_temp_xeno@alien.top 1 points 10 months ago

I ran gguf-py/scripts/gguf-set-metadata.py some-yi-model.gguf tokenizer.ggml.bos_token_id 144

and it changed the outputs a lot from yesterday.