this post was submitted on 13 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Which is the best 70B on your list?
I'm still working on the updated 70B comparisons/tests, but right now the top three models are the same as in the first part of my Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4): lzlv_70B, SynthIA-70B-v1.5, and chronos007-70B, followed by dolphin-2_2-yi-34b.
SynthIA-70B-v1.5 seems to have the same 2K context length as SynthIA-70B-v1.2, not the 4K context length of SynthIA-70B-v1.2b.
You're right about that observation: when I load the GGUF, KoboldCpp reports "n_ctx_train: 2048". Could that be an erroneous display? I've always used v1.5 with 4K context, did all my tests that way, and it's done so well. If the native context really is 2K, the model might perform even better at its native length! Still, 2K just doesn't cut it anymore.
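The "n_ctx_train" value KoboldCpp prints comes from the GGUF file's metadata (the `llama.context_length` key, per the GGUF spec), so you can check it yourself without loading the model. Below is a minimal sketch of that metadata layout: it writes a toy GGUF file containing only that one key and reads it back. A real model file carries many more keys plus tensor data, so this parser is illustrative, not a full GGUF reader.

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_UINT32 = 4  # value-type id for uint32 in the GGUF spec


def write_minimal_gguf(path, context_length):
    """Write a toy GGUF file holding only a llama.context_length key.

    (Real model files contain many more metadata keys plus tensor data.)
    """
    key = b"llama.context_length"
    with open(path, "wb") as f:
        f.write(GGUF_MAGIC)
        f.write(struct.pack("<I", 3))             # format version 3
        f.write(struct.pack("<Q", 0))             # tensor count
        f.write(struct.pack("<Q", 1))             # metadata KV count
        f.write(struct.pack("<Q", len(key)))      # key length
        f.write(key)                              # key bytes
        f.write(struct.pack("<I", GGUF_UINT32))   # value type
        f.write(struct.pack("<I", context_length))  # the trained context


def read_context_length(path):
    """Scan GGUF metadata and return llama.context_length (n_ctx_train)."""
    with open(path, "rb") as f:
        if f.read(4) != GGUF_MAGIC:
            raise ValueError("not a GGUF file")
        _version, = struct.unpack("<I", f.read(4))
        _tensors, = struct.unpack("<Q", f.read(8))
        n_kv, = struct.unpack("<Q", f.read(8))
        for _ in range(n_kv):
            klen, = struct.unpack("<Q", f.read(8))
            key = f.read(klen).decode("utf-8")
            vtype, = struct.unpack("<I", f.read(4))
            if key == "llama.context_length" and vtype == GGUF_UINT32:
                return struct.unpack("<I", f.read(4))[0]
            # A full parser would skip other value types here; our toy
            # file only ever contains the single key we wrote above.
            raise ValueError(f"unexpected key/type: {key}/{vtype}")
    return None


write_minimal_gguf("toy.gguf", 2048)
print(read_context_length("toy.gguf"))  # -> 2048
```

If a v1.5 GGUF really reports 2048 here while v1.2b reports 4096, the difference is baked into the file's metadata rather than being a display bug.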