this post was submitted on 29 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


For the sake of argument, let's say VRAM is no object.

If I set alpha_value to around 2.5 to 3 when loading a model with a native 4k context, I can get up to about 10k of context before things start noticeably falling apart. If I extend the context beyond that, even with a correspondingly higher alpha_value, the model just gets progressively less coherent.

I've found that I can mitigate this a little by juggling different alpha values at different context loads, but it never really becomes usable. It gets closer to where it needs to be, but it's still nothing I'd actually want to run.
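For reference, a minimal sketch of what alpha_value is generally understood to do in exllama-style loaders, assuming the commonly cited NTK-aware formula (the exact internals of any given loader may differ):

```python
# Sketch: NTK "alpha" scaling as commonly described for exllama-style loaders.
# Assumption: alpha_value rescales the RoPE base as base * alpha ** (d / (d - 2)).

def rope_base_for_alpha(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """Effective rotary base after applying an NTK alpha value."""
    return base * alpha ** (head_dim / (head_dim - 2))

for alpha in (1.0, 2.5, 3.0):
    print(f"alpha={alpha}: rope base ~ {rope_base_for_alpha(alpha):,.0f}")
```

A larger base stretches the wavelengths of the lower-frequency rotary dimensions, which is roughly why a 4k model can limp along to about 10k, but the stretch moves progressively further from anything the model saw in training.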

Is this just the nature of the beast when it comes to extending context?

top 5 comments
[–] FieldProgrammable@alien.top 1 points 11 months ago (1 children)

Yes. See this post and the graphs in it for an illustration of what happens to model performance at different context lengths (that post covers Llama 1 models with a 2k native context, so scale the X axis accordingly for Llama 2).

As you increase the RoPE scaling, the positional embeddings of the prompt deviate further and further from what the model was trained on. The different compression methods simply trade some performance at lower contexts for usable quality at longer ones. If the model is fine-tuned on the compressed scaling, some of those losses are recovered; that's what is done with models like SuperHOT and Llongma, which fine-tune on linearly RoPE-scaled data.
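To make the two approaches concrete, a toy sketch (standard Llama-style RoPE assumed; the constants and function name are illustrative) of how linear position interpolation and NTK-style base scaling change the rotary angles for an out-of-range position:

```python
# Toy comparison of linear position interpolation (SuperHOT/Llongma-style)
# versus NTK-style base scaling (the alpha_value approach) for Llama RoPE.
import numpy as np

BASE, HEAD_DIM = 10000.0, 128

def rope_angles(pos: int, scale: float = 1.0, alpha: float = 1.0) -> np.ndarray:
    base = BASE * alpha ** (HEAD_DIM / (HEAD_DIM - 2))   # NTK: stretch the base
    inv_freq = 1.0 / base ** (np.arange(0, HEAD_DIM, 2) / HEAD_DIM)
    return (pos / scale) * inv_freq                      # linear: shrink the position

# Position 8000 fed to a 2k-native Llama 1 model:
native = rope_angles(8000)             # far outside anything seen in training
linear = rope_angles(8000, scale=4.0)  # looks like position 2000 to the model
ntk    = rope_angles(8000, alpha=4.0)  # high-freq dims kept, low-freq dims stretched
print(native[:3], linear[:3], ntk[:3], sep="\n")
```

Linear scaling keeps every position inside the trained range but squeezes neighbouring tokens closer together, which is why SuperHOT-style models are fine-tuned on the scaled positions to recover precision.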

[–] qrios@alien.top 1 points 11 months ago

I don't think it's as simple as "the nature of the beast."

From my own experiments, you can maintain coherence by scaling positions more aggressively the further back they are, at some cost to accuracy. Content further back gets fuzzier but stays accessible, while more recent content still grounds the generation.

I haven't tested super thoroughly though.
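One way to picture that idea (a toy remapping for illustration only, not qrios's actual code): distances inside the native window are left alone, and anything further back is compressed progressively harder, so old tokens stay addressable but at coarser resolution:

```python
# Toy position remap: identity inside the native window, power-law
# compression beyond it, so distant context is "more confused but still
# accessible" while recent context keeps its exact positions.
import numpy as np

def remap_distance(dist: np.ndarray, native_window: int = 4096, strength: float = 0.5) -> np.ndarray:
    dist = dist.astype(float)
    over = np.maximum(dist - native_window, 0.0)
    return np.where(
        dist <= native_window,
        dist,                                          # recent: untouched
        native_window + over ** strength / strength,   # older: grows sub-linearly
    )

print(remap_distance(np.array([1000, 4096, 8192, 16384, 32768])).round())
```

How the remapped distances are then fed back into RoPE, and whether it needs fine-tuning to hold up, is exactly the part described above as not thoroughly tested.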

[–] mrjackspade@alien.top 1 points 11 months ago (1 children)

Switching to YaRN is the best option I'm aware of at the moment.

YaRN is basically dynamic alpha scaling with extra steps: it holds up better than plain scaling without fine-tuning, and it also benefits from fine-tuning.

https://private-user-images.githubusercontent.com/567732/276779985-6b37697c-896e-4199-a541-a489b6fad213.png
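Roughly what "dynamic alpha scaling with extra steps" means, sketched from a reading of the YaRN paper (constants and helper names here are illustrative, not any particular loader's API): fast-rotating rotary dimensions are left untouched, slow ones are linearly interpolated, with a ramp in between, plus a small attention-temperature correction:

```python
# Sketch of YaRN-style "NTK-by-parts" interpolation of rotary frequencies.
import math
import numpy as np

def yarn_inv_freq(scale: float, orig_ctx: int = 4096, head_dim: int = 128,
                  base: float = 10000.0, beta_fast: float = 32.0, beta_slow: float = 1.0) -> np.ndarray:
    inv_freq = 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)
    # How many full rotations each dimension completes over the original context.
    rotations = orig_ctx * inv_freq / (2 * math.pi)
    # Ramp: 1 = fast-rotating dim, leave as-is; 0 = slow dim, fully interpolate.
    ramp = np.clip((rotations - beta_slow) / (beta_fast - beta_slow), 0.0, 1.0)
    return inv_freq * ramp + (inv_freq / scale) * (1.0 - ramp)

def yarn_mscale(scale: float) -> float:
    # Mild attention-temperature correction applied as context is stretched.
    return 0.1 * math.log(scale) + 1.0 if scale > 1.0 else 1.0

print(yarn_inv_freq(scale=4.0)[:4], yarn_mscale(4.0))
```

Because the fast-rotating dimensions keep their trained behaviour, short-range structure survives the stretch better than with plain linear scaling, and fine-tuning on top improves things further.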

[–] SomeOddCodeGuy@alien.top 1 points 11 months ago

I've seen a couple of YaRN models, but I honestly have no idea how to use them, lol. Same with the Mistral models; they always want to load at 32k tokens, but the model's coherence just dies after 5k. I can't find clear instructions on what's needed to get maximum context out of either, so I tend to just avoid both at high context.
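For what it's worth, one low-effort sanity check (a sketch using the standard transformers config loader; the repo id is just a placeholder) is to read the model's config and see what context it was actually set up for, rather than trusting whatever the loader defaults to:

```python
# Inspect the advertised context setup of a model before picking a max context.
# Not every model defines all of these fields; YaRN-era repos may also need
# trust_remote_code=True to load their custom rope code.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("your/model-repo")  # placeholder repo id
for field in ("max_position_embeddings", "rope_theta", "rope_scaling", "sliding_window"):
    print(f"{field}: {getattr(cfg, field, None)}")
```

If rope_scaling is present, the loader has to actually honor it for the advertised context to mean anything, and a Mistral-style sliding_window of 4096 means the nominal 32k was never dense attention in the first place.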

[–] mcmoose1900@alien.top 1 points 11 months ago

Have you considered running a Yi 200K model instead?