Yes. See this post and the graphs in it for an illustration of what happens to model performance at different context lengths (that post covers Llama 1 models with a 2k native context, so scale the X axis accordingly for Llama 2).
As you increase the RoPE scaling, the positional embeddings of the prompt deviate further and further from what the model was trained on. The different compression methods simply trade usable quality at longer contexts for reduced performance at shorter contexts. Fine-tuning the model on the compressed scaling alleviates some of that loss; this is what models like SuperHOT and LLongMA do, fine-tuning on linearly RoPE-scaled data.
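To make that concrete, here's a minimal sketch of linear RoPE scaling (my own illustration, not code from the linked post; the function name and numpy framing are mine): positions are divided by a scale factor before the rotary angles are computed, so a 2k-trained model addresses longer prompts at interpolated positions it never saw in training.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, linear_scale=1.0):
    """Rotary embedding angles; linear_scale > 1 compresses positions
    ("linear RoPE scaling"), so e.g. 2x lets a 2k-context model address
    4k tokens, at positions it never saw during pretraining."""
    scaled = np.asarray(positions, dtype=np.float64) / linear_scale
    # Standard RoPE inverse frequencies, one per pair of dimensions.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(scaled, inv_freq)  # shape: (num_positions, dim // 2)

# With 2x scaling, position 4000 gets the angles position 2000 had
# during pretraining -- interpolation, not extrapolation.
angles = rope_angles(range(4096), dim=128, linear_scale=2.0)
```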
I don't think it's as simple as "the nature of the beast."
From my own experiments, you can maintain coherence by scaling positions more aggressively the further back they are, at some cost to accuracy. Stuff further back gets fuzzier but stays accessible, while more recent stuff still grounds the generation.
I haven't tested it super thoroughly, though.
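A toy version of what I mean (the piecewise form and the keep/scale values are purely illustrative, not what I actually ran): leave recent token distances exact and compress everything further back before computing the RoPE angles.

```python
import numpy as np

def remap_distance(d, keep=512, scale=4.0):
    """Piecewise remap of token distance before computing RoPE angles:
    distances up to `keep` stay exact, anything further back is
    compressed by `scale` -- coarser positions, but still reachable."""
    d = np.asarray(d, dtype=np.float64)
    return np.where(d <= keep, d, keep + (d - keep) / scale)

# Distance 512 stays 512; distance 4608 maps to 512 + 4096/4 = 1536,
# back inside the range a 2k-context model was actually trained on.
print(remap_distance([512, 4608]))  # [ 512. 1536.]
```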