this post was submitted on 24 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
While speed naturally decreases with context length, I wonder if what's happening here is that with a small context everything fits entirely in VRAM, but as the context grows (the KV cache grows with it), you exceed the available VRAM and part of the work gets offloaded to the CPU?
To check whether this is the issue, take a chat that's performing poorly and load a smaller model on it; if the smaller model stays fast at the same context length, VRAM is the bottleneck.
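If you want hard numbers rather than a guess, you can also watch VRAM while the slow chat is running. A minimal sketch of one way to do that (assumes an NVIDIA card with `nvidia-smi` on the PATH; the 2-second polling interval is arbitrary):

```python
import subprocess
import time

def vram_usage():
    """Return (used, total) VRAM in MiB for the first GPU, via nvidia-smi."""
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )
    used, total = (int(x) for x in out.strip().splitlines()[0].split(","))
    return used, total

# Poll while the slow chat runs in another window. If "used" sits at or just
# below "total" once the context gets long, you're spilling past VRAM.
while True:
    used, total = vram_usage()
    print(f"{used} / {total} MiB")
    time.sleep(2)
```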
If that is your issue, you can trim your context length so everything fits inside your available VRAM, or switch to a smaller model if you really need the longer context.
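For example, if you happen to be loading models through llama-cpp-python, context size and GPU offload are just two constructor parameters, so it's a quick experiment to shrink one or the other until everything stays in VRAM. A rough sketch under that assumption; the model path and numbers are placeholders, not recommendations:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window: lower this if long chats spill out of VRAM
    n_gpu_layers=-1,   # -1 offloads all layers; reduce if the weights alone don't fit
)

out = llm("Q: Why does generation slow down at long context? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Other local runners expose the same two knobs under different names, so the idea carries over.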