this post was submitted on 22 Nov 2023

Machine Learning

I'm looking for insights and advice on extending the context window of LLMs (specifically Mistral).

Whether you're a researcher, developer, or enthusiast in the field, I'd love to hear about your experiences and recommendations. Are there any specific techniques, methodologies, or tools you've found effective in extending the context window for LLMs?

Additionally, if you've encountered challenges in this area, how did you overcome them? Any resources, papers, or community discussions you can point me to would be greatly appreciated.

[–] sshh12@alien.top 1 points 10 months ago (1 children)

I've been working on some experimental context-window extensions using multimodal models: https://github.com/sshh12/multi_token

Similar to the idea of putting text into an image for GPT-4V, I'm just directly encoding chunks of text into embeddings and injecting them into the model. This gives you a very lossy 128x extension of your context window, which is pretty massive.
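To make the idea concrete, here's a rough sketch of how that kind of embedding injection could look (my own illustrative interpretation, not the actual multi_token code; the `ChunkCompressor` module, the 512-token chunks, and the 4 soft tokens per chunk are all assumptions picked to match the 128x figure):

```python
# Illustrative sketch (NOT the multi_token implementation): compress each
# chunk of token embeddings into a few "soft tokens" and splice them into
# the LLM's input embedding sequence via inputs_embeds.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

class ChunkCompressor(nn.Module):
    """Hypothetical module: maps a chunk of up to `chunk_len` token
    embeddings down to `n_soft` soft-token embeddings
    (512 -> 4 gives the 128x compression mentioned above)."""
    def __init__(self, hidden_size: int, n_soft: int = 4):
        super().__init__()
        # Learned queries cross-attend over the chunk's token embeddings.
        self.queries = nn.Parameter(torch.randn(n_soft, hidden_size) * 0.02)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads=8,
                                          batch_first=True)

    def forward(self, chunk_embeds: torch.Tensor) -> torch.Tensor:
        # chunk_embeds: (batch, chunk_len, hidden_size)
        q = self.queries.unsqueeze(0).expand(chunk_embeds.size(0), -1, -1)
        soft, _ = self.attn(q, chunk_embeds, chunk_embeds)
        return soft  # (batch, n_soft, hidden_size)

def build_inputs(context: str, prompt: str, compressor: ChunkCompressor,
                 chunk_len: int = 512) -> torch.Tensor:
    """Compress `context` chunk by chunk and prepend the resulting soft
    tokens to the embedded `prompt`."""
    embed = llm.get_input_embeddings()
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids[0]
    soft_parts = []
    for i in range(0, len(ctx_ids), chunk_len):
        chunk = ctx_ids[i:i + chunk_len].unsqueeze(0)   # (1, <=chunk_len)
        soft_parts.append(compressor(embed(chunk)))     # (1, n_soft, H)
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    return torch.cat(soft_parts + [embed(prompt_ids)], dim=1)

# Usage sketch: generation runs on the spliced embeddings rather than ids.
# compressor = ChunkCompressor(llm.config.hidden_size)
# inputs_embeds = build_inputs(long_document, "Question: ...", compressor)
# out = llm.generate(inputs_embeds=inputs_embeds, max_new_tokens=128)
```

The compressor would of course need to be trained first (e.g., frozen LLM, next-token loss over documents) before the soft tokens carry useful information, and the 128x squeeze is exactly where the lossiness comes from: 512 tokens of detail have to survive in 4 vectors.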

[–] Infamous-Belt8671@alien.top 1 points 10 months ago

Thanks for the input. This seems amazing.