Have you considered using sliding window techniques to expand the context window for LLMs? It's a common approach that can effectively increase the context window without overwhelming computational resources. Hierarchical approaches or incorporating external knowledge sources could also help extend the effective context. Good luck with your exploration!
I have been able to expand the context window of multimodal models like GPT-4 simply by rendering the text to images at a small font size and then feeding it in as images. I haven't done large-scale studies to measure the increase in perplexity or anything, but my empirical results have been great. Plus you get the ability to analyze non-standard text.
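The rendering step above can be sketched roughly like this. This is a minimal, hypothetical version using Pillow's default bitmap font; the wrapping is a naive character-count heuristic, and the resolution/font-size trade-off would need tuning for whatever vision model you feed it to:

```python
# Sketch: render long text as a compact image so a vision-language model
# can "read" it as image input. Font, wrap width, and sizes are assumptions.
from PIL import Image, ImageDraw

def text_to_image(text, width=1024, font_size=10, line_height=12):
    # Naive word wrap based on an approximate character width.
    chars_per_line = width // (font_size // 2)
    lines = []
    for paragraph in text.split("\n"):
        while len(paragraph) > chars_per_line:
            lines.append(paragraph[:chars_per_line])
            paragraph = paragraph[chars_per_line:]
        lines.append(paragraph)
    img = Image.new("RGB", (width, line_height * len(lines) + 8), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((4, 4 + i * line_height), line, fill="black")
    return img

img = text_to_image("some very long document " * 200)
```

How lossy this is in practice depends entirely on the model's OCR ability at that font size, which is exactly the perplexity study mentioned above.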
If it were me, I would LoRA-adapt a model to take in image input. There's a lot of completely barren space in the token embedding space that could be used for reasoning.
Wait, WHAT
https://arxiv.org/abs/2309.17453
This paper might be useful. They use window attention with attention sinks to deal with longer texts.
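The core cache-eviction idea from that paper (StreamingLLM) is simple to sketch: keep the first few "sink" tokens in the KV cache permanently, plus a sliding window of the most recent tokens, and evict everything in between. A minimal illustration of which cache positions survive (parameter names are mine, not the paper's code):

```python
# Attention-sink eviction policy: always retain the first `n_sink` token
# positions plus the most recent `window` positions in the KV cache.
def keep_positions(seq_len, n_sink=4, window=1020):
    if seq_len <= n_sink + window:
        return list(range(seq_len))  # nothing to evict yet
    return list(range(n_sink)) + list(range(seq_len - window, seq_len))

# At 2048 generated tokens with n_sink=4 and window=1020,
# only 1024 KV-cache entries are kept.
kept = keep_positions(2048)
```

The paper's observation is that evicting those first few tokens (which soak up disproportionate attention mass) is what destabilizes plain window attention, so keeping them recovers stable perplexity at long lengths.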
I've been working on some experimental context window extensions using multimodal models https://github.com/sshh12/multi_token
Similar to the idea of putting text into an image for GPT-4V, I'm just directly encoding chunks of text into embeddings and injecting them into the model. This gives you a very lossy 128x extension of your context window, which is pretty massive.
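To make the 128x figure concrete, here's a toy illustration of the compression shape: each 128-token chunk gets mapped to a single fixed-size vector, so n token positions become n/128 "soft" embedding positions. A real system like multi_token would use a trained encoder plus a projection into the LLM's embedding space; the hash-based encoder and whitespace tokenizer below are placeholder assumptions, there only to show the bookkeeping:

```python
# Toy sketch: compress each 128-token chunk of text into one vector.
import hashlib
import struct

CHUNK_TOKENS = 128
EMBED_DIM = 8  # real LLM embedding dims are far larger (~4096)

def toy_encode_chunk(tokens):
    # Deterministic pseudo-embedding from a hash of the chunk (very lossy!).
    digest = hashlib.sha256(" ".join(tokens).encode()).digest()
    return [struct.unpack("<i", digest[4 * i:4 * i + 4])[0] / 2**31
            for i in range(EMBED_DIM)]

def compress(text):
    tokens = text.split()  # stand-in for a real tokenizer
    return [toy_encode_chunk(tokens[i:i + CHUNK_TOKENS])
            for i in range(0, len(tokens), CHUNK_TOKENS)]

# 1024 tokens -> 8 injected embeddings: a 128x reduction in positions.
embs = compress("word " * 1024)
```

The interesting engineering question is how much of the chunk's information a trained encoder can actually preserve in one vector, which is where the "very lossy" caveat comes from.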
Thanks for the input. This seems amazing.