this post was submitted on 25 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
you are viewing a single comment's thread
I only recently found the right answer, which is to take the information and use Sparse Priming Representations (SPR) to distill it. Then feed that distilled text to privateGPT as a document for its vector DB. Since SPR condenses the text, you can fit more items into the retrieval phase.
Now query the LLM through the vector DB; thanks to the SPR-encoded text you get highly detailed and accurate results from a small LLM that is easy to run.
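To make the flow concrete, here's a minimal, self-contained sketch of that retrieve-then-ask pipeline. It's illustrative only: a toy bag-of-words similarity stands in for the real embedding model and vector DB that privateGPT would use, and the SPR notes are made-up examples.

```python
# Toy sketch of the SPR -> vector DB -> query flow.
# Bag-of-words cosine similarity stands in for real embeddings;
# a real setup would use privateGPT's embedding model and vector store.
import math
from collections import Counter

def embed(text):
    # Stand-in "embedding": term-frequency vector over lowercase words.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# SPR-condensed notes take the place of full documents in the store.
spr_notes = [
    "llama meta family of open weight language models",
    "spr sparse priming representation compress text for llm context",
    "privategpt local rag tool stores documents in a vector db",
]

def retrieve(query, notes, k=1):
    # Rank notes by similarity to the query and keep the top k.
    scored = sorted(notes, key=lambda n: cosine(embed(query), embed(n)),
                    reverse=True)
    return scored[:k]

# The retrieved SPR text is then prepended to the prompt for the LLM.
context = retrieve("how does spr compress text", spr_notes)
prompt = ("Context:\n" + "\n".join(context) +
          "\n\nQuestion: how does SPR compress text?")
```

Because the stored notes are already SPR-condensed, each retrieved chunk packs more meaning into the same context budget, which is the whole point of the trick.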
Hi! This is the first time I'm seeing SPR, is there any resource where I can learn more about it? I've seen privateGPT; I believe it's a front end that lets you upload files, and I guess it builds a database using something like ChromaDB that indexes what you feed it and takes it into consideration when giving answers, is that right?
SPR is not a technology in itself; it's a methodology to "compress information" in a way that lets the AI effectively achieve a larger context input for the same size. David does a great video explaining the methodology behind it. I've found it to be useful as hell.
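Since SPR is a prompting methodology rather than a library, the whole thing boils down to a distillation prompt you send to any LLM. Here's a rough sketch; the wording is my own paraphrase of the idea, not the original template, and the message format is the common chat-style `role`/`content` convention.

```python
# Hedged sketch of an SPR distillation request. The prompt wording is a
# paraphrase of the general idea, not an official SPR template.
SPR_SYSTEM_PROMPT = (
    "You are a Sparse Priming Representation (SPR) writer. "
    "Render the input as a short list of succinct statements, assertions, "
    "associations, and analogies, capturing as much meaning as possible "
    "in as few words as possible, so that a later LLM given only these "
    "cues can reconstruct the original ideas."
)

def build_spr_request(document_text):
    # Returns chat-style messages you could send to any local LLM.
    return [
        {"role": "system", "content": SPR_SYSTEM_PROMPT},
        {"role": "user", "content": document_text},
    ]

messages = build_spr_request("Long source document goes here...")
```

The LLM's reply is the condensed SPR text, which is what you'd then ingest into privateGPT's vector DB.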