this post was submitted on 30 Oct 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


I'm trying to build an application using RAG. I understand how RAG grounds the responses, but how do I handle generic queries from users that have nothing to do with what's stored in my vector database? For example: "How many gold medals did China win during the Tokyo Olympics?" vs. "Paraphrase this email for me: ...". I would assume an LLM without RAG would do a much better job answering the second question.

How do people usually handle these scenarios? Are there any tools that I can look at? Any help would be greatly appreciated. Thank you.

[–] DarthNebo@alien.top 1 points 2 years ago

The way to do this is to generate a bunch of hypothetical questions from the FAQ, then index those in the vector DB.
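A minimal sketch of that indexing step. The FAQ sections and generated questions here are made-up examples, and a plain Python list stands in for a real vector DB; in practice the questions would come from an LLM prompt like "write 5 questions this FAQ entry answers":

```python
# Hypothetical FAQ content (placeholder data for illustration).
faq_sections = {
    "returns": "Items can be returned within 30 days of delivery...",
    "shipping": "Standard shipping takes 3-5 business days...",
}

# Questions an LLM might generate for each section.
generated_questions = {
    "returns": ["How do I return an item?", "What is the return window?"],
    "shipping": ["How long does delivery take?", "When will my order arrive?"],
}

# The "vector DB": each entry pairs a generated question with the
# FAQ section it points back to. A real setup would store embeddings
# of the questions, with the section id as metadata.
index = [
    (question, section_id)
    for section_id, questions in generated_questions.items()
    for question in questions
]
```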

Then, for each user prompt, do a two-stage inference: a first pass with a very small context size that only determines whether the user is asking about something specifically covered in the FAQ. Retrieve the relevant FAQ section or source document only if the similarity score is above a threshold; otherwise, pass the prompt straight to the LLM without retrieval.
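The routing step above can be sketched as follows. This is a toy version: a bag-of-words cosine similarity stands in for real embeddings, the index entries and the 0.5 threshold are made-up placeholders, and in practice you would tune the threshold on held-out queries:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# (hypothetical question, FAQ section id) pairs, as indexed earlier.
INDEX = [
    ("How do I return an item?", "returns-policy"),
    ("How long does shipping take?", "shipping-info"),
]

THRESHOLD = 0.5  # arbitrary; tune on held-out queries

def route(user_query: str):
    """Stage 1: score the query against the indexed questions.
    Stage 2: retrieve only if the best score clears the threshold."""
    q = embed(user_query)
    best_score, best_section = max(
        (cosine(q, embed(question)), section) for question, section in INDEX
    )
    if best_score >= THRESHOLD:
        return ("rag", best_section)   # augment the prompt with this section
    return ("plain_llm", None)         # generic query: skip retrieval

route("how do i return an item")       # matches the FAQ index
route("paraphrase this email for me")  # falls through to the plain LLM
```

The point of the cheap first stage is that queries like "paraphrase this email" never get irrelevant FAQ chunks stuffed into their context, while on-topic questions still go through retrieval.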