this post was submitted on 08 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
You could do RAG with vector embeddings, but you could also just ask the LLM for a search query, use that to search an ordinary database, and that would still be RAG.
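A minimal sketch of that second approach, with no embeddings at all: the LLM rewrites the user's question into a search query, a plain keyword index is searched, and the hits go back into the prompt as context. The `llm` function here is a stub standing in for a real model call, and the documents are made up for the demo.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a local Llama endpoint).
    Here it just returns a canned query-rewriting response for the demo."""
    return "llama fine-tuning VRAM requirements"

# Tiny in-memory "database" standing in for whatever you actually search.
DOCS = [
    "Fine-tuning Llama 2 7B typically needs around 16 GB of VRAM with QLoRA.",
    "RAG retrieves documents at inference time instead of baking facts into weights.",
]

def keyword_search(query: str, docs: list[str]) -> list[str]:
    # Naive keyword match: return every doc sharing at least one query term.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def rag_answer(user_question: str) -> str:
    # Step 1: ask the LLM for a search query based on the user's question.
    query = llm(f"Write a short search query for: {user_question}")
    # Step 2: run an ordinary (non-vector) search with that query.
    hits = keyword_search(query, DOCS)
    # Step 3: answer with the retrieved text stuffed into the prompt.
    context = "\n".join(hits)
    return llm(f"Answer using this context:\n{context}\n\nQ: {user_question}")
```

No vector DB anywhere, yet the retrieve-then-generate loop is the same shape.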
This is interesting. Are you saying you have embeddings in a vector DB, and you ask the LLM to give you some kind of SQL query to search the vector DB?
Most often you search the vector DB with natural language; there is no special query schema. You do need to consider how the embedding model captures the vectors, so the embedded query is matched against documents in the same vector space. RAG also describes setups where the LLM drives the searches, which is the only way I have coded it: the user asks for something, but the LLM creates the search query based on that request and the conversation history.
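The LLM-driven flow above can be sketched like this: the LLM composes a natural-language query from the conversation history, the query is embedded with the same model used for the documents, and the nearest vectors are returned. The bag-of-words "embedding" and the `llm_write_query` stub are toy stand-ins for a real embedding model and a real model call.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a term-frequency vector. A real system would call one
    # embedding model for both documents and queries so they share a space.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "llama 2 context window is 4096 tokens",
    "rope scaling can extend the context window",
]
# Pre-embed the corpus, as a vector DB would at ingest time.
INDEX = [(d, embed(d)) for d in DOCS]

def llm_write_query(history: list[str]) -> str:
    """Stand-in for the LLM turning the conversation into a search query.
    In practice you would prompt the model with the full history."""
    return "extend context window"

def retrieve(history: list[str], k: int = 1) -> list[str]:
    # The LLM writes the query; we embed it and rank docs by similarity.
    qvec = embed(llm_write_query(history))
    ranked = sorted(INDEX, key=lambda p: cosine(qvec, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]
```

The key point: the "query" is just natural language, and matching happens in vector space, not via SQL.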