this post was submitted on 21 Nov 2023
LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

I want to use an open-source LLM as a RAG agent that also has memory of the current conversation (and eventually I want to work up to memory of previous conversations). I was looking into conversational retrieval agents from LangChain (linked below), but it seems they only work with OpenAI models. Is it possible to get an open-source LLM to work with RAG and conversational memory using LangChain?

https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents
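For concreteness, the pattern being asked about (retrieve documents, stuff the context plus the running chat history into the prompt, call the model, remember the turn) can be sketched without any framework. This is a minimal illustrative sketch, not LangChain's actual implementation: the `generate` stub stands in for whatever local LLM you load (e.g. via llama-cpp-python), and the keyword-overlap retriever stands in for a real vector store.

```python
# Toy conversational RAG loop: retrieve context, build a prompt that
# includes both the context and the conversation so far, call the model,
# then append the new turn to memory.

def retrieve(query, docs, k=2):
    """Naive keyword-overlap retriever (stands in for a vector store)."""
    qwords = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(qwords & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, context, history):
    """Combine retrieved context and chat history into one prompt."""
    turns = "".join(f"User: {u}\nAssistant: {a}\n" for u, a in history)
    ctx = "\n".join(context)
    return (
        "Answer the question using the context below.\n\n"
        f"Context:\n{ctx}\n\n"
        f"{turns}User: {query}\nAssistant:"
    )

def generate(prompt):
    """Stub for a local LLM call, e.g. llama_cpp.Llama(...)(prompt)."""
    return "stub answer"

class ConversationalRAG:
    def __init__(self, docs):
        self.docs = docs
        self.history = []  # memory of the current conversation

    def ask(self, query):
        context = retrieve(query, self.docs)
        answer = generate(build_prompt(query, context, self.history))
        self.history.append((query, answer))
        return answer
```

Swapping `generate` for a real local model call is the only open-source-specific piece; the retrieval and memory bookkeeping are model-agnostic, which is why frameworks can in principle support any backend here.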

1 comment
AndrewVeee@alien.top | 1 point | 10 months ago

I think you might be able to plug in another model as a chat model there. LangChain is pretty flexible, but I do remember being confused about the difference between chat models and LLMs. I think you can plug in any of these: https://python.langchain.com/docs/integrations/chat/

I quickly gave up on LangChain and went with a custom llama-cpp-python setup, because it was too difficult to figure out what LangChain was doing and customize the behavior.

But I also never got around to conversation memory, because my RAG prompt alone took a minute to start getting a response on my poor little laptop haha