All you need is a 32K-context LLM. Everything beyond that needs a tool invocation that pulls from the archived texts. You'll have to make your orchestrator smart enough to know that there is content beyond the context window and that the tool needs to be invoked.
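Something like the sketch below, maybe. Everything here is hypothetical (the names `CONTEXT_BUDGET_TOKENS`, `retrieve_archived`, and `call_llm` are placeholders, and the keyword-overlap relevance check stands in for whatever retrieval you actually use); it just shows the orchestrator deciding when to pull archived text back into the prompt.

```python
# Hypothetical sketch of the orchestration idea above; all names are placeholders.

CONTEXT_BUDGET_TOKENS = 32_000  # the assumed 32K window


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (~4 characters per token).
    return max(1, len(text) // 4)


def relevance(query: str, text: str) -> int:
    # Naive keyword overlap; a real orchestrator would use embeddings or BM25.
    return sum(1 for w in set(query.lower().split()) if w in text.lower())


def retrieve_archived(query: str, archive: list[str]) -> str:
    # Hypothetical tool call: pull the most relevant archived chunk.
    return max(archive, key=lambda chunk: relevance(query, chunk), default="")


def call_llm(prompt: str) -> str:
    # Placeholder for the local 32K model.
    return f"[answer generated from a {count_tokens(prompt)}-token prompt]"


def orchestrate(question: str, live_context: str, archive: list[str]) -> str:
    # The orchestrator must know that content exists beyond the live window
    # and invoke the retrieval tool when the question isn't covered by it.
    if relevance(question, live_context) == 0 and archive:
        live_context = retrieve_archived(question, archive) + "\n\n" + live_context

    # Trim the oldest context until the prompt fits the 32K budget.
    prompt = live_context + "\n\n" + question
    while count_tokens(prompt) > CONTEXT_BUDGET_TOKENS and live_context:
        live_context = live_context[len(live_context) // 2 :]
        prompt = live_context + "\n\n" + question

    return call_llm(prompt)
```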