I found it to be worse than OpenHermes 2.5. It just gives shorter, more robotic responses.
Same, I found it tends to give short responses.
But are the short responses more correct?
Exactly. It didn't hallucinate even once in my tests. I used RAG and it gave me perfect, to-the-point answers. I know most people want more verbose outputs; it's just that it's good for factual retrieval use cases.
This is a fine-tuned/instruction-tuned model. Explicit system prompts or instructions like “generate a long, detailed answer” can make the model generate longer responses. 🙂
--Kaokao, AI SW Engineer @ Intel
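For anyone wanting to try that, here's a minimal sketch of what it could look like with Hugging Face transformers, assuming the model ships a chat template. The model ID and prompt text below are placeholders, not something named in this thread:

```python
# Sketch: steering response length with an explicit system prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-instruction-tuned-model"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

messages = [
    # The system prompt explicitly asks for a long, detailed answer.
    {"role": "system", "content": "You are a helpful assistant. Generate a long, detailed answer."},
    {"role": "user", "content": "Explain how retrieval-augmented generation works."},
]

# Build the prompt with the model's own chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```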
Maybe for RAG, shorter answers are less prone to hallucination? I will test more, thanks.