this post was submitted on 23 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

founded 10 months ago

The new chat model released by Intel is now at the top of the OpenLLM leaderboard (among the 7B models).

https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

Shoddy_Vegetable_115@alien.top · 10 months ago

Exactly. It didn't hallucinate even once in my tests. I used RAG and it gave me perfect, to-the-point answers. I know most people want more verbose outputs; it's just that this model is good for factual retrieval use cases.
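For readers unfamiliar with the setup being described: RAG means retrieving relevant text first and instructing the model to answer only from it, which is what keeps answers short and factual. A minimal sketch of that retrieval step — the documents, the keyword-overlap scoring, and the prompt wording here are hypothetical stand-ins (a real pipeline would use embeddings and a vector store):

```python
# Toy RAG retrieval sketch: rank documents by keyword overlap with the query,
# then stuff the best match into the prompt as grounding context.
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = set(query.lower().split())
    # Score each document by how many query words it shares (crude but illustrative).
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

docs = [
    "Paris is the capital of France.",
    "The Llama models were released by Meta AI.",
]

question = "Who released the Llama models?"
context = retrieve(question, docs)[0]

# Constrain the model to the retrieved context to discourage hallucination.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The prompt string would then be sent to the chat model; the "answer only from this context" framing is what the commenter's factual-retrieval use case relies on.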

Intel@alien.top · 9 months ago

This is a fine-tuned/instruction-tuned model. Explicit system prompts or instructions like “generate a long, detailed answer” can make the model generate longer responses. 🙂
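As a concrete illustration of that suggestion, here is a hedged sketch of how such a system prompt could be assembled. The `### System:` / `### User:` / `### Assistant:` template is assumed from Intel's neural-chat model card and may differ for other checkpoints; the instruction text is just an example:

```python
def build_prompt(system: str, user: str) -> str:
    # Prompt template assumed from the neural-chat-7b-v3-1 model card;
    # other instruction-tuned models use different chat templates.
    return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

# Explicit length instruction in the system prompt, as suggested above.
prompt = build_prompt(
    "You are a helpful assistant. Generate a long, detailed answer.",
    "Why is the sky blue?",
)
```

The resulting string would be passed to the model's generation call (e.g. a Transformers `text-generation` pipeline); with chat-template-aware tooling, `tokenizer.apply_chat_template` would serve the same purpose.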

--Kaokao, AI SW Engineer @ Intel

julylu@alien.top · 10 months ago

Maybe for RAG, shorter answers are less prone to hallucination? I will test more. Thanks.