this post was submitted on 14 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

Couldn't wait for the great TheBloke to release it, so I've uploaded a Q5_K_M GGUF of Intel/neural-chat-7b-v3-1.

From some preliminary tests on PISA sample questions it seems at least on par with OpenHermes-2.5-Mistral-7B.

https://preview.redd.it/bkaezfb51c0c1.png?width=1414&format=png&auto=webp&s=735d0f03109488e01d65c1cf8ec676fa7e18c1d5
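
For anyone who wants to try the quant locally, here is a minimal sketch using llama-cpp-python. The GGUF file name and the prompt template are assumptions on my part, so adjust them to match the actual upload.

```python
# Minimal sketch: load the Q5_K_M GGUF with llama-cpp-python and run one completion.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-chat-7b-v3-1.Q5_K_M.gguf",  # hypothetical local path to the downloaded quant
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if a GPU build is installed; set 0 for CPU-only
)

# neural-chat-7b-v3-1 is usually prompted with an "### System / ### User / ### Assistant" template;
# plain text completion also works if that assumption is wrong.
prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nExplain in one sentence what a GGUF file is.\n"
    "### Assistant:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
print(out["choices"][0]["text"])
```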

[–] AdamDhahabi@alien.top 1 points 10 months ago (1 children)

Interested to know how it scores for RAG use cases; there is a benchmark for that: https://github.com/vectara/hallucination-leaderboard

Up to now, Mistral underperforms Llama2.
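
The leaderboard linked above works by having each model summarize short documents and then grading the summaries with Vectara's hallucination evaluation model. A rough sketch of that grading step is below; it assumes the evaluation model can be loaded as a sentence-transformers CrossEncoder (as its model card described at the time), and the document/summary pairs and the 0.5 cutoff are illustrative, not real benchmark data.

```python
# Sketch of scoring summaries for factual consistency with Vectara's evaluation model
# (assumption: vectara/hallucination_evaluation_model loads as a CrossEncoder).
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("vectara/hallucination_evaluation_model")

# (source document, model-generated summary) pairs -- toy examples only
pairs = [
    ("The city council met on Tuesday and approved the new budget.",
     "The council approved the budget at its Tuesday meeting."),
    ("The city council met on Tuesday and approved the new budget.",
     "The council rejected the budget after a heated Friday debate."),
]

# Scores near 1.0 mean the summary is consistent with the source; near 0.0 means hallucinated.
scores = scorer.predict(pairs)
hallucination_rate = sum(s < 0.5 for s in scores) / len(scores)
print(scores, hallucination_rate)
```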

[–] fakezeta@alien.top 1 points 10 months ago

Currently, all the finetuned versions of Mistral I've tested have a high rate of hallucination; this one also seems to have that tendency.