"llama2 7b > llama2 13b"
lol
Oof 3% is a lot
I don't think they actually tested base models. Look at the description of their methods: they don't run the models themselves, they only use public APIs. They say they used Mistral-Instruct, not Mistral, and those are not the same model; you shouldn't put "Mistral" in the table if you ran the tests on "Mistral-Instruct". There is no information about which model was actually used for the Llama test, or about the test's output. I suspect they used the Llama-2-Chat models, which were RLHF-tuned. Mistral-Instruct is not RLHF-tuned. It's likely that RLHF reduces the hallucination rate, and we're seeing its effects here.
Noob question: What is the recommended way to interact with a non-finetuned (base) model, i.e. one without chat tuning?
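Not a full answer, but for context: base models are plain text-completion models with no chat template or user/assistant roles, so they're typically steered with few-shot examples and left to continue the text. A minimal sketch of building such a prompt (the helper name and Q/A format are just illustrative, not from any particular library):

```python
def build_fewshot_prompt(examples, question):
    """Build a completion-style prompt for a base model from
    (question, answer) pairs; the model continues after the final 'A:'."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")  # leave the answer open-ended
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(
    [("What is the capital of France?", "Paris")],
    "What is the capital of Italy?",
)
print(prompt)
```

You'd then feed `prompt` to the model's raw completion endpoint (not a chat endpoint) and stop generation at the next `Q:` line.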
How is it possible that Llama2 13B and 7B have a lower hallucination rate than Claude?