For anyone wondering, you can actually rent Gaudi from Intel's Dev Cloud to finetune like this:
https://developer.habana.ai/intel-developer-cloud/
The blog cites $10/hour for 8 HPUs.
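For context, a minimal sketch of what a finetuning run on Gaudi HPUs can look like with Hugging Face's optimum-habana library. The class names (GaudiTrainer, GaudiTrainingArguments, GaudiConfig) are the library's real API as far as I know, but the base model, the "Habana/llama" Gaudi config, and the train.txt file are placeholders I'm assuming for illustration; check the optimum-habana docs before running this on a rented instance.

```python
# Sketch: causal-LM finetuning on Gaudi HPUs via optimum-habana (assumptions noted below).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model (gated; swap in your own)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text training file; tokenize into fixed-length examples.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM: labels mirror the inputs
    return out

train_ds = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Gaudi-specific pieces: a Gaudi config from the Hub (assumed name) and HPU flags.
gaudi_config = GaudiConfig.from_pretrained("Habana/llama")
training_args = GaudiTrainingArguments(
    output_dir="./llama2-gaudi-finetune",
    use_habana=True,        # run on HPU instead of GPU/CPU
    use_lazy_mode=True,     # Gaudi lazy-execution mode
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=training_args,
    train_dataset=train_ds,
)
trainer.train()
```

Scaling to all 8 HPUs on one of those $10/hour instances is normally done by launching the same script with Habana's distributed launcher rather than changing the training code.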
Intel has entered the game. Things are getting interesting.
If we ever get access to a Mistral or Yi model in the ~70B range, I think a lot of companies are going to be in trouble with their current models.
Interested to know how it scores for RAG use cases. There is a benchmark for that: https://github.com/vectara/hallucination-leaderboard
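That leaderboard scores models with Vectara's hallucination evaluation model, which judges whether a generated summary is consistent with its source passage. A rough sketch of running it yourself is below; loading it as a sentence-transformers CrossEncoder matches the older releases of that model, and newer HHEM versions may expose a different API, so treat the exact loading path as an assumption.

```python
# Sketch: scoring summary faithfulness with Vectara's hallucination evaluation model.
from sentence_transformers import CrossEncoder

# Assumed loading path; newer releases of this model may require a different API.
scorer = CrossEncoder("vectara/hallucination_evaluation_model")

# Each pair is (source passage, model-generated summary); higher scores mean
# the summary is more consistent with the source, lower scores suggest hallucination.
pairs = [
    ("The Eiffel Tower is in Paris and was completed in 1889.",
     "The Eiffel Tower, finished in 1889, stands in Paris."),
    ("The Eiffel Tower is in Paris and was completed in 1889.",
     "The Eiffel Tower was built in Berlin in 1920."),
]
scores = scorer.predict(pairs)
for (source, summary), score in zip(pairs, scores):
    print(f"{score:.3f}  {summary}")
```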
So far, Mistral underperforms Llama 2 on that leaderboard.
Currently, all the finetuned versions of Mistral I've tested have had a high rate of hallucination; this one also seems to have that tendency.
Thank you for your work! Is it possible to download this model if I can't run Ollama? I couldn't find a download link or an HF repo.