this post was submitted on 16 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


So Mistral-7B is a pretty impressive 7B-parameter model ... but why is it so capable? Do we have any insights into its training dataset? Was it trained far beyond the compute-optimal scaling point? Have there been any attempts at open reproductions, or merges to scale up the number of parameters?

[–] meetrais@alien.top 1 points 11 months ago (3 children)

I second this. Mistral-7B gave me good results, and after fine-tuning its results were even better.
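
For anyone curious what that looks like in practice, here is a minimal QLoRA-style fine-tuning sketch. The hyperparameters, model revision, and hardware assumptions are placeholders, not the setup described above:

```python
# Sketch: parameter-efficient fine-tuning of Mistral-7B with QLoRA.
# Assumes a single GPU with ~24 GB VRAM; all hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit so it fits on consumer hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for training (casts norms, enables checkpointing).
model = prepare_model_for_kbit_training(model)

# Train small LoRA adapters instead of updating all 7B weights.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable

# From here, train with a standard loop or trainer (e.g. transformers.Trainer)
# on your instruction dataset.
```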

[–] PwanaZana@alien.top 1 points 11 months ago (1 children)

Are there any notable fine-tunes, to your knowledge? I just started using LLMs today, beginning with OpenOrca Mistral-7B, and it seems pretty good.

[–] meetrais@alien.top 1 points 11 months ago

You can find many fine-tuned and quantized variants on Hugging Face. Look for models from TheBloke.
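
As a concrete example, here is a minimal sketch of loading one of TheBloke's GGUF quantizations with the ctransformers library. The repo and file names are assumptions based on TheBloke's usual naming scheme, so verify them on the model card:

```python
# Sketch: run a quantized Mistral-7B-OpenOrca GGUF from TheBloke locally.
# Repo and file names are assumptions; check the Hugging Face model card.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-GGUF",           # quantized repo on Hugging Face
    model_file="mistral-7b-openorca.Q4_K_M.gguf",  # 4-bit "medium" quantization
    model_type="mistral",
    gpu_layers=35,  # number of layers to offload to GPU; set 0 for CPU-only
)

print(llm("What is the capital of France?", max_new_tokens=64))
```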
