this post was submitted on 30 Oct 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

Wondering what everyone thinks in case this is true. It seems they're already beating all open source models including Llama-2 70B. Is this all due to data quality? Will Mistral be able to beat it next year?

Edit: Link to the paper -> https://arxiv.org/abs/2310.17680

https://preview.redd.it/kdk6fwr7vbxb1.png?width=605&format=png&auto=webp&s=21ac9936581d1376815d53e07e5b0adb739c3b06

[–] BalorNG@alien.top 1 points 10 months ago (5 children)

Given how good 7B Mistral is in my personal experience, it no longer seems implausible that a model 3x its size could be GPT-3.5 Turbo.
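As a quick sanity check on the "3x its size" figure (my own back-of-the-envelope arithmetic, not from the thread; the 20B number is the figure reported in the linked paper, and ~7.3B is Mistral 7B's approximate parameter count):

```python
# Rough ratio between the claimed GPT-3.5 Turbo size and Mistral 7B.
mistral_params = 7.3e9        # approx. parameter count of Mistral 7B
claimed_gpt35_params = 20e9   # figure reported in arXiv:2310.17680

ratio = claimed_gpt35_params / mistral_params
print(f"{ratio:.1f}x")  # about 2.7x, i.e. roughly "3x its size"
```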

[–] Fun_Analyst_1234@alien.top 1 points 10 months ago

I think so too. I really hope those guys get funded to improve the model. Serious talent on that team.
