this post was submitted on 19 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


We've seen pretty amazing performance from Mistral 7B when compared with Llama 34B and Llama 2 13B. I'm curious: theoretically, would it be possible to build an SLM, with 7-8B parameters, that outperforms GPT-4 in all tasks? If so, what are the potential difficulties / problems to solve? And when do you expect such an SLM to arrive?

PS: sorry for the typo. This is my real question.

Is it possible for SLM to outperform GPT4 in all tasks?
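Part of why the 7-8B size class keeps coming up is that its weights fit on consumer hardware. A rough back-of-envelope sketch of the weight-memory arithmetic (my own illustration, not from the thread; it ignores KV cache, activations, and runtime overhead):

```python
# Approximate weight storage for a 7B-parameter model at common precisions.
# Back-of-envelope only: real memory use is higher (KV cache, activations, etc.).

def weights_gib(n_params: float, bytes_per_param: float) -> float:
    """Weight storage in GiB = params * bytes-per-param / 2^30."""
    return n_params * bytes_per_param / 1024**3

n = 7e9  # 7B parameters
for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{weights_gib(n, bpp):.1f} GiB")
# fp16 lands around 13 GiB, int4 around 3 GiB -- the latter fits
# comfortably in consumer GPU VRAM or laptop RAM.
```

That gap between ~13 GiB and ~3 GiB is why quantized 7B models are the ones people actually run locally.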

[–] vasileer@alien.top 1 points 2 years ago

"A 34B model beating all 70Bs and achieving the same perfect scores as GPT-4 and Goliath 120B in this series of tests!"

https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/

from a link another commenter posted