this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.


Look at this: apart from Llama 1, all the other "base" models will likely answer "language" after "As an AI". That means Meta, Mistral AI, and 01-ai (the company behind Yi) likely trained their "base" models on GPT instruct datasets to inflate benchmark scores and make it look like the "base" models had a lot of potential. We got duped hard on that one.


https://preview.redd.it/vqtjkw1vdyzb1.png?width=653&format=png&auto=webp&s=91652053bcbc8a7b50bced9bbf8638fa417387bb
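The probe described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual methodology: `next_token` is a hypothetical stand-in for whatever returns a model's top-1 greedy continuation of a prompt (e.g. one decode step in your inference library), and `stub_model` just mimics the behavior shown in the screenshot.

```python
def looks_contaminated(next_token, prompt="As an AI"):
    """Return True if the model's most likely continuation of the prompt
    is "language" -- the ChatGPT boilerplate completion, which would hint
    that GPT-style instruct data leaked into the base model's pretraining."""
    return next_token(prompt).strip().lower() == "language"

# Stand-in "model" that always continues with " language", as the
# screenshot suggests the post-Llama-1 base models do:
def stub_model(prompt):
    return " language"
```

Running `looks_contaminated` against a real base model (instead of the stub) would reproduce the check the post is making.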

[–] FPham@alien.top 1 points 1 year ago

Shouldn't the proof be in the pudding?

If Mistral 7B is better than most other 7b models, then they did something right, no?

I understand that the base model can then inherit some biases - but it's on them that they didn't clean those "As an AI..." answer strings from their dataset. So despite this, it performs better.
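The cleanup the comment says the vendors skipped can be sketched as a simple string filter over the dataset. The phrase list here is illustrative only - real pipelines would use a much longer and more careful set of patterns:

```python
# Hypothetical boilerplate markers; not an exhaustive or official list.
BOILERPLATE = ("as an ai language model", "as an ai,", "i cannot fulfill")

def is_clean(sample: str) -> bool:
    """True if the sample contains none of the ChatGPT-style refusal phrases."""
    lowered = sample.lower()
    return not any(phrase in lowered for phrase in BOILERPLATE)

def scrub(dataset):
    """Drop samples carrying the boilerplate before training."""
    return [s for s in dataset if is_clean(s)]
```

Even a crude pass like this would have removed the "As an AI..." strings the thread is about.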