
According to this tweet,

when gpt4 first finished training it didn’t actually work very well and the whole team thought it’s over, scaling is dead…until greg went into a cave for weeks and somehow magically made it work

So GPT-4 was apparently kind of broken at first; then Greg spent a few weeks on it and somehow got it working.

So why did it not work at first, and how did they fix it? I think this is an important question for the OSS community.

[–] wojtek15@alien.top 1 points 10 months ago

According to https://openai.com/research/gpt-4, they were able to predict GPT-4's performance while it was still training, which contradicts this tweet.

[–] dogesator@alien.top 1 points 9 months ago

Predicting the loss is very different from predicting real-world abilities; they were able to do the former, not the latter.

Predicting the final loss once you're already 10% into training is fairly straightforward, since you can fit a scaling-law curve to the losses you've seen and extrapolate. Predicting the actual downstream abilities, though, is not.
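For the curious, here's a minimal sketch of what that kind of extrapolation can look like: fitting a power law L(C) = L_inf + a * C^(-b) to early-training losses and projecting it out to the full compute budget. This is just the generic scaling-law technique, not OpenAI's actual code, and every number in it is made up for illustration.

```python
# Minimal sketch (assumed technique, not OpenAI's actual method):
# fit a power law to losses from early in training, then extrapolate
# to the full compute budget.
import numpy as np
from scipy.optimize import curve_fit

def power_law(c, l_inf, a, b):
    # Irreducible loss plus a term that decays as a power of compute.
    return l_inf + a * c ** (-b)

# Hypothetical (compute, loss) measurements from the first ~10% of a run.
compute = np.array([1e3, 2e3, 4e3, 8e3, 1.6e4, 3.2e4])
loss = np.array([3.21, 2.93, 2.70, 2.51, 2.36, 2.23])

# Fit the curve to the early points, then project 10x further out.
params, _ = curve_fit(power_law, compute, loss, p0=(1.0, 10.0, 0.3))
full_budget = 3.2e5
print(f"predicted loss at full budget: {power_law(full_budget, *params):.2f}")
```

The catch is exactly the point above: a smooth curve like this can nail the final loss while telling you almost nothing about which specific capabilities will actually show up at the end of training.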