Machine Learning · posted 29 Nov 2023

[–] zalperst@alien.top 1 points 11 months ago (9 children)

It's extremely surprising, given that many instances of data are seen only once, or very few times, by the model during training

[–] cegras@alien.top 1 points 11 months ago (2 children)

What is the size of ChatGPT, or of the biggest LLMs, compared to their training datasets? (Not being rhetorical, genuinely curious)

[–] zalperst@alien.top 1 points 11 months ago

Trillions of tokens, billions of parameters

[–] StartledWatermelon@alien.top 1 points 11 months ago

GPT-4: 1.76 trillion parameters, about 6.5* trillion tokens in the dataset.

* Could be twice that; the leaks weren't crystal clear. The number above is the more likely one, though.
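
A quick back-of-envelope check on those figures (a sketch only; both numbers come from unconfirmed leaks, as the footnote says):

```python
# Ratio of dataset tokens to model parameters, using the
# leaked (unconfirmed) GPT-4 figures quoted in this thread.
params = 1.76e12          # ~1.76 trillion parameters (leaked estimate)
dataset_tokens = 6.5e12   # ~6.5 trillion tokens (leaked estimate)

print(f"tokens per parameter: {dataset_tokens / params:.1f}")      # ~3.7

# If the dataset were twice as large (the footnote's alternative),
# the ratio roughly doubles:
print(f"with a 2x dataset:    {2 * dataset_tokens / params:.1f}")  # ~7.4
```

Either way the ratio stays in the single digits, which fits the point upthread: with roughly one pass over a dataset of that size, each example is seen only once or a handful of times.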