DeepMind released the Training Compute-Optimal Large Language Models paper (the "Chinchilla" paper) in 2022, which describes scaling laws for LLMs. As far as I understand, it is the most widely cited reference for estimating the optimal relationship between dataset size, compute budget, and model size.
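To make that relation concrete, here is a minimal Python sketch of the parametric loss and the closed-form compute-optimal allocation, assuming the fitted constants from the paper's third fitting approach (roughly E = 1.69, A = 406.4, B = 410.7, alpha = 0.34, beta = 0.28, as I recall them; treat them as approximate) and the usual C ≈ 6·N·D FLOPs rule of thumb. The paper's other fitting approaches boil down to the often-quoted heuristic of roughly 20 training tokens per parameter, so the ratios below should be read as one fit among several rather than the definitive answer.

```python
"""
A minimal sketch of the Chinchilla (Hoffmann et al., 2022) compute-optimal
allocation, using the parametric loss from the paper's "Approach 3":

    L(N, D) = E + A / N**alpha + B / D**beta

where N is the number of parameters and D the number of training tokens.
The constants below are approximate recollections of the paper's fitted
values, and C ~= 6 * N * D is the usual FLOPs approximation.
"""

# Approximate fitted constants (Chinchilla, Approach 3).
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28


def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA


def compute_optimal(flops: float) -> tuple[float, float]:
    """Minimise loss(N, D) subject to 6 * N * D = flops (closed form)."""
    g = (ALPHA * A / (BETA * B)) ** (1 / (ALPHA + BETA))
    n_opt = g * (flops / 6) ** (BETA / (ALPHA + BETA))
    d_opt = (flops / 6) / n_opt
    return n_opt, d_opt


if __name__ == "__main__":
    # Example: a hypothetical 1e23 FLOP training budget.
    n_opt, d_opt = compute_optimal(1e23)
    print(f"params ~ {n_opt:.3e}, tokens ~ {d_opt:.3e}, "
          f"tokens/param ~ {d_opt / n_opt:.1f}, loss ~ {loss(n_opt, d_opt):.3f}")
```

My question is essentially whether curves like this still hold once the tokens stop being "average web tokens" and become carefully curated data.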

Recently, a number of models have been developed with far less data, far fewer parameters, and far less compute than the bigger LLMs, yet they have achieved strong results thanks to much higher data quality; examples include WizardLM, TinyStories, and phi-1. Similarly, a lot of research seems to imply that better data alone could offer large improvements without any other changes.

I'm curious about the role data quality plays in the training of LLMs.
Are the values estimated by the Chinchilla scaling laws still optimal for these smaller models trained on curated data?
Do we have any way to estimate the quality of a dataset, and are there scaling laws that take data quality into account?
Are there any relevant projects or research I could check out that focus on building large, high-quality datasets for training bigger LLMs?

scorpfromhell@alien.top 1 points 11 months ago

What if the conversation moves away from data quality / contamination to mixing skills?

https://arxiv.org/abs/2310.17567