this post was submitted on 27 Oct 2023
1 points (100.0% liked)

Machine Learning


In 2022, DeepMind released the paper "Training Compute-Optimal Large Language Models" (the Chinchilla paper), which describes scaling laws for LLMs. As far as I understand, this is the most authoritative reference for estimating the optimal relationship between dataset size, compute budget, and model size.
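
For reference, the paper's parametric fit models the final loss as L(N, D) = E + A/N^alpha + B/D^beta, where N is the parameter count and D the number of training tokens. Below is a minimal sketch (my own; the constants are the fitted values reported in the paper, the C ~ 6*N*D FLOP approximation is the usual one, and the function names are mine) of how that fit turns a compute budget into an optimal N and D:

```python
# Sketch of the Chinchilla parametric loss and the compute-optimal split it
# implies. Constants are the fitted values reported in Hoffmann et al. (2022);
# everything else here is illustrative.
A, B, E = 406.4, 410.7, 1.69   # fitted coefficients
alpha, beta = 0.34, 0.28       # fitted exponents

def loss(N: float, D: float) -> float:
    """Predicted pre-training loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

def compute_optimal(C: float) -> tuple[float, float]:
    """Minimize loss(N, D) subject to C ~ 6*N*D (training FLOPs approximation)."""
    G = (alpha * A / (beta * B)) ** (1 / (alpha + beta))
    N_opt = G * (C / 6) ** (beta / (alpha + beta))
    D_opt = (C / 6) / N_opt
    return N_opt, D_opt

# Example: the ~5.76e23 FLOP budget used for Gopher/Chinchilla.
N, D = compute_optimal(5.76e23)
print(f"N ~ {N:.3g} params, D ~ {D:.3g} tokens, predicted loss ~ {loss(N, D):.3f}")
```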

Recently, a number of models have been developed with far less data, far fewer parameters, and far less compute than the bigger LLMs, yet they achieve strong results thanks to much higher data quality; examples include WizardLM, TinyStories, and phi-1. Similarly, a lot of research suggests that better data alone could offer big improvements without any other changes.

I'm curious about what role data quality plays in the training of LLMs.
Are the optimal values estimated by the Chinchilla scaling laws still valid for these smaller models trained on curated data?
Do we have any way to quantify the quality of a dataset, and any scaling laws that take it into account?
Are there any relevant projects or research I could check out that focus on building large, high-quality datasets for training bigger LLMs?

top 3 comments
[–] thedabking123@alien.top 1 points 1 year ago

Measuring and improving the quality of NLP datasets in a comprehensive way is probably the main migraine there.

You can measure and improve quality along many dimensions that practitioners disagree on (accuracy, completeness, consistency, timeliness, validity, and uniqueness are common ways to slice data quality), and there's no single consistent measure for some of those either.
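
As a toy illustration (mine, not a standard metric or anyone's real pipeline), here are just two of those dimensions, uniqueness and completeness, scored for a tiny corpus; real pipelines use near-duplicate detection and much richer validity/accuracy checks, and how you'd weight the dimensions into one number is exactly where people disagree:

```python
# Toy data-quality heuristics for a text corpus: exact-duplicate rate and
# empty-record rate. Purely illustrative; not a comprehensive quality measure.
corpus = [
    "The cat sat on the mat.",
    "The cat sat on the mat.",   # exact duplicate
    "",                          # empty / incomplete record
    "Scaling laws relate loss to model size and data size.",
]

def uniqueness(docs: list[str]) -> float:
    """Fraction of documents that are not exact duplicates."""
    return len(set(docs)) / len(docs)

def completeness(docs: list[str]) -> float:
    """Fraction of documents that are non-empty after stripping whitespace."""
    return sum(1 for d in docs if d.strip()) / len(docs)

print(f"uniqueness:   {uniqueness(corpus):.2f}")   # 0.75
print(f"completeness: {completeness(corpus):.2f}")  # 0.75
```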

[–] scorpfromhell@alien.top 1 points 1 year ago

What if the conversation moves away from data quality / contamination to mixing skills?

https://arxiv.org/abs/2310.17567

[–] norbertus@alien.top 1 points 1 year ago

It seems that dataset size and quality matter more than the raw number of parameters, as many LLMs are under-trained:

Though there has been significant recent work allowing larger and larger models to be trained, our analysis suggests an increased focus on dataset scaling is needed. Speculatively, we expect that scaling to larger and larger datasets is only beneficial when the data is high-quality

https://arxiv.org/pdf/2203.15556.pdf

This under-training appears to be significant:

We conducted our investigation as a case study of the OPT-66B model, a 66-billion-parameter LLM that was open-sourced by Meta last year to serve as an open replica of GPT-3 (and was the largest publicly available decoder-only LLM at the time of our study). We found that a significant portion of the model could be discarded without affecting performance, indicating that OPT-66B and quite likely other prominent LLMs are undertrained.

...

We found that important attention heads are primarily clustered in the model’s intermediate layers, and important FFNs are primarily in later layers. The ability to perform zero-/few-shot in-context learning on 14 different natural-language-processing (NLP) datasets/tasks stayed nearly intact when up to 70% (~15.7B parameters in OPT-66B) of the attention heads are removed.

https://www.amazon.science/blog/do-large-language-models-really-need-all-those-layers
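
Not the study's actual procedure, but a minimal sketch of the ablation idea behind it, under the assumption that head importance is scored by how much held-out performance drops when a head's output is zeroed out (the evaluation here is faked with made-up numbers purely for illustration):

```python
import numpy as np

# Toy sketch of ablation-based head importance. Illustrative only: the real
# study evaluates OPT-66B on 14 NLP tasks; this fakes the evaluation.
rng = np.random.default_rng(0)
N_LAYERS, N_HEADS = 4, 8  # tiny stand-in; a real model has far more heads

# Pretend per-head contribution to task accuracy. In a real run you would
# re-evaluate the model on held-out prompts with that head's output zeroed.
BASE_ACC = 0.70
head_effect = rng.normal(loc=0.002, scale=0.004, size=(N_LAYERS, N_HEADS))

def evaluate(head_mask: np.ndarray) -> float:
    """Fake evaluation: accuracy drops by the summed effect of masked-out heads."""
    return BASE_ACC - float((head_effect * (1.0 - head_mask)).sum())

# Importance of a head = accuracy lost when that single head is ablated.
full_mask = np.ones((N_LAYERS, N_HEADS))
importance = np.zeros((N_LAYERS, N_HEADS))
for layer in range(N_LAYERS):
    for head in range(N_HEADS):
        mask = full_mask.copy()
        mask[layer, head] = 0.0  # ablate one head
        importance[layer, head] = evaluate(full_mask) - evaluate(mask)

# Keep only the top 30% of heads (roughly the regime the study probes) and
# check how much accuracy the pruned model retains.
k = int(0.3 * importance.size)
threshold = np.sort(importance, axis=None)[-k]
pruned_mask = (importance >= threshold).astype(float)
print(f"full model:       {evaluate(full_mask):.3f}")
print(f"top-30% of heads: {evaluate(pruned_mask):.3f}")
```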