Man, 30T tokens deduplicated is a lot of data.
For reference, Llama 2 was trained on 2T tokens and GPT-4 was believed to have been trained on 13T tokens (and my suspicion is Turbo was too). This is much, much more than that.
20B documents that are deduplicated.
I wonder if we'll see an even slimmer version.
Does this have the same level of deduplication as SlimPajama, or do we need a SlimPajama v2?
If Chinchilla is right, this dataset could be huge for small models.
https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
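To put rough numbers on that, here's a back-of-envelope sketch using the ~20 training tokens per parameter heuristic commonly quoted from the Chinchilla paper (Hoffmann et al., 2022) — the model sizes are just illustrative, not anything from the dataset card:

```python
# Back-of-envelope: Chinchilla-optimal token counts vs. a 30T-token corpus.
# Assumes the ~20 tokens-per-parameter rule of thumb; purely illustrative.
for params_b in [1, 3, 7, 13, 70]:
    optimal_tokens_b = params_b * 20              # compute-optimal tokens, in billions
    headroom = 30_000 / optimal_tokens_b          # how many times over 30T covers that
    print(f"{params_b:>3}B params -> ~{optimal_tokens_b:>5}B tokens optimal, "
          f"~{headroom:,.0f}x that much data available")
```

In other words, even a 70B model trained compute-optimally would only need a small fraction of 30T tokens, so small models have data to spare.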
Is there any way we can read those datasets? I'm a noob when it comes to what's under the hood. On Hugging Face it looks like they tried to upload the dataset but it failed, likely due to the sheer size of the thing...
How much free space is required to do a "git clone ..."?
Is there a better method to download the data without requiring additional space for the git history (.git)? If yes, how big is the whole dataset?
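For what it's worth, here's a minimal sketch of two ways to avoid the .git history overhead, assuming the data is hosted as ordinary files on the Hugging Face Hub. The repo id is a placeholder (the thread doesn't name the exact repo), so substitute the real id from the dataset page:

```python
# Two ways to read a huge Hub dataset without `git clone`.
# "org/dataset-name" is a placeholder repo id, not the real one.
from datasets import load_dataset
from huggingface_hub import snapshot_download

REPO_ID = "org/dataset-name"  # placeholder -- replace with the actual dataset id

# Option 1: stream records over HTTP. Only a small local cache is used,
# so you can inspect documents without downloading the whole corpus.
ds = load_dataset(REPO_ID, split="train", streaming=True)
for i, doc in enumerate(ds):
    print(doc)
    if i == 2:
        break

# Option 2: download the raw data files directly from the Hub.
# snapshot_download fetches files without git or its history, and
# allow_patterns lets you grab only a subset of the files.
local_dir = snapshot_download(
    repo_id=REPO_ID,
    repo_type="dataset",
    allow_patterns=["*.jsonl*"],  # adjust to the dataset's actual file layout
)
print("downloaded to:", local_dir)
```

Streaming is the cheapest way to just look at the data; the snapshot download still needs enough disk for whatever files you pull, but nothing extra for history.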
Given the current developments: maybe someone should start collecting the raw data and serving it as torrents. ... Just in case.