AI-Guru011010

joined 1 year ago
AI-Guru011010@alien.top 1 points 1 year ago

With bfloat16 and flash attention you can fully pretrain a 200M-parameter encoder-decoder model on millions of data samples in as little as a couple of weeks. You'll really have to focus on optimizing your workflow so that GPU compute is the bottleneck; you don't want the GPU sitting around waiting for data or anything. I've also been able to train models with >650M parameters and a sequence length of 4096 on a single A100 using Hugging Face Accelerate, albeit much more slowly.
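
Roughly, that setup looks something like the sketch below. The model config, batch size, learning rate, and the toy dataset are just placeholders (not my actual pipeline), and it assumes a recent transformers/accelerate plus flash-attn installed and an architecture that supports FlashAttention 2:

```python
# Rough sketch: bf16 mixed precision + FlashAttention 2 + Accelerate, with the
# dataloader tuned so batches are prepared while the GPU is still busy.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoConfig, AutoModelForSeq2SeqLM
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="bf16")  # bfloat16 autocast during training

# Pretraining from scratch, so build the model from a config rather than loading weights.
config = AutoConfig.from_pretrained("facebook/bart-base")  # placeholder encoder-decoder architecture
model = AutoModelForSeq2SeqLM.from_config(
    config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # needs flash-attn installed
)

class ToyDataset(Dataset):
    """Stand-in for a real pretraining corpus; random token ids only."""
    def __len__(self):
        return 1024
    def __getitem__(self, idx):
        ids = torch.randint(0, config.vocab_size, (512,))
        return {"input_ids": ids, "attention_mask": torch.ones_like(ids), "labels": ids.clone()}

# Keep the GPU fed: several workers + pinned memory so data loading overlaps with compute.
train_loader = DataLoader(
    ToyDataset(),
    batch_size=64,
    shuffle=True,
    num_workers=8,
    pin_memory=True,
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)

model.train()
for batch in train_loader:
    optimizer.zero_grad()
    loss = model(**batch).loss        # seq2seq LM loss from input_ids/labels
    accelerator.backward(loss)        # handles mixed-precision details
    optimizer.step()
```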

AI-Guru011010@alien.top 1 points 1 year ago

Same, I got some decent work done on a GTX 1060 a few years back using CUDA and torch.