Machine Learning

this post was submitted on 10 Nov 2023

I have access to a single 80 GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size that will fit?
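
A rough back-of-the-envelope sketch of that calculation (not from the thread), assuming AdamW with bf16 mixed precision at roughly 16-18 bytes per parameter for weights, gradients, and optimizer states, and reserving some headroom for activations; the real limit depends heavily on batch size, sequence length, and activation checkpointing:

```python
# Back-of-the-envelope estimate of the largest trainable model on one GPU.
# Assumed accounting: 2 bytes (bf16 weights) + 2 bytes (bf16 grads)
# + 4 bytes (fp32 master weights) + 8 bytes (fp32 Adam m and v)
# ~= 16-18 bytes per parameter. Activations are NOT included; they depend
# on batch size, sequence length, and whether checkpointing is used.

GPU_MEMORY_GB = 80          # single 80 GB A100
BYTES_PER_PARAM = 18        # assumed rule of thumb, see comment above
ACTIVATION_HEADROOM = 0.3   # assumed: reserve ~30% of memory for activations

usable_bytes = GPU_MEMORY_GB * 1024**3 * (1 - ACTIVATION_HEADROOM)
max_params = usable_bytes / BYTES_PER_PARAM
print(f"rough upper bound: {max_params / 1e9:.1f}B parameters")  # ~3.3B here
```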

AI-Guru011010@alien.top · 10 months ago

With bfloat16 and FlashAttention you can fully pretrain a 200M-parameter encoder-decoder model on millions of data samples in as little as a couple of weeks. You'll need to focus on optimizing your workflow so that GPU utilization is the bottleneck; you don't want the GPU sitting idle waiting for data. I've also been able to train models with >650M parameters and a sequence length of 4096 on a single A100 using Hugging Face Accelerate, albeit much more slowly.
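
A minimal sketch of the kind of setup described above, assuming Hugging Face Accelerate with bf16 mixed precision and a DataLoader tuned (workers, pinned memory, prefetching) so the GPU isn't waiting on data. The GPT-2 config and the random-token dataset are placeholders for illustration only, not the commenter's actual model or data:

```python
# Minimal single-GPU pretraining sketch with Hugging Face Accelerate in bf16.
import torch
from torch.utils.data import DataLoader, Dataset
from accelerate import Accelerator
from transformers import AutoConfig, AutoModelForCausalLM

class RandomTokenDataset(Dataset):
    """Synthetic token sequences, only to make the sketch self-contained."""
    def __init__(self, vocab_size=50257, seq_len=1024, size=10_000):
        self.data = torch.randint(0, vocab_size, (size, seq_len))
    def __len__(self):
        return self.data.size(0)
    def __getitem__(self, idx):
        return self.data[idx]

accelerator = Accelerator(mixed_precision="bf16")

config = AutoConfig.from_pretrained("gpt2")           # GPT-like architecture
model = AutoModelForCausalLM.from_config(config)      # random init: from scratch
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Keep the GPU fed: several workers, pinned memory, prefetching.
loader = DataLoader(RandomTokenDataset(), batch_size=8, shuffle=True,
                    num_workers=8, pin_memory=True, prefetch_factor=4)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for input_ids in loader:
    outputs = model(input_ids=input_ids, labels=input_ids)  # causal-LM loss
    accelerator.backward(outputs.loss)
    optimizer.step()
    optimizer.zero_grad()
```

In recent transformers releases, FlashAttention can reportedly be enabled by passing attn_implementation="flash_attention_2" when building the model, provided the flash-attn package is installed; treat that kwarg as version-dependent.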