This post was submitted on 10 Nov 2023

Machine Learning


I have access to a single 80 GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size?

top 4 comments
[–] Consistent_Area9877@alien.top 1 points 10 months ago

I recently took the GenAI LLM course on Coursera. A basic rule of thumb from it: a 1B-parameter model can be trained on a SINGLE 80 GB A100 GPU in bfloat16 with room to spare.

Training a 1B-parameter model can consume up to ~40 GB of memory, so you can't really go to 2B params. But that also means you might be okay with about 1.5B without going over the 80 GB limit.
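As a back-of-the-envelope sketch of where estimates like that come from: the per-parameter byte counts below (bf16 weights and gradients plus fp32 master weights and Adam moments) and the activation fudge factor are assumptions, not figures from the course.

```python
# Rough memory estimate for mixed-precision (bf16) training with Adam.
# Assumed per-parameter state: 2 B (bf16 weights) + 2 B (bf16 grads)
# + 4 B (fp32 master weights) + 8 B (fp32 Adam moments) = 16 bytes,
# then a ~2.5x fudge factor for activations and temporary buffers.
def estimate_training_memory_gb(n_params: float,
                                state_bytes_per_param: float = 16.0,
                                overhead_factor: float = 2.5) -> float:
    return n_params * state_bytes_per_param * overhead_factor / 1e9

for n_params in (1e9, 1.5e9, 2e9):
    print(f"{n_params / 1e9:.1f}B params -> ~{estimate_training_memory_gb(n_params):.0f} GB")
# 1.0B params -> ~40 GB, 1.5B -> ~60 GB, 2.0B -> ~80 GB
```

The activation term is the part that swings most with batch size and sequence length, which is why 2B params lands right at the 80 GB limit in this estimate.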

[–] Ok-Equipment9840@alien.top 1 points 10 months ago

It depends on how many tokens you have.
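That presumably points at compute-optimal scaling (the Chinchilla result of roughly 20 training tokens per parameter). A quick sketch of that rule of thumb; the 20:1 ratio is the assumption here, and plenty of projects deliberately train well past it:

```python
# Chinchilla-style rule of thumb: compute-optimal training uses roughly
# 20 tokens per parameter (assumed ratio, after Hoffmann et al., 2022).
TOKENS_PER_PARAM = 20

def compute_optimal_params(n_tokens: float) -> float:
    return n_tokens / TOKENS_PER_PARAM

for n_tokens in (10e9, 30e9, 100e9):
    print(f"{n_tokens / 1e9:.0f}B tokens -> ~{compute_optimal_params(n_tokens) / 1e9:.2f}B params")
# 10B tokens -> ~0.50B params, 30B -> ~1.50B, 100B -> ~5.00B
```

On that ratio, a ~1.5B-parameter model (roughly what fits on the 80 GB A100 per the comment above) would want on the order of 30B training tokens.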

[–] karlwikman@alien.top 1 points 10 months ago

This question might come off as stupid, but it's really something I'm curious about:

I 100% see why someone would want to take a current state-of-the-art open model and fine-tune it on their own data. I don't see why someone would want to train their own model from scratch. Can you explain it?

[–] AI-Guru011010@alien.top 1 points 10 months ago

With bfloat16 and flash attention you can fully pretrain a 200M-parameter encoder-decoder model on millions of data samples in as little as a couple of weeks. You're going to have to really focus on optimizing your workflow so that you're GPU-bound; you don't want the GPU sitting around waiting for data. I've also been able to train models with >650M parameters and a sequence length of 4096 on a single A100 using Hugging Face Accelerate, albeit much more slowly.
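A minimal single-GPU sketch of the bf16 + Accelerate part of that setup. Since the OP asked about a GPT-like architecture, this uses a decoder-only GPT-2 config rather than the encoder-decoder setup mentioned here, and flash attention is left out because support depends on the model class and your transformers/flash-attn versions. The config sizes and the random-token dataset are placeholders for illustration, not anything from the comment:

```python
import torch
from torch.utils.data import DataLoader, Dataset
from accelerate import Accelerator
from transformers import GPT2Config, GPT2LMHeadModel

# Illustrative GPT-style config: 12 layers, d_model 1024, 16 heads,
# which lands around 200M parameters. Not a tuned architecture.
config = GPT2Config(n_layer=12, n_embd=1024, n_head=16)
model = GPT2LMHeadModel(config)

# Dummy random-token dataset so the sketch runs end to end;
# replace with a real tokenized corpus.
class RandomTokenDataset(Dataset):
    def __init__(self, n_samples=1024, seq_len=1024, vocab_size=config.vocab_size):
        self.data = torch.randint(0, vocab_size, (n_samples, seq_len))
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        ids = self.data[idx]
        return {"input_ids": ids, "labels": ids}

# num_workers/pin_memory help keep the GPU fed instead of waiting on data.
train_loader = DataLoader(RandomTokenDataset(), batch_size=8, shuffle=True,
                          num_workers=4, pin_memory=True)

accelerator = Accelerator(mixed_precision="bf16")  # bf16 autocast on the A100
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)

model.train()
for batch in train_loader:
    outputs = model(input_ids=batch["input_ids"], labels=batch["labels"])
    accelerator.backward(outputs.loss)
    optimizer.step()
    optimizer.zero_grad()
```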