this post was submitted on 10 Nov 2023

Machine Learning

I have access to a single 80GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size that will fit?

Consistent_Area9877@alien.top 1 point 1 year ago

I recently took the GenAI LLM course on Coursera. A rough rule of thumb from it: a 1B-parameter model can be trained on a SINGLE A100 80GB GPU in bfloat16 with room to spare.

I think training a 1B-parameter model can consume up to ~40GB of memory, so you can't really go to 2B params. But that also means you might be okay with ~1.5B without going over the 80GB limit.
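
If it helps to make that arithmetic concrete, here is a minimal sketch of the estimate, assuming Adam-style training with bf16 weights/gradients and fp32 optimizer states. The per-parameter byte counts and the activation multiplier are rough assumptions chosen to line up with the ~40GB-per-1B figure above, not numbers from the course:

```python
# Rough training-memory estimate for a dense transformer, assuming Adam-style
# training with bf16 weights/gradients and fp32 optimizer states. Activation
# memory (batch-size and sequence-length dependent) is folded into a single
# fudge factor rather than modeled explicitly.

def training_mem_gb(params_billion: float,
                    bytes_per_param: float = 16.0,
                    activation_factor: float = 2.5) -> float:
    """Approximate GPU memory (GB) needed to train a model of the given size.

    bytes_per_param ~ 2 (bf16 weights) + 2 (bf16 grads)
                      + 4 (fp32 master copy) + 8 (Adam moments) = 16.
    activation_factor is an assumed multiplier covering activations,
    temporary buffers, and CUDA context; tune it for your batch size.
    """
    return params_billion * bytes_per_param * activation_factor


def max_params_billion(gpu_mem_gb: float,
                       bytes_per_param: float = 16.0,
                       activation_factor: float = 2.5) -> float:
    """Invert the estimate: largest model (in billions of params) that fits."""
    return gpu_mem_gb / (bytes_per_param * activation_factor)


if __name__ == "__main__":
    for size in (1.0, 1.5, 2.0):
        print(f"{size:.1f}B params -> ~{training_mem_gb(size):.0f} GB")
    print(f"Rough ceiling on an 80GB A100: ~{max_params_billion(80):.1f}B params")
```

With these assumptions the ceiling comes out around 2B parameters, which matches the comment above. In practice activation memory depends heavily on batch size and sequence length, and techniques like gradient checkpointing or 8-bit optimizers can shift the numbers considerably.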