this post was submitted on 10 Nov 2023

Machine Learning

I have access to a single 80 GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size?
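For anyone landing here with the same question, a rough upper bound can be sketched with a back-of-envelope calculation. The numbers below are assumptions, not from the post: standard mixed-precision Adam training keeps fp16 params and grads (2 bytes each) plus fp32 master weights, momentum, and variance (4 bytes each), about 16 bytes per parameter, and some VRAM must be reserved for activations and framework overhead (the 20 GB reserve here is a guess that depends heavily on batch size, sequence length, and whether you use activation checkpointing).

```python
# Back-of-envelope estimate of the largest model trainable on a given GPU.
# Assumes mixed-precision Adam: fp16 params + fp16 grads + fp32 master
# weights + fp32 momentum + fp32 variance = 16 bytes per parameter.
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16

def max_trainable_params(vram_gb: float, activation_reserve_gb: float = 20.0) -> float:
    """Return a rough upper bound on parameter count, in billions.

    activation_reserve_gb is a hypothetical headroom figure for
    activations and framework overhead; tune it for your setup.
    """
    usable_bytes = (vram_gb - activation_reserve_gb) * 1024**3
    return usable_bytes / BYTES_PER_PARAM / 1e9

print(f"{max_trainable_params(80):.1f}B params")  # roughly 4B on an 80 GB A100
```

Techniques like activation checkpointing, 8-bit optimizers, or ZeRO-style offloading can push this bound considerably higher, so treat the result as an order-of-magnitude starting point.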

[–] karlwikman@alien.top 1 points 1 year ago

This question might come off as stupid, but it's really something I'm curious about:

I 100% see why someone would want to take a state-of-the-art open model and fine-tune it on their own data. I don't see why someone would want to train a model from scratch. Can you explain?