this post was submitted on 14 Nov 2023

Machine Learning

Can LLMs stack more layers than the largest models currently have, or is depth bottlenecked? Is it because the gradients can’t propagate properly back to the beginning of the network? Or because inference would be too slow?

If anyone could point me to a paper that discusses how stacking more layers scales, I would love to read it!

top 4 comments
[–] currentscurrents@alien.top 1 points 10 months ago (1 children)

They definitely can go deeper - with skip connections and normalization you can propagate gradients through any depth of architecture.
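
A minimal sketch of why this works, assuming PyTorch (the block name, widths, and depth here are illustrative, not from any particular model): the identity path in `x + f(norm(x))` gives the gradient a direct route past every layer, so depth alone doesn't make it vanish.

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """One pre-norm residual block: x + f(LayerNorm(x)).

    The skip connection means the Jacobian of the output w.r.t. the input
    always contains an identity term, so gradients can flow through an
    arbitrarily deep stack of these blocks.
    """
    def __init__(self, dim: int = 64, hidden: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(self.norm(x))

# Stack it very deep; gradients still reach the very first block's input.
model = nn.Sequential(*[PreNormBlock() for _ in range(200)])
x = torch.randn(4, 64, requires_grad=True)
model(x).sum().backward()
print(x.grad.abs().mean())  # non-vanishing gradient at the input
```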

Adding more layers isn't free, though: it requires more parameters and thus more compute. There's an optimal depth-to-width ratio for a given parameter count.
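
To make the parameter cost concrete, here's a rough back-of-the-envelope sketch using the common approximation of about 12·d_model² parameters per dense Transformer layer (the constant varies with the exact architecture, so treat it as an estimate):

```python
def approx_params(n_layers: int, d_model: int) -> int:
    # Rough rule of thumb for a dense Transformer layer:
    # ~4*d^2 for the attention projections + ~8*d^2 for the MLP.
    return 12 * n_layers * d_model ** 2

# Two ways to spend roughly the same ~1.2B parameter budget:
print(f"{approx_params(24, 2048):.2e}")  # shallower and wider
print(f"{approx_params(96, 1024):.2e}")  # 4x deeper at half the width
```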

[–] cstein123@alien.top 1 points 10 months ago (1 children)

Exactly the answer I was looking for, thank you!

[–] iantimmis@alien.top 1 points 10 months ago

It becomes very expensive compute-wise, but where we're actually running up against the limit is the scale of the data. People have discovered "scaling laws" (see the Chinchilla paper) that determine how big your model should be given the amount of data you have. We could go bigger, but there's no reason to use a multi-trillion-parameter model, for example, because it's just wasted capacity.
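
For a rough sense of what Chinchilla implies, here's a sketch using the popular ~20-tokens-per-parameter heuristic derived from that paper (the fitted coefficients in the paper differ slightly, so this is only an approximation):

```python
def chinchilla_optimal_params(token_budget: float, tokens_per_param: float = 20.0) -> float:
    """Compute-optimal parameter count under the ~20 tokens/parameter heuristic."""
    return token_budget / tokens_per_param

# With ~10 trillion training tokens, the heuristic points at ~500B parameters,
# so a multi-trillion-parameter model would be under-trained on that data.
print(f"{chinchilla_optimal_params(10e12):.2e}")
```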

[–] Brudaks@alien.top 1 points 10 months ago

The bottleneck is the total compute budget devoted to training. I'm quite certain that stacking a few more layers can be done and would have some benefit, but it may well be that spending the same extra compute on a larger context window, 'wider' layers, or simply more passes over the same data would help more than extra depth. And if the people training the very large models think so, they will do those other things instead of stacking more layers.
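
A tiny sketch of that trade-off, using the standard C ≈ 6·N·D approximation for training FLOPs (N = parameters, D = tokens; the numbers below are made up for illustration): at a fixed budget, parameters spent on extra layers come straight out of the tokens you can afford to train on.

```python
def tokens_affordable(flop_budget: float, n_params: float) -> float:
    # Training compute is roughly 6 * parameters * tokens.
    return flop_budget / (6 * n_params)

budget = 1e24  # fixed training compute, in FLOPs (hypothetical)
for n_params in (5e10, 7e10):  # e.g. the same model with extra layers stacked on
    print(f"{n_params:.0e} params -> {tokens_affordable(budget, n_params):.2e} tokens")
```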