Very interesting development, but I'm waiting for a more production-ready version. Having to set up a separate GitHub repo with a manual installation inside is not exactly nice. However, if this becomes fully compatible with the HuggingFace Hub, then this will be huge for simpler cases.
Wait, exponential in what?
Saving this for tomorrow's reading
As someone who writes CUDA code professionally, these are my two cents on the matter: The reported speed enhancements, particularly the claimed 117.83x speedup, might be somewhat misleading. Consider, for example, the comparison of CUDA speedups. The authors contrast their CUDA fast feedforward (CUDA FFF) implementation with their own, highly unoptimized CUDA feedforward (CUDA FF) implementation.
In an effort to ensure a fair comparison, they kept the same code structure for both CUDA FFF and CUDA FF. However, this means the CUDA FF baseline does not use any shared memory and suffers significant memory divergence, because threadIdx.x is used to index the outer dimensions of the matrices.
the claimed 117.83x speedup, might be somewhat misleading
If you compare the best implementation of FFF on CUDA to the best implementation of FF on CUDA, then the speed-up they got is 3.15x:
See page 5, "Further comparisons": "On GPU, the PyTorch BMM implementation of FFF delivers a 3.15x speedup over the fastest (Native fused) implementation of FF."
The 40x that u/lexected mentioned seems to apply only when comparing to an apparently much slower FF version.
It's a pretty cool paper regardless, as far as I can tell from skimming it. But it could benefit from stating more clearly what has been achieved.
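To make the baseline point concrete, here's a toy PyTorch illustration (CPU, made-up dimensions, not the paper's CUDA benchmark): the same FF layer written once as a deliberately naive per-sample loop and once as fused batched matmuls. The ratio between the two looks impressive, but it says nothing about either version being fast in absolute terms, which is exactly why the best-vs-best 3.15x is the number to look at.

```python
import time
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model, d_ff, batch = 768, 3072, 256
x = torch.randn(batch, d_model)
w1, b1 = torch.randn(d_ff, d_model) / d_model**0.5, torch.zeros(d_ff)
w2, b2 = torch.randn(d_model, d_ff) / d_ff**0.5, torch.zeros(d_model)

def ff_naive(x):
    # Deliberately unoptimized baseline: one matrix-vector product per sample.
    return torch.stack([w2 @ F.gelu(w1 @ row + b1) + b2 for row in x])

def ff_fused(x):
    # The same layer as two batched (fused) matmuls.
    return F.linear(F.gelu(F.linear(x, w1, b1)), w2, b2)

def bench(fn, reps=20):
    fn(x)  # warm-up
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(x)
    return (time.perf_counter() - t0) / reps

assert torch.allclose(ff_naive(x), ff_fused(x), atol=1e-4)
t_naive, t_fused = bench(ff_naive), bench(ff_fused)
print(f"naive: {t_naive*1e3:.1f} ms  fused: {t_fused*1e3:.1f} ms  "
      f"'speedup': {t_naive/t_fused:.1f}x")  # a big ratio, but only because the baseline is weak
```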
It's probably just my own internal bias but I feel like this last week of chaos in this space has resulted in a sudden surge in cool new ideas percolating in the OSS/localllama/ml spaces.
Thanks for sharing this!
78x speedup over the optimized baseline feedforward implementation
So they are 78x faster than MKL using the same number of cores?
I think DistilBERT needs to be in Table 2, since it's their most direct competitor: it trades off accuracy for speed, and requires extra training effort, like their approach.
Still, if they are about 20x faster than DistilBERT using cuBLAS, that's pretty amazing.
has 4095 neurons but selectively uses only 12 (0.03%) for inference
An extra 0 in there: 12/4095 ≈ 0.29%, so that should read 0.3%, not 0.03%.
Maybe I missed it, but how did they select which neurons should be used in each layer? Max values after the activation function? Something else? And did they fix the number of neurons to be used in advance, e.g. to 12, so just taking the 12 largest values?
The output of each parent neuron is basically treated as a logit, so no activation is necessary. At inference, a logit below zero corresponds to choosing one child node and a logit above zero to choosing the other child node. In their deepest model there are 11 such consecutive choices to be made, a descent down the binary tree.
The specifics of training are discussed in the authors' previous paper. All nodes are computed during training, so there's no speed-up at that stage compared to a vanilla dense layer.
The number of neurons to be used is fixed in advance; basically, it's determined by the shape of the tree in which the neurons are organised.
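To make that concrete, here's a minimal PyTorch sketch of the inference pass as I read the description above. The heap-ordered weight layout, the GeLU on each visited node's output, and all the names are my own assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def fff_infer(x, w_in, w_out, depth=11):
    """Single-sample fast-feedforward inference sketch (hypothetical layout).

    x     : (d_model,)          input vector
    w_in  : (n_nodes, d_model)  per-node input weights, nodes in heap order
    w_out : (n_nodes, d_model)  per-node output weights
    depth : number of branching decisions; the tree has 2**(depth+1) - 1 nodes
    """
    y = torch.zeros_like(x)
    node = 0                                 # root; children of node i are 2i+1 and 2i+2
    for level in range(depth + 1):           # 12 nodes visited when depth == 11
        logit = torch.dot(x, w_in[node])     # the node's raw output, treated as a logit
        y = y + F.gelu(logit) * w_out[node]  # only the visited nodes contribute to the output
        if level < depth:                    # the sign of the logit picks the child to descend into
            node = 2 * node + (1 if logit > 0 else 2)
    return y

# 2**12 - 1 = 4095 nodes in total, but only 12 are ever touched per input.
depth, d_model = 11, 768
n_nodes = 2 ** (depth + 1) - 1
w_in = torch.randn(n_nodes, d_model) / d_model ** 0.5
w_out = torch.randn(n_nodes, d_model) / d_model ** 0.5
y = fff_infer(torch.randn(d_model), w_in, w_out, depth)
```

So the fixed count of 12 just falls out of the tree depth rather than being chosen per input.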