
The system is quite broken; one could say that, in its present state, it almost discourages genuine novelty of thought.

But it's imperfect, first and foremost, because the people involved are imperfect. Reviewing is often a job assigned to the lowest performers in research groups, or traded away by the highest performers (constantly on big-tech internships, building startups or open-source models on the side) to colleagues with a somewhat more laid-back attitude to research excellence. You can submit a bad review and it will not come back to bite you, but in the age of reproducibility, a messed-up experiment or a poorly written or plainly incorrect paper that slips through the review system could be your end.

The idea is that you enter the publishing game at the beginning of your PhD and emerge seeing through, and standing above, the game once you've graduated. After all, you first have to master the rules of the game to be able to propose meaningful changes. It is just that, once you're done, you may have far stronger incentives to switch to industry or consultancy and never care about the paper-citation game again.

 

TL;DR: Organize your neurons into a tree to get 78x faster inference (theoretical limit is 341x).

This was demonstrated on BERT-base, where this change preserved 96% of its downstream GLUE performance. For a quick comparison, DistilBERT offers 1.6x acceleration while preserving 97% of GLUE performance.

This is a HuggingFace Featured Paper from 11/21/2023.

Paper: https://arxiv.org/abs/2311.10770

Code: https://github.com/pbelcak/UltraFastBERT

Model: https://huggingface.co/pbelcak/UltraFastBERT-1x11-long

Abstract:

Language models only really need to use an exponential fraction of their neurons for individual inferences.

As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs).

While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference.

We publish our training code, benchmarking setup, and model weights.
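
To make the tree trick concrete: 4095 is the number of nodes in a full binary tree of depth 11, and a single root-to-leaf descent touches exactly 12 of them, which is also where the theoretical 341x ceiling comes from (4095 / 12 ≈ 341). Below is a minimal PyTorch sketch of that descent under a simplified one-neuron-per-tree-node reading of the paper; the class and parameter names are mine, not the authors' API, and the real implementation is in the linked repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FastFeedForwardSketch(nn.Module):
    """Illustrative sketch: neurons arranged in a full binary tree, where each
    token evaluates only the neurons on one root-to-leaf path (12 of 4095 at
    depth 11). Not the authors' implementation."""

    def __init__(self, hidden_dim: int, depth: int = 11):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** (depth + 1) - 1                  # depth 11 -> 4095 nodes
        self.w_in = nn.Parameter(0.02 * torch.randn(n_nodes, hidden_dim))
        self.w_out = nn.Parameter(0.02 * torch.randn(n_nodes, hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, hidden_dim); every row descends the tree independently.
        node = torch.zeros(x.shape[0], dtype=torch.long, device=x.device)
        y = torch.zeros_like(x)
        for _ in range(self.depth + 1):                 # 12 neurons per token
            logits = (x * self.w_in[node]).sum(dim=-1)              # (batch,)
            y = y + F.gelu(logits).unsqueeze(-1) * self.w_out[node]
            node = 2 * node + 1 + (logits > 0).long()   # pick a child by sign
        return y


layer = FastFeedForwardSketch(hidden_dim=768, depth=11)  # 768 = BERT-base width
print(layer(torch.randn(4, 768)).shape)                  # torch.Size([4, 768])
```

Each token thus performs 12 input/output dot products instead of 4095. Per the abstract, the gap between that 341x ceiling and the reported 78x is an implementation matter: no truly efficient primitive for this kind of data-dependent, conditional execution exists yet.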

This exponential acceleration was achieved on a 180M-parameter BERT model. Just imagine how amazing the speedup would be on a multi-billion-parameter model such as LLaMA, if the tree trick (i.e. "fast feedforward networks") continues to scale up to larger layer sizes...
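
For reference, the 78x (CPU) and 40x (PyTorch) figures quoted in the abstract are measured against a standard dense feedforward block, in which every token is multiplied against every neuron. A toy stand-in looks like this (the width is chosen here to match the 4095 tree nodes in the sketch above; the paper's exact baseline configuration may differ):

```python
import torch
import torch.nn as nn

# Dense feedforward block of the kind FFFs replace: all 4095 neurons fire
# for every token, versus 12 per token in the tree sketch above.
dense_ffn = nn.Sequential(
    nn.Linear(768, 4095),
    nn.GELU(),
    nn.Linear(4095, 768),
)
print(dense_ffn(torch.randn(4, 768)).shape)  # torch.Size([4, 768])
```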