The ideas in the paper seem very similar to the concept from an NMT paper, "Deep Encoder, Shallow Decoder." Surprised to see no citation in the related work.
Very cool. Is this still compatible with faster-whisper / WhisperX, and if not, how does the speed-up compare? WhisperX is already way more than 6x faster using the same v2 models (and still multilingual).
If we can stack these improvements - awesome!
Will you release your fine-tuning framework so others can fine-tune on their own data?
Does the improvement itself depend on large-scale data, or can we achieve positive results with smaller fine-tuning sets via your approach?
Why did you bury the RTF results in the appendix? These are way more interesting than the relative latency compared to vanilla Whisper!
One thing I would question: an A100 GPU might be 'typical' for a lot of cloud offerings, like what you see coming from DeepGram and the like, but it's definitely not typical of what you'd find or have access to for any sort of on-site client. There, CPU is still king.
Have you run any experiments with CPU-only inference? What does that look like? You can still achieve 0.01x-0.06x RT with CPU-only inference and basically the same accuracy with a fine-tuned model using the latest production releases from K2/icefall. This looks like it's getting closer, but I'd still be inclined to recommend using this distilled model to pre-generate a bunch of pseudo-labeled training data for a smaller, dedicated K2/sherpa production system for anything on-site.
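For anyone weighing that route, here is a minimal sketch of the pseudo-labeling step, assuming the distilled checkpoint ends up usable through the standard `transformers` ASR pipeline; the model id, paths, and TSV output format are placeholders, not anything taken from the paper:

```python
# Sketch: use the distilled model to pseudo-label in-domain audio,
# then train a smaller on-site system (e.g. a K2/icefall recipe) on the output.
# The model id below is an assumed Hub name -- substitute whatever the authors release.
from pathlib import Path

from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",  # assumed checkpoint name
    device=-1,            # CPU-only, matching the on-site scenario above
    chunk_length_s=30,    # long-form audio is chunked into 30 s windows
)

audio_dir = Path("unlabeled_audio")  # placeholder directory of wav files
with open("pseudo_labels.tsv", "w", encoding="utf-8") as out:
    for wav in sorted(audio_dir.glob("*.wav")):
        text = asr(str(wav))["text"].strip()
        out.write(f"{wav}\t{text}\n")
```

The resulting path/transcript pairs could then be fed into whatever data prep the smaller production recipe expects.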
Hype!
Not sure if anyone else has read the paper, but they select pseudo labels via WER, which means they already have the ground truth. Why use the pseudo labels as the training target rather than the ground truth itself?
I suspect they ran that experiment but didn't report it in the paper.
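For reference, the filtering step being questioned compares each pseudo label against the ground-truth transcript and discards utterances whose WER exceeds a threshold. A minimal sketch of that idea, assuming the `jiwer` package; the threshold and toy examples are illustrative, not values from the paper:

```python
# Sketch of WER-based pseudo-label filtering: keep a pseudo-labelled utterance
# only if it agrees closely enough with the ground-truth transcript.
from jiwer import wer

WER_THRESHOLD = 0.10  # illustrative cutoff, not the paper's value

def filter_pseudo_labels(examples, threshold=WER_THRESHOLD):
    """Drop examples whose pseudo label diverges too much from the reference."""
    kept = []
    for ex in examples:
        error = wer(ex["reference"], ex["pseudo_label"])
        if error <= threshold:
            kept.append(ex)
    return kept

examples = [
    {"reference": "the cat sat on the mat", "pseudo_label": "the cat sat on the mat"},
    {"reference": "turn left at the light", "pseudo_label": "turn left at night"},
]
print(len(filter_pseudo_labels(examples)))  # -> 1 with the 0.10 threshold
```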