Why did you bury the RTF results in the appendix? These are way more interesting than relative latency compared to vanilla Whisper!
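For anyone not steeped in ASR benchmarking: real-time factor is just decoding wall-clock time divided by audio duration, so lower is better and 0.05x RT means an hour of speech decodes in about three minutes. A minimal sketch of how you'd measure it, with `transcribe` and `audio` as stand-in names for whatever model call and waveform you're actually timing:

```python
import time

def real_time_factor(transcribe, audio, audio_duration_s):
    """RTF = wall-clock decoding time / duration of the decoded audio.
    `transcribe` and `audio` are placeholders for your own model call and
    loaded waveform; only the timing pattern matters here."""
    start = time.perf_counter()
    transcribe(audio)
    elapsed = time.perf_counter() - start
    return elapsed / audio_duration_s
```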
One thing I would question: an A100 GPU might be 'typical' for cloud offerings like those from DeepGram and the like, but it's definitely not typical of what you'd find or have access to at any sort of on-site client. There, CPU is still king.
Have you run any experiments with CPU-only inference? What does that look like? Here you can still achieve 0.01x-0.06x RT with CPU-only inference, and basically the same accuracy, with a fine-tuned model using the latest production releases from K2/icefall. This looks like it's getting closer, but I'd still be inclined to recommend using this distilled model to pre-generate a bunch of pseudo-labeled training data for a smaller, dedicated K2/sherpa production system for anything on-site.
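To make that last suggestion concrete, here's a rough sketch of the pseudo-labeling step, assuming the distilled model is available as a Hugging Face checkpoint usable through the transformers ASR pipeline. The checkpoint name, directory layout, and manifest format below are illustrative assumptions, not something from the paper; you'd convert the output into whatever cut/manifest format your K2/icefall recipe expects.

```python
# Run the distilled model once over unlabeled in-domain audio and dump
# (audio, transcript) pairs that a smaller K2/icefall recipe can train on.
import json
from pathlib import Path

from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v2",  # assumed checkpoint name
    device=-1,           # CPU-only, to mirror the on-site constraint
    chunk_length_s=30,   # chunk long-form audio into 30 s windows
)

manifest = []
for wav in sorted(Path("unlabeled_audio").glob("*.wav")):  # illustrative path
    result = asr(str(wav))
    manifest.append({"audio_filepath": str(wav), "text": result["text"]})

# Write a simple JSONL manifest; convert downstream to the format
# your K2/icefall training recipe consumes.
with open("pseudo_labels.jsonl", "w") as f:
    for row in manifest:
        f.write(json.dumps(row) + "\n")
```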