this post was submitted on 16 Nov 2023
Machine Learning
TL;DR: The more constraints on the model, the more time you should spend analyzing your data and formulating your problem.
I'll agree with the top comment. I've also dealt with a problem at work where we were trying to perform product-name classification for our e-commerce product. The constraint was that we couldn't afford anything too large or anything that would increase infrastructure costs (i.e., if possible, we didn't want to use any more GPU compute than we already were).
It turns out that extensive EDA was what saved us. We were able to come up with a string-matching algorithm sophisticated enough that it achieved high precision with practically no latency concerns. It might not be as flexible as something like BERT, but it got the job done.