[–] ucals@alien.top 1 points 11 months ago

That’s easy: model stacking/ensembling. Nearly every winning Kaggle solution relies on it.

Right now you have a single model, built with a single technique.

There are several other classic approaches to NLP classification: Naive Bayes, SVMs, CBOW-style embeddings, etc.

The idea behind model stacking: train several different models, each using a different method, then train a meta-model that takes the predictions of the individual models as its input features.
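To make it concrete, here's a minimal sketch with scikit-learn; the dataset is just a stand-in for whatever labelled text you have, and the base models mirror the classical approaches mentioned above:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder dataset: any list of texts + labels works the same way.
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X, y = data.data, data.target

# Base models: each one uses a different classical technique.
base_models = [
    ("nb", make_pipeline(TfidfVectorizer(), MultinomialNB())),
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
]

# Meta-model: trained on the out-of-fold predictions of the base models.
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(),
    cv=5,
)

# Compare this score against any single base model on its own.
print(cross_val_score(stack, X, y, cv=3).mean())
```

In practice you'd swap in your own dataset, add more diverse base models (gradient boosting, a fine-tuned transformer, etc.), and tune the meta-model, but the structure stays the same.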

Done well, this usually gives a significant boost to your score; it's how a lot of Kaggle competitions are won.