For some datasets it's simply not possible to reach 90%. Just figure out reasonable ranges for your most important hyperparameters (you don't have to optimize ALL of them) and run a grid search over those values. That's all there is to it.
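A minimal sketch of what that can look like with scikit-learn's GridSearchCV, assuming an SVM and made-up ranges for C and gamma (swap in whatever model and hyperparameters actually matter for your problem):

```python
# Minimal grid-search sketch (illustrative model and ranges, not a recipe).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your dataset

pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {
    "svc__C": [0.1, 1, 10, 100],           # a few reasonable values, not an exhaustive sweep
    "svc__gamma": ["scale", 0.01, 0.001],
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("best cross-val accuracy:", search.best_score_)
```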
As another comment said, you can use grid search, but there’s a more efficient approach called Bayesian optimisation. There are many algorithms that implement it. A personal favourite library of mine, Optuna, does it without the user having to think about the underlying algorithm. (It uses a tree-structured Parzen estimator by default; if you want to get into the details you can look that up.)
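A rough sketch of what that might look like with Optuna, again assuming an SVM and illustrative search ranges (the default sampler is the Parzen-estimator one mentioned above):

```python
# Rough Optuna sketch (illustrative model and ranges).
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your dataset

def objective(trial):
    # Optuna samples the hyperparameters; its default TPE sampler decides what to try next.
    C = trial.suggest_float("C", 1e-2, 1e2, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e0, log=True)
    model = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
    return cross_val_score(model, X, y, cv=5).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```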
It’s wise to get a feel for the distribution of the data, so plot feature against feature to try to identify what should contribute to separability. Classification demands separable data. Also consider testing one-dimensional feature sets, since that’s your worst case: if a single feature does no better than random guessing, you can’t expect it to help much when combined with other features.
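One way to do both checks, sketched with pandas and scikit-learn on a placeholder dataset: a pairwise scatter plot to eyeball separability, and each single feature scored against a chance-level baseline:

```python
# Sketch: eyeball pairwise separability, then baseline each single feature against chance.
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import scatter_matrix
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = load_iris()  # stand-in for your dataset
df = pd.DataFrame(data.data, columns=data.feature_names)

# Feature-vs-feature scatter plots, coloured by class, to spot separable pairs.
scatter_matrix(df, c=data.target, figsize=(8, 8))
plt.show()

# Worst case: one feature at a time, compared against a chance-level baseline.
chance = cross_val_score(DummyClassifier(strategy="most_frequent"),
                         data.data, data.target, cv=5).mean()
for i, name in enumerate(data.feature_names):
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            data.data[:, [i]], data.target, cv=5).mean()
    print(f"{name}: {score:.2f} (chance ~ {chance:.2f})")
```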
You might or might not benefit from a kernel, some non-linear transformations and feature engineering before concentrating on hyperparameters.
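A cheap way to find out is to compare a linear model, a non-linear kernel, and some engineered features before doing any tuning at all; a sketch on a toy dataset:

```python
# Sketch: does a non-linear kernel or engineered features help at all?
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)  # stand-in data

candidates = {
    "linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "RBF SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "poly features + linear SVM": make_pipeline(
        StandardScaler(), PolynomialFeatures(degree=2), SVC(kernel="linear")),
}
for name, model in candidates.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```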
Sometimes removing a feature improves the result. Make sure you normalise when features are on radically different scales.
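Both are quick to check; for example (placeholder dataset and model, the scores are only for comparison):

```python
# Sketch: effect of scaling, and of dropping one feature at a time.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # features here have very different scales

print("raw:   ", cross_val_score(SVC(), X, y, cv=5).mean())
print("scaled:", cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=5).mean())

# Drop each feature in turn and see whether the score moves.
for i in range(X.shape[1]):
    X_drop = np.delete(X, i, axis=1)
    score = cross_val_score(make_pipeline(StandardScaler(), SVC()), X_drop, y, cv=5).mean()
    print(f"without feature {i}: {score:.3f}")
```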
And when you finally go for it on the whole dataset, resist the urge to peek at the held-out data. Ultimately you are looking for a model that is not overfitting and has a chance to perform on unseen data (of the same type).
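In practice that means splitting off a test set up front, doing all plotting, feature selection and tuning on the training portion only, and touching the held-out split exactly once at the very end. A minimal sketch of that workflow (same illustrative model as above):

```python
# Sketch: hold out a test set up front and evaluate on it exactly once.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# All exploration and tuning uses the training split only (cross-validation inside).
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                      {"svc__C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

# The held-out split is touched once, at the very end.
print("final held-out accuracy:", search.score(X_test, y_test))
```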