A couple of things to break down here.
A small note first: you call them "parameters", but we'd normally call those "features" (in ML, "parameters" usually means the weights the model learns).
Your two questions are pretty similar:
Q1. Is it better to add more features or fewer features?
Q2. Is it better to have a more complex/larger model or a simpler/smaller one (e.g. a bigger vs. smaller neural network)?
The answer to both is: it depends!
When you add more features and make your model larger/more complex, it becomes able to capture more complex patterns, which can be beneficial or harmful!
You should read up on underfitting vs. overfitting (the bias-variance trade-off). Generally speaking, you can reduce underfitting by adding features and increasing model complexity, but that usually comes at the cost of more overfitting.
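To make that concrete, here's a minimal sketch of the trade-off (scikit-learn on made-up synthetic data, so the models and numbers are purely illustrative): training error keeps falling as the polynomial degree grows, while validation error eventually turns back up.

```python
# Illustrative sketch of underfitting vs. overfitting on synthetic data.
# The data, degrees, and split here are arbitrary choices for the demo.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)  # noisy target

# simple train/validation split
X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

for degree in [1, 3, 10, 20]:
    # higher degree = more features = more model complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  val MSE={val_mse:.3f}")
```

Degree 1 underfits (high error on both sets), while very high degrees overfit (low training error, rising validation error).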
The question then becomes: does the reduction in underfitting outweigh the increase in overfitting?
Usually the only way to know for sure is to test both approaches on a validation set and pick the model and feature set that performs best, as in the sketch below.
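A rough sketch of that comparison (again scikit-learn; the dataset and the candidate feature sets are hypothetical placeholders):

```python
# Hedged sketch: compare candidate feature sets by validation accuracy.
# The feature indices below are made-up stand-ins for your real features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)

candidate_feature_sets = {
    "small": [0, 1, 2],         # a few features (hypothetical choice)
    "all":   list(range(20)),   # every available feature
}

scores = {}
for name, cols in candidate_feature_sets.items():
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:, cols], y_train)
    scores[name] = accuracy_score(y_val, model.predict(X_val[:, cols]))
    print(f"{name}: validation accuracy = {scores[name]:.3f}")

print("best feature set:", max(scores, key=scores.get))
```

Whichever feature set (or model size) wins on the validation data is the one you'd carry forward; if you don't have much data, cross-validation is a more reliable version of the same idea.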