There are actually 3 datasets in traditional NN training, though in practice people often forgo the third: training, validation, and testing. You should split your data at the very beginning, before letting any network train on it. Training data is what the network sees when it updates its weights: a batch is run, the loss computed, and backpropagation done with that loss to update the weights.
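For concreteness, here's a minimal sketch of that split and one pass over the training data, assuming PyTorch; the dataset, sizes, model, and hyperparameters are all made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Hypothetical dataset: 1,000 samples, 20 features, 2 classes
X, y = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
dataset = TensorDataset(X, y)

# Split once, up front, before any training happens
train_set, val_set, test_set = random_split(dataset, [700, 150, 150])

model = torch.nn.Linear(20, 2)          # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# One pass over the training set: run a batch, compute the loss,
# backpropagate, update the weights
for xb, yb in DataLoader(train_set, batch_size=32, shuffle=True):
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()
```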
Then, usually after an epoch (which may or may not be the whole training set, depending on how your data works), you run validation. This is solely a score to keep track of, to prove you aren't overfitting and to find a good stopping point. Once the validation score dips or plateaus, people often stop training.
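Continuing the same hypothetical setup, a validation pass after each epoch with a simple patience-based early stop might look like this (the patience of 3 is an arbitrary choice):

```python
best_val_loss, patience, bad_epochs = float("inf"), 3, 0

for epoch in range(100):
    # ... training loop from above runs here each epoch ...

    # Validation: no weight updates, just a score to monitor
    model.eval()
    with torch.no_grad():
        val_loss = sum(
            loss_fn(model(xb), yb).item()
            for xb, yb in DataLoader(val_set, batch_size=32)
        )
    model.train()

    # Stop once validation hasn't improved for `patience` epochs
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```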
You still made a decision based on validation, though, so validation accuracy isn't a perfectly reportable score. That's what testing is for: once you've run all your models and picked the best one, you run the test set, and that's as unbiased a score as you can get from a given dataset.
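And once model selection is done, the held-out test set gets exactly one final pass, again continuing the sketch above:

```python
# Test: run once, after all training and model selection is finished
model.eval()
correct = total = 0
with torch.no_grad():
    for xb, yb in DataLoader(test_set, batch_size=32):
        correct += (model(xb).argmax(dim=1) == yb).sum().item()
        total += yb.numel()
print(f"Test accuracy: {correct / total:.3f}")
```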
Stuff works differently when you aren’t running a supervised classification task, but that’s a class for a different day.