Machine Learning

Hello,

I'm curious: when evaluating a neural network on both the training and validation data, do you calculate the accuracy across the entire dataset, or compute it per batch and then average over the batches?
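
For concreteness, here's roughly what I mean by the two options (a rough sketch assuming a PyTorch-style classifier and DataLoader; `model` and `loader` are just placeholders):

```python
import torch

# Option A: accuracy over the entire dataset (count correct across all batches)
def dataset_accuracy(model, loader, device="cpu"):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.size(0)
    return correct / total

# Option B: accuracy per batch, then averaged over batches
def batch_averaged_accuracy(model, loader, device="cpu"):
    model.eval()
    accs = []
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=1)
            accs.append((preds == y).float().mean().item())
    return sum(accs) / len(accs)
```

As far as I can tell, the two only agree exactly when every batch has the same size, since Option B weights each batch equally regardless of how many samples it contains.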

MrFlufypants@alien.top 1 points 10 months ago

There are actually 3 datasets in a traditional NN training setup, though in practice people often forgo the third: training, validation, and testing. You should split your data at the beginning, before any network trains on it. Training data is what the network sees when it updates its weights: a batch is run, the loss is computed, and backpropagation of that loss updates the weights.
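
Not claiming this is exactly how anyone's real pipeline looks, but here's a minimal PyTorch sketch of that split plus a single training step; the toy dataset, linear model, and 80/10/10 ratio are just stand-ins:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy stand-ins so the sketch runs end to end; swap in your real data and model
full_dataset = TensorDataset(torch.randn(1000, 20), torch.randint(0, 3, (1000,)))
model = torch.nn.Linear(20, 3)

# Split once, up front, before any network trains on the data
n = len(full_dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    full_dataset, [n_train, n_val, n - n_train - n_val]
)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
test_loader = DataLoader(test_set, batch_size=64)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step: run a batch, compute the loss, backpropagate, update weights
def train_step(x, y):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```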

Then, usually after an epoch (which may or may not be a full pass over the training set, depending on how your data works), you run validation. This is solely a score you track to check you aren't overfitting and to find a good stopping point. Once validation accuracy dips or plateaus, people often stop training.
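
Continuing the same hypothetical setup (and reusing an accuracy helper like the dataset_accuracy one sketched in the question), the epoch/validation/early-stopping loop might look something like this; the patience of 3 epochs is an arbitrary choice:

```python
best_val_acc, epochs_without_improvement, patience = 0.0, 0, 3

for epoch in range(100):
    # Training: the only data the weights are ever updated on
    for x, y in train_loader:
        train_step(x, y)

    # Validation after each epoch: tracked only to watch for overfitting
    val_acc = dataset_accuracy(model, val_loader)

    # Stop once validation accuracy dips or plateaus for `patience` epochs
    if val_acc > best_val_acc:
        best_val_acc, epochs_without_improvement = val_acc, 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break
```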

You still made a decision based on validation, though, so validation accuracy isn't a perfectly reportable score. That's what testing is for: once you've run all your models and picked the best one, you evaluate on the test set, and that's as unbiased a score as you can get from a given dataset.
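
Under the same assumptions, that last step is just a single pass over the held-out test set, run once after all model selection is done:

```python
# Run exactly once, after the best model has been chosen;
# the test set never influenced training or model selection
test_acc = dataset_accuracy(model, test_loader)
print(f"test accuracy: {test_acc:.3f}")
```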

Stuff works differently when you aren’t running a supervised classification task, but that’s a class for a different day.