Machine Learning

Hello,

I'm curious: when evaluating a neural network on both the training and validation data, do you calculate the accuracy over the entire dataset at once, or per batch and then average across the batches?

ObjectiveNewt333@alien.top · 10 months ago

You can calculate your metrics (accuracy, loss, etc.) at each batch to track training, but typically the model's performance at each epoch (one full pass through the training set) is what people go by. Then, after each epoch, you should also iterate through the entire validation set so you can track how well the model is generalizing. Lastly, you may have a separate test set on which to calculate your metrics after training.
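
For what it's worth, here's a rough sketch of what that loop can look like. I'm assuming PyTorch, and all the names (`model`, `train_loader`, `val_loader`, `loss_fn`, `optimizer`) are made-up placeholders for your own objects:

```python
import torch

def run_epoch(model, loader, loss_fn, optimizer=None, device="cpu"):
    """One full pass over `loader`; trains if an optimizer is given, otherwise just evaluates."""
    training = optimizer is not None
    model.train(training)
    total_loss, total_correct, total_seen = 0.0, 0, 0
    with torch.set_grad_enabled(training):
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            logits = model(inputs)
            loss = loss_fn(logits, targets)
            if training:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            # Per-batch bookkeeping, weighted by batch size so the epoch
            # numbers come out right even if the last batch is smaller.
            batch_size = targets.size(0)
            total_loss += loss.item() * batch_size
            total_correct += (logits.argmax(dim=1) == targets).sum().item()
            total_seen += batch_size
    return total_loss / total_seen, total_correct / total_seen

# After each training epoch, run the entire validation set the same way:
# train_loss, train_acc = run_epoch(model, train_loader, loss_fn, optimizer)
# val_loss, val_acc = run_epoch(model, val_loader, loss_fn)  # no optimizer -> eval only
```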

Evaluating on the test set last is considered good practice in ML, but often only a validation set is available, and test sets are often withheld privately for competitions and the like, depending on the type of data. I suppose it's inevitable that people end up inadvertently validating their techniques against the test set.

Also, be careful when averaging your batch metrics to get epoch metrics: the last batch often has a different size than the rest, so a plain average of per-batch values introduces an error, though it may be small. Weight each batch by its size instead.
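
To make that last point concrete: a plain mean over batches over-weights a small final batch, while weighting each batch by its size gives the same number as computing accuracy over the whole epoch. The figures below are made up for illustration:

```python
# Per-batch (accuracy, batch_size) pairs from one epoch, where the
# last batch is smaller than the others.
batch_stats = [(0.75, 64), (0.80, 64), (0.90, 10)]

# Naive mean: every batch counts equally, so the tiny last batch
# pulls the number around more than it should.
naive = sum(acc for acc, _ in batch_stats) / len(batch_stats)

# Size-weighted mean: equivalent to accuracy over all samples in the epoch.
weighted = sum(acc * n for acc, n in batch_stats) / sum(n for _, n in batch_stats)

print(f"naive={naive:.4f}, weighted={weighted:.4f}")  # naive=0.8167, weighted=0.7841
```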

MrFlufypants@alien.top · 10 months ago

There are actually 3 datasets in a traditional NN training setup, though in practice people often forgo the third: training, validation, and testing. You should split your data at the beginning, before any networks train on it. Training data is what the network sees when it updates its weights: a batch is run, the loss is computed, and backpropagation with that loss updates the weights.
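
As a sketch of what that up-front split can look like (assuming PyTorch; the dummy tensors and the 80/10/10 ratio are just placeholders for your own data and preferences):

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

# Dummy data standing in for a real dataset: 1000 samples, 20 features, 3 classes.
full_dataset = TensorDataset(torch.randn(1000, 20), torch.randint(0, 3, (1000,)))

# Split once, before any network trains on anything.
n = len(full_dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
n_test = n - n_train - n_val

generator = torch.Generator().manual_seed(0)  # fixed seed so the split is reproducible
train_set, val_set, test_set = random_split(
    full_dataset, [n_train, n_val, n_test], generator=generator
)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
test_loader = DataLoader(test_set, batch_size=64)
```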

Then, usually after an epoch (which may or may not be one full pass through the training set, depending on how your data works), you run validation. This is solely a score to keep track of, to show you aren't overfitting and to find a good stopping point. Once validation accuracy dips or plateaus, people often stop training.
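
A rough early-stopping sketch along those lines (PyTorch assumed for the checkpointing; `run_train_epoch` and `run_val_epoch` are placeholders for your own per-epoch functions, and the patience value is arbitrary):

```python
import copy

def train_with_early_stopping(model, run_train_epoch, run_val_epoch,
                              max_epochs=100, patience=5):
    """Stop once validation accuracy hasn't improved for `patience` epochs."""
    best_acc, best_state, epochs_since_best = float("-inf"), None, 0
    for _ in range(max_epochs):
        run_train_epoch(model)           # updates the weights
        val_acc = run_val_epoch(model)   # score only, no weight updates
        if val_acc > best_acc:
            best_acc, epochs_since_best = val_acc, 0
            best_state = copy.deepcopy(model.state_dict())  # keep the best checkpoint
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break  # validation has dipped/stalled long enough
    model.load_state_dict(best_state)  # restore the best weights, not the last ones
    return best_acc
```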

You still made a decision based on validation, though, so validation accuracy isn't a perfectly reportable score. That's what the test set is for: once you've run all your models and picked the best one, you evaluate on the test set, and that's as unbiased a score as you can get for a given dataset.

Stuff works differently when you aren’t running a supervised classification task, but that’s a class for a different day.