MrFlufypants

joined 10 months ago
[–] MrFlufypants@alien.top 1 points 10 months ago

There are actually 3 datasets in a traditional NN training, though in practice people forgo the third a lot: training, validation, and testing. You should split your datasets at the beginning, before letting any networks train on them. Training data is what the network sees when it updates its weights: a batch is run, the loss computed, and backpropagation done with that loss to update the weights.

Then usually after an epoch (which may or may not be the whole training set depending on how your data works) you run validation. This is solely a score to keep track of, to prove you aren’t overfitting and to find a good stopping point. Once validation accuracy dips or stabilizes, people often stop training.

You still made a decision based on validation though, so validation accuracy isn’t a perfectly reportable score. That’s what testing is for: once you’ve run all your models and picked the best, you run testing once, and that’s as unbiased a score as you can get given a dataset.
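The protocol above can be sketched end to end. This is a hypothetical toy example (a linear model standing in for a NN, plain SGD, made-up split ratios), not anyone's actual pipeline; the point is only where each of the three splits gets used.

```python
import random

# Toy regression data; split ONCE, up front, before any training happens.
random.seed(0)
data = [(i / 100, 2 * (i / 100) + 1) for i in range(100)]  # y = 2x + 1
random.shuffle(data)

n = len(data)
train = data[: int(0.7 * n)]               # weights update on this split only
val = data[int(0.7 * n): int(0.85 * n)]    # used to pick a stopping point/model
test = data[int(0.85 * n):]                # touched exactly once, at the end

def mse(w, b, split):
    return sum((w * x + b - y) ** 2 for x, y in split) / len(split)

# Minimal per-sample gradient descent on a linear model (NN stand-in).
w, b, lr = 0.0, 0.0, 0.1
best_val, best_params = float("inf"), (w, b)
for epoch in range(200):
    for x, y in train:
        err = w * x + b - y
        w -= lr * 2 * err * x  # gradient step (the "backprop" of this toy)
        b -= lr * 2 * err
    v = mse(w, b, val)  # validation score after each epoch
    if v < best_val:    # keep the checkpoint that validation prefers
        best_val, best_params = v, (w, b)
    # (early stopping would break out here once v stops improving)

# Test is evaluated once, on the model that validation selected.
w, b = best_params
print(f"val MSE {best_val:.4f}, test MSE {mse(w, b, test):.4f}")
```

Note the decision (which checkpoint to keep) is made on val, which is why val MSE is slightly optimistic and the test number is the one you'd report.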

Stuff works differently when you aren’t running a supervised classification task, but that’s a class for a different day.

[–] MrFlufypants@alien.top 1 points 10 months ago

Read paper, convince customer to give more data than they want to give (biggest step), spend 90% of time integrating the model, present to someone who makes WAY more money than me, repeat.

Make $150k at a defense contractor with a masters + 2.5 years and fantastic work/life balance, and am happy. Would look for better with a PhD; with just a masters it’s hard.