this post was submitted on 16 Nov 2023

Machine Learning

Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and say "our new method improved results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? This seems especially important to me when you break your results down, e.g. to the analysis of certain classes in object classification. Or am I overlooking something?
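
For concreteness, one standard option for two classifiers scored on the same test set is McNemar's test on the discordant predictions. A minimal sketch, assuming hypothetical prediction arrays `preds_a` and `preds_b` and ground-truth `labels` (all synthetic placeholders, not from any paper):

```python
# Minimal sketch of an exact McNemar test (scipy >= 1.7 for binomtest).
# preds_a, preds_b, and labels are synthetic placeholders.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)        # placeholder ground truth
preds_a = labels ^ (rng.random(1000) < 0.10)  # hypothetical method A, ~10% errors
preds_b = labels ^ (rng.random(1000) < 0.12)  # hypothetical method B, ~12% errors

correct_a = preds_a == labels
correct_b = preds_b == labels

# Discordant pairs: test examples where exactly one method is correct.
b = int(np.sum(correct_a & ~correct_b))  # A right, B wrong
c = int(np.sum(~correct_a & correct_b))  # A wrong, B right

# Under H0 (equal accuracy), each discordant outcome is a fair coin flip.
result = binomtest(b, b + c, p=0.5)
print(f"A-only correct: {b}, B-only correct: {c}, p-value: {result.pvalue:.4f}")
```

The test only uses examples where the two methods disagree, so it is paired by construction and needs no rerunning of either model.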

[–] yoshiK@alien.top 1 points 1 year ago

How do you measure "statistical significance" for a benchmark? Run your model 10 times, get the same result each time, and conclude that the variance is 0 and the significance is infinitely many sigma?
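
One common answer to that rhetorical question is to resample the test set instead of rerunning the model: a paired bootstrap over test examples yields a variance estimate even when inference is fully deterministic. A sketch under that assumption (arrays are synthetic placeholders, not from any real benchmark):

```python
# Sketch of a paired bootstrap over test examples: resampling the test
# set gives a variance estimate even for a fully deterministic model.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)        # placeholder ground truth
preds_a = labels ^ (rng.random(n) < 0.10)  # hypothetical method A
preds_b = labels ^ (rng.random(n) < 0.12)  # hypothetical method B

diffs = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)       # resample test set with replacement
    diffs.append(np.mean(preds_a[idx] == labels[idx])
                 - np.mean(preds_b[idx] == labels[idx]))
diffs = np.asarray(diffs)

lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% CI for the accuracy gap
p = min(1.0, 2 * min(np.mean(diffs <= 0), np.mean(diffs >= 0)))
print(f"mean accuracy gap: {diffs.mean():.4f}, 95% CI: [{lo:.4f}, {hi:.4f}], p ~ {p:.4f}")
```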

So to get reasonable statistics, you would need to split your test set into, say, 10 parts; then you can calculate a mean and a variance. But that is only a reasonable thing to do as long as gathering data for the test set is cheap (and running the test set is fast).
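
A sketch of that chunking approach, with a paired t-test added as one way to turn the per-chunk scores into a significance statement (predictions and labels are again synthetic placeholders):

```python
# Sketch of the approach above: split the test set into 10 parts, score
# each method per part, then compare per-part accuracies pairwise.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)        # placeholder ground truth
preds_a = labels ^ (rng.random(n) < 0.10)  # hypothetical method A
preds_b = labels ^ (rng.random(n) < 0.12)  # hypothetical method B

acc_a, acc_b = [], []
for part in np.array_split(np.arange(n), 10):  # 10 disjoint chunks
    acc_a.append(np.mean(preds_a[part] == labels[part]))
    acc_b.append(np.mean(preds_b[part] == labels[part]))

t_stat, p = ttest_rel(acc_a, acc_b)        # paired t-test over the chunks
print(f"A: {np.mean(acc_a):.4f} +/- {np.std(acc_a):.4f}, "
      f"B: {np.mean(acc_b):.4f} +/- {np.std(acc_b):.4f}, p = {p:.4f}")
```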