this post was submitted on 16 Nov 2023
Machine Learning
Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and say "our new method improved results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?

[–] Recent_Ad4998@alien.top 1 points 1 year ago

One thing I have found is that if you have a large dataset, the standard error can become so small that practically any difference in average performance will come out as significant. That's obviously not always the case, depending on the size of the variance etc., but I imagine it might be why it's often considered acceptable not to include significance tests.
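To make the point above concrete, here is a minimal sketch (not from the thread; the numbers and helper name are illustrative) using a normal approximation for the gap between two models' accuracies: the same half-point accuracy gap is nowhere near significant at n = 1,000 but becomes highly significant at n = 1,000,000, simply because the standard error shrinks like 1/sqrt(n).

```python
import math

def z_score_for_accuracy_gap(acc_a, acc_b, n):
    """Approximate z-score for the gap between two models' accuracies,
    each measured on n independent test examples (unpaired normal
    approximation; a simplification for illustration)."""
    se = math.sqrt(acc_a * (1 - acc_a) / n + acc_b * (1 - acc_b) / n)
    return (acc_a - acc_b) / se

# The same 0.5-point accuracy gap (80.5% vs 80.0%) at two test-set sizes:
z_small = z_score_for_accuracy_gap(0.805, 0.800, 1_000)
z_large = z_score_for_accuracy_gap(0.805, 0.800, 1_000_000)

print(f"n = 1,000:     z = {z_small:.2f}")  # ~0.28, below 1.96 -> not significant
print(f"n = 1,000,000: z = {z_large:.2f}")  # ~8.88, above 1.96 -> highly significant
```

With millions of test examples, even differences too small to matter in practice clear the 1.96 threshold, which is why a bare p-value adds little information at that scale.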

[–] iswedlvera@alien.top 1 points 1 year ago

This is the reason. People do significance tests when they want to draw conclusions about an entire population from something like 20 samples. If you have thousands of samples, there won't be much point.

[–] econ1mods1are1cucks@alien.top 1 points 1 year ago

Depends on how big the individual samples are, to be honest. 1000 samples of 10 people each actually sounds like a decent study group.

[–] iswedlvera@alien.top 1 points 1 year ago

I see what you mean. Yeah, skipping statistical significance tests shouldn't be my default.