Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and say "our new method improved the results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?
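To make the question concrete, here is a minimal sketch of the kind of test I have in mind: a paired bootstrap over per-example correctness on a shared test set. Everything in it is invented for illustration (the accuracies, the sample size, the number of resamples); in practice the correctness vectors would come from the two models' actual predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example correctness (True = correct) for two classifiers
# evaluated on the same test set of n examples. In a real comparison these
# vectors would come from the models' actual predictions.
n = 1000
correct_z = rng.random(n) < 0.80  # benchmark Z, ~80% accuracy
correct_y = rng.random(n) < 0.83  # new method Y, ~83% accuracy

observed_gain = correct_y.mean() - correct_z.mean()

# Paired bootstrap: resample test examples with replacement and recompute
# the accuracy difference on each resample. Pairing matters because both
# models are scored on the same examples.
n_boot = 5000
idx = rng.integers(0, n, size=(n_boot, n))
gains = correct_y[idx].mean(axis=1) - correct_z[idx].mean(axis=1)

# Two-sided p-value: how often the resampled gain crosses zero.
p = min(1.0, 2 * min((gains <= 0).mean(), (gains >= 0).mean()))
print(f"accuracy gain: {observed_gain:.3f}, bootstrap p ~ {p:.4f}")
```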

altmly@alien.top:

Three main reasons:

1) It's expensive to run multiple experiments: even the simplest significance test needs several training runs per method (see the sketch below), which quickly becomes prohibitive for large models.
2) It's nonsensical to split away more data (e.g. an extra held-out set just for the test) when nobody else is subject to the same constraint; your headline numbers would suffer for reasons unrelated to the method.
3) If the final artifact is the model and not the method, then in many cases it doesn't matter very much: whether the improvement is significant on average matters less than how good the one model you actually ship is.
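To put a number on point 1: below is a minimal sketch of about the cheapest defensible version, Welch's t-test over per-seed test accuracies. The accuracy values are made up, and even this toy setup presumes ten full training runs (five seeds per method).

```python
from scipy.stats import ttest_ind

# Hypothetical test accuracies from retraining each method with five
# different random seeds. Just producing these ten numbers already costs
# ten full training runs, which is the expense point 1 is about.
acc_z = [0.801, 0.795, 0.810, 0.798, 0.805]  # benchmark Z
acc_y = [0.823, 0.817, 0.829, 0.812, 0.826]  # new method Y

# Welch's t-test: does not assume the two methods have equal variance
# across seeds.
t_stat, p_value = ttest_ind(acc_y, acc_z, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```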