this post was submitted on 16 Nov 2023

Machine Learning

Often when I read ML papers, the authors compare their results against a benchmark (e.g., using RMSE, accuracy, ...) and say "our new method improved results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? This seems especially important to me when you break your results down, e.g., to the analysis of certain classes in object classification. Or am I overlooking something?
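
To make concrete the kind of test I mean, here is a minimal sketch, assuming you have per-sample errors from both methods on the same test set (the data below is synthetic and purely illustrative):

```python
import numpy as np
from scipy import stats

# Synthetic per-sample squared errors for two methods on the same test set
rng = np.random.default_rng(42)
errors_benchmark = rng.normal(1.00, 0.30, size=500) ** 2
errors_new_method = rng.normal(0.95, 0.30, size=500) ** 2

# Paired test on per-sample errors: Wilcoxon signed-rank avoids the
# normality assumption, which squared errors usually violate
stat, p_value = stats.wilcoxon(errors_benchmark, errors_new_method)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```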

SMFet@alien.top 1 points 1 year ago

Editor at an AI journal and reviewer for a couple of the big conferences here. I always ask for statistical significance, because what I call "Rube Goldberg papers" are SO common: papers that complicate things to oblivion without any real gain.

At the very least, a bootstrap of the test results would give you some idea of the confidence interval around your test performance.
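
A minimal sketch of what that bootstrap could look like, assuming you have the test-set labels and predictions in hand (the function name and toy data here are mine, not from any particular paper):

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for any test-set metric."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample the test set with replacement
        scores[b] = metric(y_true[idx], y_pred[idx])
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])

# Toy example: 95% CI for accuracy
y_true = np.tile([0, 1, 1, 0, 1, 0, 1, 1, 0, 0], 50)
y_pred = np.tile([0, 1, 0, 0, 1, 0, 1, 1, 1, 0], 50)
lo, hi = bootstrap_ci(y_true, y_pred, lambda t, p: np.mean(t == p))
print(f"accuracy 95% CI: [{lo:.3f}, {hi:.3f}]")
```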

Lol, I hate reviewing those papers. "Interpretable quantized meta pyramidal CNN-ViT-LSTM with GAN-based data augmentation and yyyy-aware loss for XXXX", no GitHub though...
