Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and say "our results improved with our new method by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? This seems especially important to me when you break your results down, e.g. to the analysis of certain classes in object classification. Or am I overlooking something?
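For concreteness, here is a minimal sketch of one such test (my illustration, not from the post): an exact McNemar test comparing a new method Y against a benchmark Z on the same test set. The predictions and labels here are simulated stand-ins for real model outputs.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical ground truth and predictions for methods Y and Z
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
pred_y = np.where(rng.random(1000) < 0.85, labels, (labels + 1) % 10)  # ~85% acc
pred_z = np.where(rng.random(1000) < 0.80, labels, (labels + 1) % 10)  # ~80% acc

correct_y = pred_y == labels
correct_z = pred_z == labels

# McNemar looks only at discordant pairs: cases where exactly one model is right
y_only = int(np.sum(correct_y & ~correct_z))
z_only = int(np.sum(~correct_y & correct_z))

# Under H0 (no difference), each discordant case is equally likely to favour
# either model, so the count follows Binomial(n, 0.5)
result = binomtest(y_only, y_only + z_only, p=0.5)
print(f"Y-only wins: {y_only}, Z-only wins: {z_only}, p = {result.pvalue:.4f}")
```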

[–] Seankala@alien.top 1 points 1 year ago (2 children)

The reason is that most researchers can't be bothered, since no one pays attention to it anyway. I'm also doubtful about how many researchers even properly understand statistical testing.

I'd be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.
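Something like the following sketch is all that's being asked for (my illustration; `train_and_evaluate` is a hypothetical stand-in for a real training pipeline, here simulated so the sketch runs end to end):

```python
import random
import statistics

def train_and_evaluate(seed: int) -> float:
    """Hypothetical stand-in: seed everything, train, return test accuracy.
    Here it just simulates a noisy metric."""
    rng = random.Random(seed)
    return 0.85 + rng.gauss(0.0, 0.01)

# Run the identical pipeline under 10 seeds and report mean and variance
scores = [train_and_evaluate(seed) for seed in range(10)]
print(f"accuracy over {len(scores)} seeds: "
      f"mean={statistics.mean(scores):.4f}, "
      f"variance={statistics.variance(scores):.6f}")
```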

[–] Crimsoneer@alien.top 1 points 1 year ago (2 children)

Statistical significance was designed in an age when you had 300 observations to represent a population of hundreds of thousands. It is far less meaningful when you move into the big data space.
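A quick simulation (my addition, not the commenter's) shows what this means in practice: hold a negligible 0.001 gap between two "models" fixed, and the p-value of a Welch t-test collapses as the sample size grows, even though the effect never becomes practically meaningful.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
for n in (100, 10_000, 1_000_000):
    a = rng.normal(0.800, 0.05, size=n)   # baseline metric samples
    b = rng.normal(0.801, 0.05, size=n)   # "improved" model, +0.001
    p = ttest_ind(a, b, equal_var=False).pvalue
    print(f"n={n:>9,}  p={p:.2e}")
# With n large enough, p goes to ~0 while the practical difference
# stays a trivial 0.001.
```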

[–] bankimu@alien.top 1 points 1 year ago

Wow that was a lot of bollocks in a small space.

[–] kniglas@alien.top 1 points 1 year ago

I second that. Additionally, I would say statistics were designed for hypothesis testing: you have an idea, make a priori assumptions (e.g. which variables to look at!), collect sample data, and then want to know whether the findings generalize to the entire population (or: everything/everyone in the world). The underlying idea is to better understand the workings of the world.
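As a toy illustration of that workflow (mine, not the commenter's): the hypothesis and the variable of interest are fixed before the data are seen, then a sample is collected and tested.

```python
import numpy as np
from scipy.stats import ttest_1samp

# A priori hypothesis (H0): the population mean response time is 200 ms.
H0_MEAN = 200.0

# Hypothetical collected sample (simulated here)
rng = np.random.default_rng(7)
sample = rng.normal(195.0, 12.0, size=40)

result = ttest_1samp(sample, popmean=H0_MEAN)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value suggests the finding generalizes beyond the sample:
# the model of the world came first, the data second.
```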

My own experience, as a pretty traditionally trained researcher (i.e. knowing a bit about statistics) leading a group of data scientists, is that the goals really are different. My data scientists try to build a model that works; that is the primary goal. I am trying to understand why something is the way it is, even if that means my (a priori) built model doesn't work.

The border between "traditional stats" and ML is very fluid. ML uses a lot of stats, and hypothesis-testing research uses a lot of ML these days; just the underlying motivation might be slightly different.

[–] Jurph@alien.top 1 points 1 year ago

> I'd be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.

Unfortunately most papers are generated using stochastic grad student descent, where the seed keeps being re-rolled until a SOTA result is achieved.
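In code, the joke looks something like this (a tongue-in-cheek sketch of mine; the pipeline is a simulated stand-in): re-rolling the seed and reporting only the best run inflates the result well above the honest mean.

```python
import random
import statistics

def train_and_evaluate(seed: int) -> float:
    """Hypothetical noisy pipeline: same model, metric varies with seed."""
    rng = random.Random(seed)
    return 0.85 + rng.gauss(0.0, 0.01)

scores = [train_and_evaluate(seed) for seed in range(50)]
print(f"honest mean over 50 seeds: {statistics.mean(scores):.4f}")
print(f"'SOTA' (best seed only):   {max(scores):.4f}")
```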