this post was submitted on 16 Nov 2023

Machine Learning


Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and say "our results improved with our new method by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?

top 50 comments
[–] milkteaoppa@alien.top 1 points 1 year ago

I have πŸ™‹β€β™‚οΈ

[–] Ambiwlans@alien.top 1 points 1 year ago

Statistical significance doesn't make sense when nothing is stochastic: they test the entirety of the benchmark.

[–] nikgeo25@alien.top 1 points 1 year ago

Probably because most aren't statistically significant...

[–] bethebunny@alien.top 1 points 1 year ago (1 children)

While it's not super common in academia, it's actually really useful in industry. I use statistical bootstrapping -- Poisson resampling of the input dataset -- to do many training runs of financial fraud models and estimate the variance of my experiments as a function of sampling bias.

Having a measure of the variance of your results is critical when you're deciding whether to ship models whose decisions have direct financial impact :P
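A minimal sketch of the idea, assuming NumPy; `train_and_score` is a hypothetical stand-in for one full train-plus-evaluate run, not anyone's actual pipeline:

```python
import numpy as np

def poisson_bootstrap_variance(X, y, train_and_score, n_runs=20, seed=0):
    """Estimate metric variance by reweighting the training set with
    Poisson(1) counts, which approximates resampling with replacement."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_runs):
        # Each example appears 0, 1, 2, ... times according to a Poisson(1) draw
        counts = rng.poisson(lam=1.0, size=len(X))
        idx = np.repeat(np.arange(len(X)), counts)
        scores.append(train_and_score(X[idx], y[idx]))
    scores = np.asarray(scores)
    return scores.mean(), scores.std(ddof=1)
```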

[–] matt_leming@alien.top 1 points 1 year ago

Statistical significance is best used for establishing group differences. ML is used for individual datapoint classification. If you have 75% accuracy in a reasonably sized dataset, it's trivial to include a p-value to establish statistical significance, but it may not be impressive by ML standards (depending on the task).
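For illustration only, a sketch of such a p-value with made-up numbers, using SciPy's binomial test and assuming a balanced binary task where chance is 50%:

```python
from scipy.stats import binomtest  # requires a reasonably recent SciPy

n_test = 2000          # hypothetical test-set size
n_correct = 1500       # 75% accuracy
chance = 0.5           # chance level for a balanced binary task

# Is 75% accuracy significantly better than guessing? Almost certainly yes,
# but a tiny p-value says nothing about whether 75% is *good* for the task.
result = binomtest(n_correct, n_test, p=chance, alternative="greater")
print(result.pvalue)
```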

[–] devl82@alien.top 1 points 1 year ago

They don't even report if ANY kind of statistically sane validation method is used when selecting model parameters (usually a single number is reported) and you expect rigorous statistical significance testing? That.is.bold.

[–] __Maximum__@alien.top 1 points 1 year ago

They used to do this; I remember papers from around 2015 whose performance analyses were very comprehensive. They even provided useful statistics about the datasets used. Now it's "we used COCO and evaluated on 2017 val. Here is the final number." Unless the paper is about being better on certain classes, they will just report the averaged percentage.

[–] kazza789@alien.top 1 points 1 year ago

One big reason for this is that there is a difference between prediction and inference. Most machine learning papers are not testing a hypothesis.

That said - ML definitely does get applied to inference as well, but in those cases the lack of p-values is often one of the lesser complaints.

[–] chief167@alien.top 1 points 1 year ago

a combination of different factors:

  • it is not taught in most self-taught programs.
  • therefore most practitioners don't know 1) that it exists, 2) how to do it, or 3) how to do power calculations
  • since most don't know about it, there is no demand for it
  • it costs compute time and resources, as well as human time, so it's skipped if nobody asks for it
  • there is no standardized approach for ML models: do you vary only the training run? how do you partition your dataset? There is no prebuilt sklearn tooling for this either (a rough sketch of one possible approach follows below)
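
For what it's worth, one possible (non-standard) approach is a paired comparison over repeated random splits; this is only a sketch, and the dataset and models here are arbitrary placeholders:

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import ShuffleSplit

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Re-split the data many times and score both models on the same splits,
# so the comparison is paired.
splitter = ShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
scores_a, scores_b = [], []
for train_idx, test_idx in splitter.split(X):
    model_a = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    model_b = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    scores_a.append(accuracy_score(y[test_idx], model_a.predict(X[test_idx])))
    scores_b.append(accuracy_score(y[test_idx], model_b.predict(X[test_idx])))

# Paired t-test on per-split accuracies (caveat: the splits overlap,
# so the independence assumption only holds approximately).
stat, pvalue = ttest_rel(scores_a, scores_b)
print(np.mean(scores_a), np.mean(scores_b), pvalue)
```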
[–] yoshiK@alien.top 1 points 1 year ago

How do you measure "statistical significance" for a benchmark? Run your model 10 times, get the same result each time, and conclude that the variance is 0 and the significance is infinitely many sigmas?

So to get reasonable statistics, you would need to split your test set into, say, 10 parts; then you can calculate a mean and a standard deviation, but that is only a reasonable thing to do as long as it is cheap to gather data for the test set (and running the test set is fast).
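A small sketch of that chunked evaluation, assuming NumPy arrays of labels and saved predictions:

```python
import numpy as np

def chunked_accuracy(y_true, y_pred, n_chunks=10, seed=0):
    """Split the (shuffled) test set into chunks and report the spread
    of the metric across chunks instead of a single number."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y_true))
    accs = [np.mean(y_true[idx] == y_pred[idx])
            for idx in np.array_split(order, n_chunks)]
    return np.mean(accs), np.std(accs, ddof=1)
```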

[–] isparavanje@alien.top 1 points 1 year ago

You're right. This is likely one of the reasons why ML has a reproducibility crisis, together with other effects like data leakage. (see: https://reproducible.cs.princeton.edu/)

Sometimes, indeed, results are so different that things are obviously statistically significant, even by eye, and that is uncommon in natural sciences. Even then, however, it should be stated clearly that the researchers believe this to be the case, and some evidence should be given.

[–] lfotofilter@alien.top 1 points 1 year ago

Because the proof is in the pudding baby!

[–] neo_255_0_0@alien.top 1 points 1 year ago

You'd be surprised to know that much of academia is so focused on publishing that rigor is not even a priority, let alone reproducibility. This is in part because repeated experiments would indeed require more time and resources, both of which are constrained.

That is why most of the good tech that can actually be validated is produced by industry.

[–] Seankala@alien.top 1 points 1 year ago (2 children)

The reason is that most researchers can't be bothered, because no one pays attention to it anyway. I'm also doubtful about how many researchers even properly understand statistical testing.

I'd be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.
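Something this simple would already help; a sketch only, where `train_and_eval` is a placeholder for the real training-plus-evaluation run:

```python
import numpy as np
from scipy import stats

def train_and_eval(seed: int) -> float:
    # Placeholder: substitute your actual training + evaluation here.
    rng = np.random.default_rng(seed)
    return 0.90 + 0.01 * rng.standard_normal()  # simulated test accuracy

scores = np.array([train_and_eval(seed) for seed in range(5)])
mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
# 95% confidence interval from the t-distribution with n-1 degrees of freedom
ci = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)
print(f"{mean:.3f} +/- {scores.std(ddof=1):.3f}, 95% CI {ci}")
```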

[–] Crimsoneer@alien.top 1 points 1 year ago (2 children)

Statistical significance was designed in an age when you had 300 observations to represent a population of hundreds of thousands. It is far less meaningful when you move into the big data space.

[–] bankimu@alien.top 1 points 1 year ago

Wow that was a lot of bollocks in a small space.

[–] Jurph@alien.top 1 points 1 year ago

I'd be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.

Unfortunately most papers are generated using stochastic grad student descent, where the seed keeps being re-rolled until a SOTA result is achieved.

Old papers did do this. Yet now that we have decided that NNs are bricks that should be trained leaving no data out, we do not care about statistical significance anymore. Anyway, the test set is probably partially included in the training set of the foundation model you downloaded. It started with DeepMind and RL, where experiments were very expensive to run (Joelle Pineau had a nice talk about these issues). Yet since the alpha-whatever systems were undeniable successes, researchers pursued this path. Now go ask for confidence intervals when a single training run costs 100 million in compute... Nah, better to have a bunch of humans rate the outputs.

[–] altmly@alien.top 1 points 1 year ago

Three main reasons: 1) it's expensive to run multiple experiments; 2) it's nonsensical to split away more data when nobody else is subject to the same constraint; 3) if the final artifact is the model and not the method, then in many cases it doesn't matter very much.

[–] coriola@alien.top 1 points 1 year ago

That set of methodologies, like any other, has its issues https://royalsocietypublishing.org/doi/10.1098/rsos.171085

[–] Recent_Ad4998@alien.top 1 points 1 year ago (3 children)

One thing I have found is that if you have a large dataset, the standard error can become so small that any difference in average performance will be significant. Obviously that's not always the case, depending on the size of the variance etc., but I imagine it might be why it's often considered acceptable not to include them.
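
A quick back-of-the-envelope illustration of how fast the standard error of an accuracy estimate shrinks with test-set size, using the standard-error formula for a proportion, sqrt(p(1-p)/n):

```python
import numpy as np

p = 0.90  # observed accuracy
for n in [1_000, 100_000, 10_000_000]:
    se = np.sqrt(p * (1 - p) / n)  # standard error of a proportion
    print(f"n={n:>10,}  SE ~ {se:.5f}  (+/- {1.96 * se:.5f} at 95%)")
# With ten million test examples, even a ~0.02 percentage-point gain clears the bar.
```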

[–] SASUKE0069@alien.top 1 points 1 year ago

In the world of machine learning, we usually look at metrics like accuracy or precision to judge how well a model is doing its job. Statistical significance testing, which you might hear about in other fields, isn't as common in machine learning. That's because ML often works with whole datasets, not small random samples. We care more about how well the model predicts outcomes than about making big-picture statements about a whole population. Still, we take other steps, like cross-validation, to make sure our models are doing a good job.

[–] SMFet@alien.top 1 points 1 year ago (2 children)

Editor at an AI journal and reviewer for a couple of the big conferences here. I always ask for statistical significance, because what I call "Rube Goldberg papers" are SO common: papers that complicate things to oblivion without any real gain.

At the very least, a bootstrap of the test results would give you some idea of the potential confidence interval of your test performance.
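A percentile-bootstrap sketch of that idea, resampling only the saved test predictions (no retraining), assuming NumPy:

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy, obtained by
    resampling test examples with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    return np.quantile(accs, [alpha / 2, 1 - alpha / 2])
```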

[–] fordat1@alien.top 1 points 1 year ago

How often is that applied equally, irrespective of the submitter's reputation? If only the reviewers of certain submissions apply it, that seems unfair, and it seems like the grad student making their first submission from some no-name school with the smallest compute budget is the one held to that acceptance criterion.

[–] senderosbifurcan@alien.top 1 points 1 year ago

Lol, I hate reviewing those papers. "Interpretable quantized meta pyramidal cnn-vit-lstm with GAN based data augmentation and yyyy-aware loss for XXXX" no GitHub though...

Wouldn't I also have to run the models I am comparing to? Kinda ruins the point of standard benchmarks.

[–] azraelxii@alien.top 1 points 1 year ago

In computer vision it often takes so long to train that we wouldn't see anything published if we required multiple runs.

[–] CKtalon@alien.top 1 points 1 year ago

Once you use one of the significance tests, you'll start seeing that increasing the parameter count doesn't give a significant improvement relative to the number of parameters added, but we are at a point where raw accuracy matters more than that.

[–] Zestyclose_Speed3349@alien.top 1 points 1 year ago (1 children)

It depends on the field. In DL repeating the training procedure N times may be very costly which is unfortunate. In RL it's common to repeat the experiments 25-100 times and report standard error.

Agreed, RL is extremely stochastic and the outcomes can be pretty random due to Monte Carlo sampling.

[–] Atisha800@alien.top 1 points 1 year ago

Perhaps because the ML community is dominated by CS people, not Statistics people, and the former do not care so much about statistical significance?

The world of academia is often biased toward getting amazing results without being critical enough of its own experiments.

[–] SciGuy42@alien.top 1 points 1 year ago (1 children)

I review for AAAI, NeurIPS, etc. If a paper doesn't report some notion of variance, standard deviation, etc., I have no choice but to reject, since it's impossible to tell whether the proposed approach is actually better. In the rebuttals, the authors' response is typically "well, everyone else also does it this way". Ideally, I'd like to see an actual test of statistical significance.
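One concrete option for two classifiers scored on the same test set is McNemar's test; a sketch with toy stand-in predictions, assuming statsmodels is available:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Toy stand-ins: labels and two models' predictions on the same test set
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
pred_a = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85% accurate
pred_b = np.where(rng.random(1000) < 0.82, y_true, 1 - y_true)  # ~82% accurate

correct_a = pred_a == y_true
correct_b = pred_b == y_true

# 2x2 contingency table of where the two models agree/disagree
table = np.array([
    [np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
    [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
])
print(mcnemar(table, exact=False, correction=True))  # statistic and p-value
```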


Are you going to make assumptions about the statistical distributions in order to make such tests accurate? Part of the reason nobody does this is that it's arbitrary and, in many cases, irrelevant because standardized methods get applied incorrectly. Combine that with the fact that it's expensive to perform and has little real value for researchers, and it doesn't really make sense.

[–] fuckunjustrules@alien.top 1 points 1 year ago

BC ML research is a joke

[–] thntk@alien.top 1 points 1 year ago

Because "The result was inconclusive, so we had to use statistics". Statistics is a 'trick' to convince yourselves and others when you are not sure.

[–] ZombieRickyB@alien.top 1 points 1 year ago

Money, time, and lack of care.

[–] bikeranz@alien.top 1 points 1 year ago

In part, because it can be prohibitively expensive to generate those results. And then also laziness. I used to go for a minimum of 3 random starts, until I was told to stop wasting resources in our cluster.

[–] longgamma@alien.top 1 points 1 year ago

If your training data is sufficiently large, then isn't any improvement in a metric statistically significant?

[–] gunshoes@alien.top 1 points 1 year ago

Most ML work is oriented towards industry, which in turn is oriented towards customers. Most users don't understand p-values, but they do understand one number being bigger than another. Add to that the fact that checking p-values would probably eliminate 75% of research publications, and there's just no incentive.

Because each experiment is too expensive, so sometimes it just doesn't make sense. Imagine training a large model on a huge dataset several times just to get a numerical mean and variance that don't mean much.

[–] colintbowers@alien.top 1 points 1 year ago

It depends on what book/paper you pick up. Anyone who comes at it from a probabilistic background is more likely to discuss statistical significance. For example, the Hastie, Tibshirani, and Friedman textbook discusses it in detail, and they consider it in many of their examples, e.g. the neural net chapter uses boxplots throughout.

[–] GullibleEngineer4@alien.top 1 points 1 year ago (1 children)

Isn't cross-validation (for prediction tasks) an alternative to, and I daresay even better than, statistical significance tests?

I am referring to the seminal paper Statistical Modeling: The Two Cultures by Leo Breiman, if anyone wants to know where I am coming from.

Paper: https://www.jstor.org/stable/2676681
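
For reference, a minimal scikit-learn sketch of reporting a cross-validated mean and spread rather than a single number; the dataset and model here are arbitrary:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 10-fold cross-validation gives a spread of scores, not just one number
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"{scores.mean():.3f} +/- {scores.std(ddof=1):.3f} over {len(scores)} folds")
```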

[–] Brudaks@alien.top 1 points 1 year ago

Cross-validation is a reasonable alternative, however, it does increase your compute cost 5-10 times, or, more likely, means that you generate 5-10 times smaller model(s) which are worse than you could have made if you'd just made a single one.

[–] AltruisticCoder@alien.top 1 points 1 year ago

They should, but many don't, because often their results are not statistically significant, or they would have to spend a ton of compute to show only very small statistically significant improvements. So they'll just put up 5-run averages (sometimes even less) and hope for the best. I have been a reviewer at most of the top ML conferences, and I'm usually the only reviewer holding people accountable for the statistical significance of their results when confidence intervals are missing.

[–] txhwind@alien.top 1 points 1 year ago

For papers that aren't famous, nobody cares.

For famous papers, many people will reproduce them and make decisions based on their performance, which is a kind of human-powered statistical testing.

[–] srpulga@alien.top 1 points 1 year ago

In the industry, cross-validation is a good measure of the model's utility, which is what matters in the end. But I agree that academia definitely should report some measure of uncertainty particularly in benchmarks.

[–] HarambeTenSei@alien.top 1 points 1 year ago

why do that and risk your paper not getting published?

[–] YinYang-Mills@alien.top 1 points 1 year ago

I have thought about doing this, but I get concerned that statistical tests are really easy to nit-pick, and they don't seem to make your results any more convincing than just adding error bars.

[–] PMxMExURxTITS@alien.top 1 points 1 year ago

Ouroboros Logic
