Statistical significance doesn't make sense when nothing is stochastic. … They test the entirety of the benchmark.
Probably because most aren't statistically significant...
While it's not super common in academia, it's actually really useful in industry. I use statistical bootstrapping -- Poisson resampling of the input dataset -- to train many runs of financial fraud models and estimate the variance of my experiments as a function of sampling bias.
Having a measure of the variance of your results is critical when you're deciding whether to ship models whose decisions have direct financial impact :P
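Roughly what that looks like, as a sketch -- `train_and_score` is a stand-in for whatever trains your model and returns a test metric, and the Poisson(1) counts are just the resampling scheme I described, not any particular library API:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_bootstrap_scores(X, y, train_and_score, n_runs=30):
    """Train on Poisson-resampled copies of the data and collect one test metric per run."""
    scores = []
    for _ in range(n_runs):
        # Poisson(1) counts per row approximate classic bootstrap resampling with replacement
        counts = rng.poisson(lam=1.0, size=len(y))
        idx = np.repeat(np.arange(len(y)), counts)  # rows repeated according to their counts
        scores.append(train_and_score(X[idx], y[idx]))
    scores = np.asarray(scores)
    return scores.mean(), scores.std(ddof=1)
```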
Statistical significance is best used for establishing group differences. ML is used for individual datapoint classification. If you have 75% accuracy in a reasonably sized dataset, it's trivial to include a p-value to establish statistical significance, but it may not be impressive by ML standards (depending on the task).
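To make that concrete with made-up numbers: on a balanced binary task, 75% accuracy is astronomically "significant" against chance while still being unimpressive for many tasks.

```python
from scipy.stats import binomtest

# 750 correct out of 1000 on a balanced binary task (illustrative numbers)
result = binomtest(k=750, n=1000, p=0.5, alternative="greater")
print(result.pvalue)  # far below any conventional threshold, yet 75% may be nothing special
```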
They don't even report if ANY kind of statistically sane validation method is used when selecting model parameters (usually a single number is reported) and you expect rigorous statistical significance testing? That.is.bold.
They used to do this. I remember papers from around 2015 where the performance analysis was very comprehensive; they even provided useful statistics about the datasets used. Now it's "we used COCO and evaluated on 2017val. Here is the final number." Unless the paper is about being better on certain classes, they just report the averaged percentage.
One big reason for this is that there is a difference between prediction and inference. Most machine learning papers are not testing a hypothesis.
That said - ML definitely does get applied to inference as well, but in those cases the lack of p-values is often one of the lesser complaints.
a combination of different factors:
- it is not taught in most self-education programs.
- therefore most actually don't know 1) that it exists, 2) how to do it, or 3) how to do power calculations
- since most don't know it, there is no demand for it
- costs compute time and resources, as well as human time, so it's skipped if nobody asks for it
- there is no standardized approach for ML models. Do you vary only the training? How do you partition your dataset? There is no prebuilt sklearn stuff for it either (a rough sketch of the usual improvisation is below)
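For what it's worth, here's the kind of thing people usually improvise -- the accuracy numbers are invented, and Welch's t-test is just one reasonable choice, not any standard:

```python
import numpy as np
from scipy.stats import ttest_ind

# test accuracies from repeated runs with different random seeds (made-up numbers)
model_a = np.array([0.812, 0.808, 0.815, 0.810, 0.809])
model_b = np.array([0.818, 0.821, 0.816, 0.824, 0.819])

# Welch's t-test: is B's improvement over A larger than run-to-run noise?
t_stat, p_value = ttest_ind(model_b, model_a, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```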
How do you measure "statistical significance" for a benchmark? Run your model 10 times and, getting the same result each time, conclude that the variance is 0 and the significance is infinitely many sigma?
So to get reasonable statistics, you would need to split your test set into, say, 10 parts; then you can calculate a mean and variance, but that is only a reasonable thing to do as long as it is cheap to gather data for the test set (and running the test set is fast).
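Something like this, assuming you already have predictions for the full test set (the helper name and split count are just for illustration):

```python
import numpy as np

def split_metric(y_true, y_pred, n_splits=10):
    """Chop the test set into n_splits chunks and report mean and variance of accuracy across them."""
    chunks = np.array_split(np.arange(len(y_true)), n_splits)
    accs = np.array([np.mean(y_true[c] == y_pred[c]) for c in chunks])
    return accs.mean(), accs.var(ddof=1)
```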
You're right. This is likely one of the reasons why ML has a reproducibility crisis, together with other effects like data leakage. (see: https://reproducible.cs.princeton.edu/)
Sometimes, indeed, results are so different that things are obviously statistically significant, even by eye, and that is uncommon in natural sciences. Even then, however, it should be stated clearly that the researchers believe this to be the case, and some evidence should be given.
Because the proof is in the pudding baby!
You'd be surprised: most of academia is so focused on publishing that rigor is not even a priority, let alone reproducibility. This is in part because repeated experiments would indeed require more time and resources, both of which are constrained.
That is why most of the good tech that can actually be validated is produced by industry.
The reason is that most researchers can't be bothered, because no one pays attention to it anyway. I'm always doubtful about how many researchers even properly understand statistical testing.
I'd be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.
Statistical significance was designed in an age when you had 300 observations to represent a population of hundreds of thousands. It is far less meaningful when you move into the big data space.
Wow that was a lot of bollocks in a small space.
Unfortunately most papers are generated using stochastic grad student descent, where the seed keeps being re-rolled until a SOTA result is achieved.
Older papers did this. Yet now that we've decided that NNs are bricks that should be trained leaving no data out, we don't care about statistical significance anymore. Anyway, the test set is probably partially included in the training set of the foundation model you downloaded. It started with DeepMind and RL, where experiments were very expensive to run (Joelle Pineau had a nice talk about these issues). Yet since the alpha_whatevers are undeniable successes, researchers pursued this path. Now go ask for "useless" confidence intervals when a single training run is worth $100 million in compute... Nah, better to have a bunch of humans rate the outputs.
Three main reasons: 1) it's expensive to run multiple experiments, 2) it's nonsensical to split away more data when nobody else is subject to the same constraint, and 3) if the final artifact is the model and not the method, then in many cases it doesn't matter very much.
That set of methodologies, like any other, has its issues https://royalsocietypublishing.org/doi/10.1098/rsos.171085
One thing I have found is that if you have a large dataset, the standard error can become so small that any difference in average performance will be significant. Obviously that's not always the case, depending on the size of the variance, etc., but I imagine it might be why it's often considered acceptable not to include them.
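A quick illustration with invented numbers: under a simple two-proportion z-test, a 0.1-point accuracy difference on two million test examples already clears the usual thresholds.

```python
import numpy as np
from scipy.stats import norm

n = 2_000_000                 # test examples (illustrative)
p1, p2 = 0.900, 0.901         # accuracies differing by 0.1 points
se = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z = (p2 - p1) / se
print(f"z = {z:.1f}, p = {2 * norm.sf(abs(z)):.1e}")  # ~3.3 sigma for a negligible difference
```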
In the world of machine learning, we usually look at metrics like accuracy or precision to judge how well a model is doing its job. Statistical significance testing, which you might hear about in other fields, isn't as common in machine learning. That's because ML often works with whole datasets, not small random samples, and we care more about how well the model predicts outcomes than about making big-picture statements about a whole population. Still, we take other steps, like cross-validation, to make sure our models are doing a good job.
Editor at an AI journal and reviewer for a couple of the big conferences. I always ask for statistical significance, because the "Rube Goldberg papers", as I call them, are SO common: papers that complicate things to oblivion without any real gain.
At the very least, a bootstrap of the test results would give you some idea of the potential confidence interval of your test performance.
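A minimal sketch of that kind of bootstrap (the function name and defaults are mine, not from any paper):

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for test accuracy."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample test examples with replacement
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```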
How often is that applied equally, irrespective of the submitter's reputation? If only reviewers of certain submissions apply it, that seems unfair, and it seems to be the case that the grad student making their first submission from some no-name school with the smallest compute budget is the one held to that acceptance criterion.
Lol, I hate reviewing those papers. "Interpretable quantized meta pyramidal cnn-vit-lstm with GAN based data augmentation and yyyy-aware loss for XXXX" no GitHub though...
Wouldn't I also have to run the models I am comparing to? Kinda ruins the point of standard benchmarks.
In computer vision it often takes so long to train that we wouldn't see anything published if we required multiple tests.
Once you use one of the significance tests, you'll start seeing that adding parameters doesn't give a significant improvement relative to the increase in parameter count, but we are at a point where accuracy is more important than that.
It depends on the field. In DL repeating the training procedure N times may be very costly which is unfortunate. In RL it's common to repeat the experiments 25-100 times and report standard error.
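For example, reporting mean ± standard error over seeds is just a couple of lines (the returns below are invented):

```python
import numpy as np

# final returns from independent runs with different seeds (made-up numbers)
returns = np.array([412.0, 388.5, 430.2, 401.7, 395.9, 420.3])
stderr = returns.std(ddof=1) / np.sqrt(len(returns))
print(f"{returns.mean():.1f} ± {stderr:.1f} (standard error over {len(returns)} seeds)")
```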
Agreed, RL is extremely stochastic and the outcomes can be pretty random due to Monte Carlo sampling.
Perhaps because the ML community is dominated by CS people, not Statistics people, and the former do not care so much about statistical significance?
The world of academia is often biased toward getting amazing results without being critical enough of its experiments.
I review for AAAI, NeurIPS, etc. If a paper doesn't report some notion of variance, standard deviation, etc., I have no choice but to reject, since it's impossible to tell whether the proposed approach is actually better. In the rebuttals, the authors' response is typically "well, everyone else also does it this way". Ideally, I'd like to see an actual test of statistical significance.
Are you going to make assumptions about the statistical distributions in order to make such tests accurate? Part of the reason nobody does this is that it's arbitrary and irrelevant in many cases due to incorrect application of standardized methods. Combine that with the fact that it's expensive to perform and has no real value for researchers, and it doesn't really make sense.
BC ML research is a joke
Because "the result was inconclusive, so we had to use statistics". Statistics is a 'trick' to convince yourself and others when you are not sure.
Money, time, and lack of care.
In part, because it can be prohibitively expensive to generate those results. And then also laziness. I used to go for a minimum of 3 random starts, until I was told to stop wasting resources in our cluster.
If your training data is sufficiently large, then isn't any improvement in a metric statistically significant?
Most ML work is oriented towards industry, which in turn is oriented towards customers. Most users don't understand p-values, but they do understand one number being bigger than another. Add on that checking p-values would probably remove 75% of research publications, and there's just no incentive.
Cuz each experiment is too expensive, so sometimes it just doesn't make sense to do that. Imagine training a large model on a huge dataset several times in order to get a numerical mean and variance that don't mean much.
It depends what book/paper you pick up. Anyone who comes at it from a probabilistic background is more likely to discuss statistical significance. For example, the Hastie, Tibshirani, and Friedman textbook discusses it in detail, and they consider it in many of their examples, e.g. the neural net chapter uses boxplots in all the examples
Isn't cross-validation (for prediction tasks) an alternative to, and I daresay even better than, statistical significance tests?
I am referring to the seminal paper "Statistical Modeling: The Two Cultures" by Leo Breiman, if someone wants to know where I am coming from.
Cross-validation is a reasonable alternative; however, it increases your compute cost 5-10 times, or, more likely, means you train 5-10x smaller model(s) which are worse than what you could have made if you'd just trained a single one.
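For a small model it's nearly a one-liner, which is also exactly where the 5-10x cost comes from (toy sklearn example, nothing to do with any particular paper):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)  # 5 fits instead of 1
print(scores.mean(), scores.std(ddof=1))
```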
They should, but many don't, because often their results are not statistically significant, or they would have to spend a ton of compute only to show very small statistically significant improvements. So they'll just put 5-run averages (sometimes even less) and hope for the best. I have been a reviewer for most of the top ML conferences, and I'm usually the only reviewer holding people accountable for the statistical significance of results when confidence intervals are missing.
For little-known papers, nobody cares.
For famous papers, many people will reproduce them and make decisions based on their performance, which is a kind of human-based statistical testing.
In the industry, cross-validation is a good measure of the model's utility, which is what matters in the end. But I agree that academia definitely should report some measure of uncertainty particularly in benchmarks.
why do that and risk your paper not getting published?
I have thought about doing this, and I get concerned that statistical tests are really easy to nit-pick, and seemingly they don't make your results any more convincing than just adding error bars.
Ouroboros Logic