Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, ...) and claim "our new method improved results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? This seems especially important to me when you break your results down, e.g. to the analysis of certain classes in object classification. Or am I overlooking something?
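
For concreteness, here is a minimal sketch of one such test: McNemar's test, which compares two classifiers evaluated on the same test set. The labels and predictions below are synthetic placeholders (not from any paper), and the example assumes statsmodels is installed.

```python
# Minimal sketch: McNemar's test for two classifiers on the same test set.
# y_true, pred_a, pred_b are synthetic placeholders for illustration only.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                            # placeholder labels
pred_a = np.where(rng.random(500) < 0.85, y_true, 1 - y_true)    # ~85% accurate model
pred_b = np.where(rng.random(500) < 0.80, y_true, 1 - y_true)    # ~80% accurate model

a_correct = pred_a == y_true
b_correct = pred_b == y_true

# 2x2 contingency table of where the two models agree/disagree
table = [
    [np.sum(a_correct & b_correct),  np.sum(a_correct & ~b_correct)],
    [np.sum(~a_correct & b_correct), np.sum(~a_correct & ~b_correct)],
]

result = mcnemar(table, exact=True)  # exact binomial test on the off-diagonal cells
print(f"statistic={result.statistic}, p-value={result.pvalue:.4f}")
```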

[–] SASUKE0069@alien.top 1 points 1 year ago

In the world of machine learning, we usually look at metrics like accuracy or precision to judge how well a model is doing its job. Statistical significance testing, which you might hear about in other fields, isn't as common in machine learning. That's partly because ML often works with whole datasets, not small random samples, and we care more about how well the model predicts outcomes than about making inferential statements about a whole population. Still, we take other steps, like cross-validation, to make sure our models are doing a good job.
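
As an illustration of how cross-validation and a significance test can be combined, here is a minimal sketch, assuming scikit-learn and scipy are available: evaluate two models on identical cross-validation folds, then run a paired t-test on the per-fold scores. The models and dataset are stand-ins, not anything specific from the thread.

```python
# Minimal sketch: paired t-test over shared cross-validation folds.
# Models and dataset are placeholders for illustration only.
import numpy as np
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)  # identical folds for both models

scores_a = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=cv)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

# Paired t-test: the folds are shared, so scores pair up fold by fold.
t_stat, p_value = stats.ttest_rel(scores_a, scores_b)
print(f"mean A={scores_a.mean():.3f}, mean B={scores_b.mean():.3f}, p={p_value:.3f}")
```

One caveat worth knowing: the per-fold scores are not truly independent (the training sets overlap across folds), so a plain paired t-test tends to be optimistic; corrections such as the Nadeau-Bengio adjusted t-test exist for exactly this reason.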