[–] mr_birkenblatt@alien.top 1 points 11 months ago (2 children)
  1. Start training models

  2. Go watch model trains

  3. Come back to error

[–] mr_birkenblatt@alien.top 1 points 11 months ago

The review process is completely pointless with regard to reproducibility. Reviewers basically have to go off what somebody wrote in the paper; other than maybe catching a systematic error in the writeup, there isn't much a reviewer can actually detect and criticize (if the model works, what else is there to say?). Most published papers would be better off as GitHub projects with a proper descriptive readme that also shows benchmarks. It's not like papers are written very well to begin with. But that doesn't get you a PhD.

In physics there is basically no (or minimal) review process, and publications are judged by how much your paper gets cited. There is also a full secondary track of researchers who just take other papers and recreate the experiments to actually confirm reproducibility. In ML right now there is no incentive for anyone to run a published model on different/their own data and confirm that it works correctly. In fact, you'd probably be crucified for doing that.

[–] mr_birkenblatt@alien.top 1 points 11 months ago

Lol, yes, thank you. Corrected

[–] mr_birkenblatt@alien.top 1 points 11 months ago (2 children)

The model is already done. The challenge now is to scale up operations and make it viable/profitable. ML engineers can't do that; their job is to build models, not infrastructure.