UnusualClimberBear

joined 10 months ago
[–] UnusualClimberBear@alien.top 1 points 10 months ago

Usually adding L1/L2 regularization is an effective way to deal with correlated variables.
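
A minimal sketch of what I mean, with made-up data (two near-duplicate features): ordinary least squares splits the effect between the correlated columns in an unstable way, while ridge (L2) or elastic net (L1+L2) shrinks them toward stable values.

```python
# Sketch with synthetic data: x2 is almost a copy of x1, so the design matrix
# is nearly collinear. Compare plain OLS against L2 and L1+L2 penalized fits.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, ElasticNet

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.001 * rng.normal(size=n)        # near-duplicate feature
X = np.column_stack([x1, x2])
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)

for name, model in [("OLS", LinearRegression()),
                    ("Ridge (L2)", Ridge(alpha=1.0)),
                    ("ElasticNet (L1+L2)", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    model.fit(X, y)
    print(f"{name:>20}: coefficients = {model.coef_}")

# OLS tends to assign large coefficients of opposite sign to the two copies;
# the penalized models return small, stable coefficients instead.
```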

[–] UnusualClimberBear@alien.top 1 points 10 months ago

Old papers did report it. But now that we have decided NNs are bricks that should be trained leaving no data out, we don't care about statistical significance anymore. Anyway, the test set is probably partially included in the training set of the foundation model you downloaded. It started with DeepMind and RL, where experiments were very expensive to run (Joelle Pineau gave a nice talk about these issues). Yet since the alpha-whatever systems are undeniable successes, researchers kept pursuing this path. Now go compute confidence intervals when a single training run costs $100 million in compute... Nah, better to have a bunch of humans rate the outputs.
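
For what it's worth, the kind of reporting those reproducibility talks argued for is cheap when you can afford multiple runs. A minimal sketch with hypothetical per-seed scores, using a bootstrap confidence interval:

```python
# Hypothetical final test scores from a handful of random seeds.
import numpy as np

scores = np.array([71.2, 69.8, 72.5, 70.1, 71.9])

rng = np.random.default_rng(0)
# Resample the per-seed scores with replacement to get a bootstrap
# distribution of the mean, then take its 2.5% / 97.5% percentiles.
boot_means = [rng.choice(scores, size=len(scores), replace=True).mean()
              for _ in range(10_000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {scores.mean():.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```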

[–] UnusualClimberBear@alien.top 1 points 10 months ago

Highlight the misunderstandings and contradictions in the review. Keep it short if you have strong points in your answers. Answering every small detail often leads to an exchange of messages so long that the AC will skip them and read the paper directly.

[–] UnusualClimberBear@alien.top 1 points 10 months ago (2 children)

Do as you prefer, but don't expect too many reviewers to change their minds. In my experience it works better to write the rebuttal as something directed at the AC; extra points if you can get a reaction from the AC.