ObjectManagerManager

Controversial opinion: A proof of an ML concept is not really ML. It's (usually) mostly math. I don't think knowing how to prove that gradient descent converges under certain conditions helps you apply gradient descent in any way whatsoever. It certainly doesn't make you a better ML engineer, which is what most people learning ML are trying to become. Knowing that gradient descent converges under certain conditions is absolutely necessary, but the proof itself is not at all helpful to an ML engineer. You can build intuition for the concepts without the proofs, and I honestly don't even think the proofs are very intuitive to begin with. I say all of this as someone who has been an ML student, an ML engineer, and an ML researcher in a professional capacity.
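To make the point concrete (this is my toy sketch, not anything from a course): you can watch gradient descent converge on a simple convex function numerically, and get the intuition, without ever working through the convergence proof.

```python
# Toy illustration: gradient descent on the convex function
# f(x) = (x - 3)^2, whose minimizer is x = 3.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient; returns the final iterate."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# The gradient of f(x) = (x - 3)^2 is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # -> 3.0, numerically at the minimizer
```

With a small enough learning rate on a function like this, each step shrinks the distance to the minimizer by a constant factor, which is exactly the kind of behavior the convergence theorems formalize. Seeing it run gives you most of the working intuition.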

Here's an anecdote. I took a convex optimization course during my M.S. program. It was a 10-week course, 30 total hours of lecture, and I think we only discussed maybe 10 theorems and/or methods related to convex optimization, because we spent 3 hours proving the correctness of each one. Those 3 hours were spent doing relatively basic calculus and loads of algebra. ML engineers should obviously understand basic mathematics, but they certainly don't spend a disproportionate amount of their time doing algebra. I would much rather have glossed over the proofs, sacrificing the math lessons to actually focus on convex optimization.

Unfortunately, there's a disconnect: the teachers are often mathematicians (ML researchers, professors), while most of the students are engineers. And the teachers don't do a very good job of appealing to their audience.