this post was submitted on 19 Nov 2023

Machine Learning


I'm trying to teach a lesson on gradient descent from a more statistical and theoretical perspective, and need a good example to show its usefulness.

What is the simplest possible algebraic function that would be impossible, or at least quite difficult, to optimize by setting its first derivative to 0, but easily doable with gradient descent? I'd preferably like to demonstrate this in the context of linear regression or some other extremely simple machine learning model.

[–] lithiumdeuteride@alien.top 1 points 10 months ago (3 children)

How about this function:

f(x) = abs(x) - cos(x)

Setting the first derivative equal to zero leads to trouble: the derivative doesn't exist at the minimum, and there are infinitely many points with zero slope. Nevertheless, the function has a clear minimum, which gradient descent with any finite step size should find.
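For concreteness, here's a quick numerical sketch of subgradient descent on this f (my own example, not from the thread; the starting point x0 = 2 is deliberately chosen below the flat spots at x = 3π/2 + 2πk, where plain GD can stall):

```python
import math

def f(x):
    return abs(x) - math.cos(x)

def subgrad(x):
    # sign(x) + sin(x); at the kink x = 0 we pick 0 from the subdifferential of |x|
    s = 0.0 if x == 0 else math.copysign(1.0, x)
    return s + math.sin(x)

x, step = 2.0, 0.1
for _ in range(500):
    x -= step * subgrad(x)

# x ends up oscillating within about one step size of the minimum at x = 0
print(x, f(x))
```

Note that with a fixed step the iterates don't converge exactly; they bounce around the kink at 0, which is what the later replies pick up on.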

[–] neyman-pearson@alien.top 1 points 10 months ago (1 children)

Except you could get stuck at an arbitrary saddle point

[–] DoctorFuu@alien.top 1 points 10 months ago (1 children)

Isn't the probability of landing exactly on one of those points zero, though?

[–] jamochasnake@alien.top 1 points 10 months ago

Of course, every locally Lipschitz function is differentiable almost everywhere, i.e., the set of points where it is not differentiable has Lebesgue measure zero. So if you drew a point randomly (with a distribution absolutely continuous w.r.t. the Lebesgue measure), you would avoid such points with probability one. However, this has NOTHING to do with actually performing an algorithm like gradient descent - in these algorithms you are NOT simply drawing points randomly, and you cannot guarantee a priori that you avoid these points.
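A tiny illustration of that last point (my own sketch, not the commenter's): take f(x) = |x|, whose only nondifferentiable point is the measure-zero set {0}. The iterates are fully deterministic and can hit the kink exactly:

```python
def subgrad_abs(x):
    # a subgradient of |x|; at the kink x = 0 we (arbitrarily) pick 0
    return 0.0 if x == 0 else (1.0 if x > 0 else -1.0)

x, step = 1.0, 0.5
trajectory = [x]
for _ in range(3):
    x -= step * subgrad_abs(x)
    trajectory.append(x)

print(trajectory)  # [1.0, 0.5, 0.0, 0.0] -- lands exactly on the kink
```

Here the kink happens to be the minimizer, but the same construction works for any kink: a random initial point avoids it almost surely, yet the deterministic iterates can march straight onto it.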

[–] jamochasnake@alien.top 1 points 10 months ago

Ignoring the fact that you can't prove anything about gradient descent for nonsmooth functions (you need subgradients), a finite step size will NOT work. You can land exactly at the minimum, compute a subgradient there that is different from zero, and end up moving AWAY from the minimizer. You need to decay your step size to ensure that you end up at the minimizer (asymptotically).
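A quick numerical sketch of the fixed-vs-decaying contrast (my own example, run on f(x) = |x| rather than the thread's f):

```python
def subgrad_abs(x):
    # a subgradient of |x|; 0 at the kink
    return 0.0 if x == 0 else (1.0 if x > 0 else -1.0)

# Fixed step: the iterates end up cycling around the minimizer, never converging.
x_fixed = 2.0
for t in range(1000):
    x_fixed -= 0.3 * subgrad_abs(x_fixed)

# Decaying step (eta_t = 1/(t+1)): the oscillation amplitude shrinks with eta_t,
# so the iterates approach the minimizer asymptotically.
x_decay = 2.0
for t in range(1000):
    x_decay -= (1.0 / (t + 1)) * subgrad_abs(x_decay)

print(abs(x_fixed), abs(x_decay))  # fixed stays ~0.1-0.2 away; decaying is ~1e-3
```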

[–] yannbouteiller@alien.top 1 points 10 months ago

It does answer OP's question, but it is of limited practical relevance for an ML course IMHO.

We typically use GD because the optimization landscapes we face are approximately pseudoconvex, not because GD itself copes with saddle points, whether there is one or infinitely many. To escape local optima and saddle points, we rely on other tricks, such as the noise in SGD.
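To illustrate the SGD point (a sketch under my own assumptions, with Gaussian gradient noise standing in for minibatch noise): start f(x) = |x| - cos(x) exactly at the flat stationary point x = 3π/2, where plain GD stalls but noisy descent escapes:

```python
import math, random

def f(x):
    return abs(x) - math.cos(x)

def subgrad(x):
    s = 0.0 if x == 0 else math.copysign(1.0, x)
    return s + math.sin(x)

x0 = 3 * math.pi / 2   # f'(x0) = 1 + sin(3*pi/2) = 0: a flat stationary point

# Plain GD: the gradient is (numerically) zero here, so it stays put.
x_gd = x0
for _ in range(500):
    x_gd -= 0.1 * subgrad(x_gd)

# Noisy ("SGD-like") descent: Gaussian noise kicks it off the flat spot,
# after which the downhill drift carries it toward the global minimum at 0.
random.seed(0)
x_sgd = x0
for _ in range(500):
    x_sgd -= 0.1 * (subgrad(x_sgd) + random.gauss(0.0, 1.0))

print(x_gd, f(x_gd))    # still at the stationary point, f ~ 4.71
print(x_sgd, f(x_sgd))  # well below f(x0)
```

In 1-D these zero-slope points are flat inflection points rather than true saddles, but the mechanism is the same one people invoke for SGD escaping saddles in higher dimensions.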