this post was submitted on 09 Nov 2023
Machine Learning
Optimizers? OMG, no one has touched optimizers in decades.
We basically figured it's Adam/SGD and there hasn't really been any improvement on it.
I tried to find an improvement myself for a few months but failed miserably.
Because it's super hard to build something that works better than Adam across many tasks. There's probably no shortage of people trying to come up with something better.
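For reference, the Adam update itself is only a few lines. A minimal NumPy sketch of a single step (the `adam_step` helper and its argument names are my own, for illustration, not any library's API):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step (Kingma & Ba, 2015), written out explicitly.
    t is the 1-based step count, m and v are the running moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

The difficulty isn't the update rule itself; it's finding a different rule that beats this one reliably across architectures and datasets.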
Learned optimizers look promising: training a neural network to train other neural networks.
Unfortunately they're hard to train, and nobody has gotten them to really work yet. The two main training approaches are meta-training and reinforcement learning, but meta-training is very expensive and RL has all the usual pitfalls of RL.
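To make "meta-training is very expensive" concrete, here is a rough PyTorch sketch of the unrolled-training idea. `LearnedOptimizer` and `meta_train_step` are hypothetical names for illustration only; real learned optimizers from the literature (e.g. LSTM-based ones) are considerably more involved:

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Toy learned optimizer: an MLP maps per-parameter (gradient, momentum)
    features to a per-parameter update."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad, momentum):
        feats = torch.stack([grad, momentum], dim=-1)   # shape (n_params, 2)
        return self.net(feats).squeeze(-1)              # shape (n_params,)

def meta_train_step(learned_opt, meta_opt, task_loss_fn, theta0, unroll=10, beta=0.9):
    """One meta-training step: unroll `unroll` inner updates on a task, then
    backprop the final task loss into the learned optimizer's own weights."""
    theta = theta0.clone().requires_grad_(True)
    momentum = torch.zeros_like(theta)
    for _ in range(unroll):
        loss = task_loss_fn(theta)
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        momentum = beta * momentum + (1 - beta) * grad
        theta = theta - learned_opt(grad, momentum)     # apply the learned update rule
    meta_loss = task_loss_fn(theta)
    meta_opt.zero_grad()
    meta_loss.backward()                                # gradients flow through the whole unroll
    meta_opt.step()
    return meta_loss.item()
```

Here `meta_opt` would just be a standard optimizer (e.g. `torch.optim.Adam`) over `learned_opt.parameters()`, and the cost problem is visible in the code: the entire unrolled inner loop has to be kept in the autograd graph.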
Nothing beats AdamW + compute. Plus, with the current data-centric approach, everything kind of converges at scale.
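For completeness, the AdamW baseline the thread keeps coming back to is essentially a one-liner in PyTorch; the model and hyperparameters below are placeholders, not recommendations:

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 10)                      # stand-in model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-4,                                    # common starting point; tune per task
    betas=(0.9, 0.999),
    weight_decay=0.01,                          # decoupled weight decay (the "W" in AdamW)
)

for step in range(100):                         # placeholder training loop on random data
    x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```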
There has been LOADS of research on deep learning optimisation in recent years. However, TL;DR: nothing beats Adam.