this post was submitted on 19 Nov 2023

Machine Learning

At what point do you think there was an inflection point in the technical expertise and credentials required for mid-to-top-tier ML roles? Or was there never one? To be specific, would knowing simple scikit-learn algorithms, or the basics of decision trees/SVMs, qualify you for a full-fledged role only in the past, or does it still today? At what point did FAANGs start boldly stating in their job postings that it's preferred (read: required) to have publications at top-tier venues (ICLR, ICML, CVPR, NIPS, etc.)?
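
To be concrete, by "simple scikit-learn algorithms" I mean roughly this level of thing - a minimal, purely illustrative sketch (toy dataset and arbitrary hyperparameters, nothing more):

```python
# Rough sketch of the "sklearn basics" baseline in question:
# fit a decision tree and an SVM on a toy dataset and print accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=3), SVC(kernel="rbf", C=1.0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```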

I use the word 'creep' in the same sense that 'power creep' is used in battle anime, where the scale of power slowly grows so irrationally large that anything from the past looks extremely weak.

Back in late 2016 I landed my first ML role at a defense firm (lol), but to be fair I had just watched a couple of ML courses on YouTube, taken maybe 2 ML grad courses, and had an incomplete working knowledge of CNNs. I had never used TensorFlow, and had some experience with Theano, which I'm not sure even exists anymore.

I'm certain that skill set would be insufficient in the 2023 ML industry. But it begs the question: is this skill creep making the job market impenetrable, even for folks who have already been working in it since 2012-2014?

Neural architectures are becoming increasingly complex. You want to develop a multi-modal architecture for an embodied agent? Well, you'd better know a good mix of DL spanning RL + CV + NLP. Improving latency on edge devices? How well do you know your ONNX/TensorRT/CUDA kernels? Your classes likely didn't even teach you those. A master's is the new bachelor's degree, and that's just to give you a fighting chance.
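
(To be concrete about the edge-latency point, the entry ticket is pipelines along these lines - a minimal sketch assuming a stock torchvision model, with the model choice, input shape, file name, and opset purely illustrative; TensorRT or ONNX Runtime would consume the exported file afterwards.)

```python
# Minimal sketch: export a PyTorch model to ONNX as the first step of an
# edge-latency pipeline. Model, input shape, and opset are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # single 224x224 RGB image

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
```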

Yeah, not sure if it started after the release of AlexNet in 2012, TensorFlow in 2015, attention/Transformers in 2017, or now ChatGPT - but the skill creep is definitely ratcheting up the technical rigor expected in the field, faster and faster. Close your eyes for 2 years and your models feel prehistoric, and your CUDA, PyTorch, Nvidia driver, and NumPy versions need a fat upgrade.

Thoughts, y'all?

[–] Tasty-Rent7138@alien.top 1 points 1 year ago (1 children)

Also, not every domain needs deep learning at all. With tabular data, GBM is still king (I'm happy for every example where DL outperforms GBM on tabular data, as I'd also be happy to use more DL, but I can't just use more complex architectures for the sake of complexity). Also, not every company has the infinite data that data-hungry DL models need. So there are situations where a Transformer architecture would be feasible, but with the available data it's still going to be GBM. There are still data scientist positions out there where you can have a big impact on the business as a 'sklearn kiddie' (okay, maybe an xgboost or lightgbm kiddie), and knowing all the DL architectures wouldn't help you any more.
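
To illustrate what that 'xgboost/lightgbm kiddie' workflow looks like, here's a minimal sketch - synthetic tabular data and arbitrary hyperparameters, using sklearn's built-in histogram-based GBM, though LGBMClassifier or XGBClassifier would slot in the same way:

```python
# Sketch of a typical tabular GBM baseline. Data and hyperparameters are
# illustrative placeholders, not a tuned model.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=30,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

gbm = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.1,
                                     random_state=0)
gbm.fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1]))
```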

In the end it should be all about business impact (if you are working in the competitive sector) and not who is using the latest, freshest architectures.

[–] progressgang@alien.top 1 points 1 year ago

That’s the thing (across all sectors) that people don’t seem to understand. If it makes them money, or keeps the board happy (often intertwined), then they’re doing a fantastic job.

Hence the highly paid LLM stuff. It’s super impactful, and the CEO can’t do it.