At what point do you think there was an inflection point in the technical expertise and credentials required for mid-to-top-tier ML roles? Or was there never one? To be specific, would knowing simple scikit-learn algorithms, or the basics of decision trees/SVMs, have qualified you for full-fledged roles only in the past, or would it still today? At what point did FAANGs boldly start stating "preferred (required) to have publications at top-tier venues (ICLR, ICML, CVPR, NIPS, etc.)" in their job postings?

I use the word 'creep' in the same sense 'power creep' is used for battle anime, where power levels slowly inflate to such an irrationally large scale that anything from the past looks extremely weak.

Back in late 2016 I landed my first ML role at a defense firm (lol), but to be fair I had just watched a couple of ML courses on YouTube, taken maybe 2 ML grad courses, and had an incomplete working knowledge of CNNs. I had never used TensorFlow, and my only framework experience was with Theano, which I'm not sure even exists anymore.

I'm certain that skill set would be insufficient in the 2023 ML industry. But it raises the question: is this skill creep making the job market impenetrable for folks who were already working in the field post 2012-2014?

Neural architectures are becoming increasingly complex. You want to develop a multi-modal architecture for an embodied agent? Well, you'd better know a good mix of DL spanning RL + CV + NLP. Improving latency on edge devices? How well do you know your ONNX/TensorRT/CUDA kernels? Your classes likely didn't even teach you those (a rough sketch of the ONNX export step is below). A master's is the new bachelor's degree, and that's just to give you a fighting chance.
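For concreteness, here's roughly what exporting a PyTorch model to ONNX looks like before handing it to an optimized runtime like ONNX Runtime or TensorRT. This is a minimal sketch; the MobileNetV2 model, input shape, and opset version are illustrative assumptions, not a recipe for any particular deployment:

```python
import torch
import torchvision

# Minimal sketch: export a small CNN to ONNX so an optimized runtime
# (ONNX Runtime, TensorRT, etc.) can run it on an edge device.
# Model choice and input shape here are illustrative assumptions.
model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size at inference
    opset_version=17,
)
```

And that's just the easy part; the actual edge-latency work (quantization, operator fusion, profiling on the target hardware) is exactly the stuff no course teaches you.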

Yeah, not sure if it was after the release of AlexNet in 2012, TensorFlow in 2015, attention/Transformers in 2017, or now ChatGPT, but the skill creep is definitely raising the bar of technical rigor in the field at an accelerating pace. Close your eyes for 2 years and your models feel prehistoric, and your CUDA, PyTorch, NVIDIA driver, and NumPy versions all need a fat upgrade.

Thoughts, y'all?

[–] ProfessionalGoogler@alien.top 1 points 10 months ago

I agree with what most people have said, but it will definitely vary from company to company and even between interviewers.

My personal preference when interviewing candidates has mainly been to focus on someone's thought process for tackling a problem they aren't familiar with. There's such a broad array of people applying for roles that 9 times out of 10 you'll find candidates who can't explain what the feature creation or feature selection process looks like, because in most courses and toy datasets these things are done for you (a sketch of what an explicit selection step looks like is below).
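As a hedged illustration only, here is roughly what a bare-bones feature selection step might look like when you have to do it yourself. The synthetic data, the mutual-information scorer, and k=10 are illustrative assumptions, not a recommendation:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in data; on a real problem these columns would come
# out of your own feature-creation step, not a generator.
X, y = make_classification(n_samples=5_000, n_features=40, n_informative=8, random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),  # keep the 10 highest-scoring features
    ("clf", DecisionTreeClassifier(max_depth=5, random_state=0)),
])
pipeline.fit(X, y)
print(pipeline.score(X, y))  # training accuracy of the reduced-feature model
```

The code itself is trivial; what I'm listening for in an interview is whether someone can explain why they'd pick that scorer and that k.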

Many people don't even think about the implications of their answers. You'd be shocked at the number of people who say they would do a grid search to find the optimal hyperparameters on a dataset with millions of rows (a cheaper, budgeted alternative is sketched below).
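For contrast, a minimal sketch of the kind of budgeted search I'd rather hear about. The dataset size, model, and parameter range are purely illustrative assumptions; the point is that the compute budget is fixed by n_iter instead of exploding with the grid:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical stand-in for a large tabular dataset.
X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)

# Sample a fixed number of candidates instead of exhausting a grid.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e2)},
    n_iter=20,       # total fits = n_iter * cv folds, known up front
    cv=3,
    n_jobs=-1,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Subsampling the rows for the search, or an early-stopping scheme like successive halving, are the other obvious answers; an exhaustive grid over millions of rows rarely is.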

I'd also say that in many interviews now the company is setting you up to fail rather than navigating the interview with you to show how you might be useful to the organisation. I would use that to judge whether the company is a good fit. If a company has 6 interview stages, they don't know what they want. If a company has 1 interview stage, they probably aren't being rigorous enough (you need to find out both personal fit and technical fit). If a company makes you solve random LeetCode problems, or explain the architecture of an RNN, when in reality all they do day to day is use scikit-learn, is that interview really fit for purpose?