Machine Learning · submitted 30 Oct 2023
So!

I am an ML newbie and was wondering if one of you pros can help me out on my learning journey (I'm working in Google Colab).

I have a CSV file containing loan data where each row is a customer who applied for a loan. One of the columns is called TARGET and shows whether the customer's loan request was approved or not. All sorts of data points are captured, e.g. age, gender, salary, and employment details like industry, assets, etc.

I've done cross-validation and found that GradientBoostingClassifier and LGBM perform best; cross-validation puts their accuracy between 68% and 70%.
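For reference, here is roughly the setup I'm running (a minimal sketch; the file path is a placeholder and it assumes the non-TARGET columns are already numeric/encoded):

```python
# Rough cross-validation setup: score GradientBoostingClassifier and
# LGBMClassifier on the loan data. Assumes the CSV has been downloaded
# locally and that all feature columns are already numeric/encoded.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from lightgbm import LGBMClassifier

df = pd.read_csv("loan_data.csv")        # placeholder path
X = df.drop(columns=["TARGET"])
y = df["TARGET"]

for name, model in [("GradientBoosting", GradientBoostingClassifier()),
                    ("LGBM", LGBMClassifier())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```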

My problem is that I SUCK at hyperparameter optimisation. How do you go from 68% to 80%+??? Or 90%?

For the curious ones, here is the dataset: https://drive.google.com/file/d/1IKNVstck6gnXvfGS-mVRMAE1RFrDNUgZ/view?usp=sharing

kduyehj@alien.top · 10 months ago

It’s wise to get a feel for the distribution of the data, so plot feature against feature to identify which features should contribute to separability. Classification demands separable data. Consider testing a one-dimensional feature set, since that should be your worst case: if a single feature is no better than a random choice, you can’t expect it to help much when combined with more features.
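Something like this, for example (a rough sketch; AGE and SALARY are placeholder column names, and it assumes your CSV loaded into a DataFrame):

```python
# Sketch: eyeball pairwise separability, then score a single feature
# against a random-choice baseline. Column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

df = pd.read_csv("loan_data.csv")        # placeholder path
y = df["TARGET"]

# Feature-vs-feature scatter, coloured by TARGET, to judge separability.
plt.scatter(df["AGE"], df["SALARY"], c=y, s=5, alpha=0.5)
plt.xlabel("AGE")
plt.ylabel("SALARY")
plt.show()

# One-dimensional "worst case": a single feature vs a random baseline.
one_feature = df[["SALARY"]]
baseline = cross_val_score(DummyClassifier(strategy="stratified"),
                           one_feature, y, cv=5).mean()
single = cross_val_score(LGBMClassifier(), one_feature, y, cv=5).mean()
print(f"random baseline: {baseline:.3f}, single feature: {single:.3f}")
```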

You might or might not benefit from using a kernel, some non-linear transformations, and feature engineering before concentrating on hyperparameters.
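For instance, a kernel approximation feeding a linear model, plus one engineered ratio feature (a sketch only; CREDIT_AMOUNT and INCOME are placeholder column names and the features are assumed numeric):

```python
# Sketch: RBF kernel approximation (Nystroem) + logistic regression,
# with one engineered ratio feature. Column names are placeholders.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loan_data.csv")        # placeholder path
X = df.drop(columns=["TARGET"])
y = df["TARGET"]

# Feature engineering example: loan amount relative to income.
X = X.assign(CREDIT_TO_INCOME=df["CREDIT_AMOUNT"] / df["INCOME"])

kernel_model = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", n_components=300, random_state=0),
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(kernel_model, X, y, cv=5, scoring="accuracy").mean())
```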

Sometimes removing a feature improves the result. Make sure you normalise when features have radically different scales.
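A quick way to check both (a sketch; again assumes numeric features):

```python
# Sketch: normalise inside the CV pipeline so scaling is learned only on
# training folds, and drop one feature at a time to see if removal helps.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loan_data.csv")        # placeholder path
X = df.drop(columns=["TARGET"])
y = df["TARGET"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
full_score = cross_val_score(model, X, y, cv=5).mean()

for col in X.columns:
    score = cross_val_score(model, X.drop(columns=[col]), y, cv=5).mean()
    if score > full_score:
        print(f"dropping {col}: {score:.3f} (all features: {full_score:.3f})")
```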

And when you go for it on the whole dataset, resist the urge to peek at the data you hold out for testing. Ultimately you are looking for a model that is not overfitting and has a chance to perform on unseen data (of the same type).
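In practice that means splitting off a hold-out set before any tuning, running the hyperparameter search with CV on the training part only, and touching the hold-out set exactly once at the end (a sketch with a few common LGBM parameters; the ranges are only examples):

```python
# Sketch: hold-out split first, randomised hyperparameter search with CV
# on the training data only, and a single final evaluation on the hold-out.
import pandas as pd
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from lightgbm import LGBMClassifier

df = pd.read_csv("loan_data.csv")        # placeholder path
X = df.drop(columns=["TARGET"])
y = df["TARGET"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

search = RandomizedSearchCV(
    LGBMClassifier(),
    param_distributions={
        "num_leaves": [15, 31, 63, 127],
        "learning_rate": [0.01, 0.05, 0.1],
        "n_estimators": [100, 300, 600],
        "min_child_samples": [10, 20, 50],
    },
    n_iter=20, cv=5, scoring="accuracy", random_state=42)
search.fit(X_train, y_train)

print("best CV accuracy :", search.best_score_)
print("hold-out accuracy:", search.score(X_test, y_test))   # evaluated once
```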