kduyehj

joined 1 year ago
[–] kduyehj@alien.top 1 points 1 year ago

Maybe the question is about CUDA or similar.

[–] kduyehj@alien.top 1 points 1 year ago

If the data potentially has, say, 50 clusters but you ask k-means for 40, you will get 40, and 1 to 10 of those could lend themselves to finding sub-clusters. The majority of the 40 clusters won’t exhibit a WCSS curve with a knee, so you can conclude they are “good” clusters. (There’s a bit more to it than that, by the way, but this is part of the idea.) In the lucky case this could be 39 good clusters, with the remaining one mixed up with things that don’t fit well; maybe these are outliers or poorly represented in the input space. Or you might get up to 5 “nearly good” clusters where each has two sub-clusters.

Of course, if your input data only has, say, 20 clusters by whatever definition, then asking for 40 will incorrectly split some of the data. This is why I then applied some de-duplication.

You’d need to understand the distribution of your data and apply techniques that suit.

I’m not saying this approach is a general solution; it’s just an idea that worked out for me in my case. All I needed was a single representative from each cluster, and it didn’t matter much if two or more of those should have been treated the same.
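Picking the representatives is simple once the clusters exist. A minimal sketch, assuming scikit-learn’s KMeans, a numeric feature matrix X, and a purely distance-based de-duplication (in practice I keyed the de-duplication on heavier tokenisation of the underlying items):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

# First-tier clustering; k=40 was my choice and is a hyper-parameter.
km = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X)

# One representative per cluster: the member closest to its centroid.
rep_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)

# Crude de-duplication across representatives: drop near-identical rows.
# (I actually keyed this on heavier tokenisation of the underlying items.)
kept = []
for i in rep_idx:
    if all(np.linalg.norm(X[i] - X[j]) > 1e-6 for j in kept):
        kept.append(i)
representatives = X[kept]
```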

In my case, the initial k=40 is a hyper-parameter, as is the choice to search for up to 8 sub-clusters.

The graphs and analysis of the 2nd tier WCSS data give a reasonable measure of performance.

[–] kduyehj@alien.top 1 points 1 year ago (2 children)

I had this issue. I tried hierarchical methods but fell back to k-means using a two-tier method. The first tier hunts for K=40 (the data likely had more). Then, for each resulting cluster, I applied k-means again from k=1 to k=8 and, using some analytics techniques on the WCSS curve, decided whether there was a decent knee and chose the appropriate k for each sub-cluster.

There are some complications with this method: the WCSS curve might be close to a straight line, in which case you conclude there are no sub-clusters, or it might not be monotonically decreasing, in which case you might not have enough members in the cluster. As always, it depends on your data, the way you choose features, and how it’s embedded or tokenised. The final layer did some heavier tokenisation followed by de-duplication across all sub-clusters.
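For what it’s worth, a rough sketch of the two-tier idea, assuming scikit-learn and a numeric feature matrix X; the knee test below is a simple second-difference heuristic standing in for the analytics I actually used:

```python
import numpy as np
from sklearn.cluster import KMeans

def wcss_curve(X, k_max=8):
    """Within-cluster sum of squares (inertia) for k = 1..k_max."""
    ks = range(1, min(k_max, len(X)) + 1)
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

def pick_k_by_knee(wcss, min_drop=0.1):
    """Simple knee heuristic: largest second difference, provided the curve
    is monotonically decreasing and not close to a straight line."""
    wcss = np.asarray(wcss, dtype=float)
    if len(wcss) < 3 or np.any(np.diff(wcss) > 0):
        return 1                                  # too few members or not monotonic: no split
    if (wcss[0] - wcss[-1]) / wcss[0] < min_drop:
        return 1                                  # near-linear curve: no sub-clusters
    return int(np.argmax(np.diff(wcss, 2))) + 2   # +2 maps the index back to a k value

# First tier: deliberately ask for K=40.
first = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X)

# Second tier: look for up to 8 sub-clusters inside each first-tier cluster.
sub_labels = {}
for c in range(40):
    members = X[first.labels_ == c]
    k_sub = pick_k_by_knee(wcss_curve(members, k_max=8))
    sub_labels[c] = KMeans(n_clusters=k_sub, n_init=10, random_state=0).fit_predict(members)
```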

I don’t know how well the above description helps, but you might get some ideas.

[–] kduyehj@alien.top 1 points 1 year ago

It’s wise to get a feel for the distribution of the data, so plot feature against feature to try to identify what should contribute to separability. Classification demands separable data. Consider testing a one-dimensional feature set, because that should be your worst case. If a single feature is no better than a random choice, then you can’t expect it to help when combined with more features.
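A quick way to run that worst-case check, assuming scikit-learn, a numeric feature matrix X and labels y (logistic regression is just an arbitrary simple classifier here):

```python
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Score each feature on its own; if it can't beat a random baseline,
# it is unlikely to add much in combination either.
baseline = cross_val_score(DummyClassifier(strategy="stratified"), X, y, cv=5).mean()
for j in range(X.shape[1]):
    score = cross_val_score(LogisticRegression(max_iter=1000), X[:, [j]], y, cv=5).mean()
    print(f"feature {j}: {score:.3f} (baseline {baseline:.3f})")
```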

You might or might not benefit from using a kernel, some non-linear transformations, or feature engineering before concentrating on hyper-parameters.

Sometimes removing a feature improves the result. Make sure you normalise when features are on radically different scales.
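Something like this covers both points, assuming scikit-learn and numeric X, y (the RBF SVM stands in for whichever kernel or non-linear transform suits your data):

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Normalise features that sit on very different scales, then let an RBF kernel
# handle the non-linearity; keep a linear kernel as a control.
for kernel in ("linear", "rbf"):
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    print(kernel, cross_val_score(model, X, y, cv=5).mean())
```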

And when you go for it on the whole dataset, resist the urge to peek at the data. Ultimately you are looking for an algorithm that is not overfitting and has a chance to perform on unseen data (of the same type).
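In practice that means something like locking a test split away and scoring it only once; a sketch assuming scikit-learn (the SVM pipeline and parameter grid are just placeholders):

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hold the test set back; tune hyper-parameters only with cross-validation
# on the training split, then score the chosen model exactly once.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    cv=5,
)
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```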
