1k features are a lot, but not really A LOT. Also, you didn't mention how many samples you have. Without any other knowledge, off the top of my head I would try to fit a self-organizing map and then use it as an "index": retrieve the samples most similar to the query and finish with an exact kNN only on those.
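A minimal sketch of that idea, assuming the `minisom` package and hypothetical arrays `X` (samples × features) and `query` (a single point); the grid size and iteration count are placeholders, not tuned values:

```python
import numpy as np
from collections import defaultdict
from minisom import MiniSom

def build_som_index(X, grid=(10, 10), iters=5000):
    # Fit a SOM on the raw features and bucket each sample
    # under its best-matching unit (BMU).
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X, iters)
    buckets = defaultdict(list)
    for i, x in enumerate(X):
        buckets[som.winner(x)].append(i)
    return som, buckets

def knn_via_som(som, buckets, X, query, k=5):
    # Candidates = samples that mapped to the query's BMU cell;
    # fall back to brute force if the cell holds fewer than k points.
    cand = np.array(buckets.get(som.winner(query), []))
    if len(cand) < k:
        cand = np.arange(len(X))
    dists = np.linalg.norm(X[cand] - query, axis=1)  # exact distances on candidates only
    order = np.argsort(dists)[:k]
    return cand[order], dists[order]
```

In practice you would probably also pull candidates from the neighboring SOM cells, since the true nearest neighbors can land just across a cell boundary.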
My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNN works, what I can do with it, and which variant works best in high-dimensional space...
SOMs are not like the neural network predictors you would see around here, in the sense that they do not learn new feature spaces. It would have been the same if I had suggested using k-means to reduce the search space and then doing kNN.
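For instance, the k-means version of the same trick could look roughly like this (a sketch assuming scikit-learn and the same hypothetical `X` / `query` arrays; the cluster count is a placeholder):

```python
import numpy as np
from sklearn.cluster import KMeans

km = KMeans(n_clusters=20, n_init=10).fit(X)            # coarse partition of the dataset
cluster = km.predict(query.reshape(1, -1))[0]           # cluster the query falls into
candidates = np.where(km.labels_ == cluster)[0]         # restrict the search to that cluster
dists = np.linalg.norm(X[candidates] - query, axis=1)   # exact distances on candidates only
neighbors = candidates[np.argsort(dists)[:5]]           # indices of the 5 nearest neighbors
```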