Have you tried to simply use highly optimized brute force search?
You can optimize with early abandoning and reordering etc.
It is surprising how competitive that can be.
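The early-abandoning idea can be sketched like this (illustrative Python, function name is my own): keep the current k-th best squared distance as a threshold, and stop accumulating a candidate's distance as soon as it exceeds that threshold. "Reordering" means summing the dimensions in an order that tends to trigger the abandon sooner, e.g. highest-variance dimensions first.

```python
def knn_early_abandon(data, query, k):
    """Brute-force kNN with early abandoning: give up on a candidate
    as soon as its partial squared distance exceeds the k-th best.
    Returns a sorted list of (squared_distance, index) pairs."""
    best = []                    # up to k (dist2, index) pairs, kept sorted
    threshold = float("inf")     # current k-th best squared distance
    for i, point in enumerate(data):
        dist2 = 0.0
        for a, b in zip(point, query):
            dist2 += (a - b) ** 2
            if dist2 > threshold:    # early abandon: cannot be a top-k hit
                break
        else:
            best.append((dist2, i))
            best.sort()
            best = best[:k]
            if len(best) == k:
                threshold = best[-1][0]
    return best
```

In the worst case this is still O(n·d), but on real data the inner loop often breaks after a handful of dimensions once the threshold is tight.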
1k features is a lot, but not really A LOT. Also, you didn't mention how many samples you have. Without any other knowledge, off the top of my head I would try to fit a self-organizing map and then use it as an "index" to retrieve the samples most similar to the query, finishing with a kNN only on those.
My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNNs work, what I can do with them, what's best amongst them in high dimensional space...
SOMs are not like the neural network predictors you usually see around here, in the sense that they do not learn new feature spaces. It would have been the same if I had suggested using k-means to reduce the search space and then running kNN.
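The k-means variant of that idea could look roughly like this (a hedged NumPy sketch, not anyone's actual code from the thread): cluster the data once, then at query time search only the few clusters whose centroids are closest to the query. Note that this makes the search approximate, since a true neighbor can sit in an unprobed cluster.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, n_clusters, n_iter=20):
    """Plain Lloyd's algorithm (illustrative, not production-grade)."""
    centroids = data[rng.choice(len(data), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for c in range(n_clusters):
            members = data[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, labels

def knn_via_clusters(data, labels, centroids, query, k, n_probe=2):
    """Exact kNN restricted to the n_probe clusters whose centroids
    are closest to the query (approximate overall)."""
    probe = ((centroids - query) ** 2).sum(-1).argsort()[:n_probe]
    candidates = np.flatnonzero(np.isin(labels, probe))
    d2 = ((data[candidates] - query) ** 2).sum(-1)
    return candidates[d2.argsort()[:k]]
```

Increasing n_probe trades speed back for recall; with n_probe equal to the number of clusters it degenerates to exact brute force.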
You don't say how many items are in your dataset or if this is something you need to do a lot or only a few times. Despite the horrible scaling, you can get a surprisingly long way by just doing a brute-force search, should you have a beefy enough CPU with lots of threads.
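For reference, a vectorized brute-force search is only a few lines of NumPy (illustrative sketch; assumes k is smaller than the number of points), using the expansion ||a-b||^2 = ||a||^2 - 2*a.b + ||b||^2 so all pairwise distances come from a single matrix multiply:

```python
import numpy as np

def brute_knn(data, queries, k):
    """Exact kNN for all queries at once via one big matrix multiply.
    Returns an array of shape (n_queries, k) of neighbor indices,
    sorted by increasing distance. Requires k < len(data)."""
    d2 = (
        (queries ** 2).sum(1)[:, None]
        - 2.0 * queries @ data.T
        + (data ** 2).sum(1)[None, :]
    )
    idx = np.argpartition(d2, k, axis=1)[:, :k]   # k smallest, unordered
    row = np.arange(len(queries))[:, None]
    order = np.argsort(d2[row, idx], axis=1)      # order just those k
    return idx[row, order]
```

Since the heavy lifting is one GEMM, this automatically uses all your cores through whatever BLAS NumPy is linked against.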
If you have access to a GPU, then look at Faiss. On my now-puny 1080 laptop GPU it does a fine job for my purposes, although admittedly it does run out of memory on very large datasets (but that's almost certainly just me not having yet paid enough attention to its API on how to deal with that).
Finally, you say you are not looking for approximate neighbors using LSH, but the difficulty you are encountering is the reason why people use approximate nearest neighbors. Moreover, LSH is not the only or even the best choice for approximate nearest neighbor search. I have happily used PyNNDescent and HNSW. Annoy will also work fine with 150 dimensions and maybe the full 1000 dimensions. Voyager seems interesting but I haven't spent any time with it.
My dataset is about 8000 points, and the reason I am not using ANN is that I am trying to study and experiment with how exact kNNs work, what I can do with them, what's best amongst them in high dimensional space...
I understand your investigative spirit but I think you are going to discover that all methods of finding exact kNNs are different flavors of "slow" at high dimensions. There might be some exotic variations that can shave off some of the constant factor but they are usually restricted to Euclidean or inner-product-like distances (although that's usually what people want). FWIW 8000 points doesn't seem like a huge dataset.
Here's some R code and timings using the FNN package which has a few options:
data8k <- matrix(rnorm(n=8000 * 1000), nrow = 8000)
system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "brute"))
user system elapsed
70.19 0.06 75.87
system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "kd_tree"))
user system elapsed
78.51 0.14 85.08
system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "cover_tree"))
user system elapsed
129.52 0.14 134.01
system.time(knn <- FNN::get.knn(data8k, k = 30, algorithm = "CR"))
user system elapsed
70.41 0.08 74.40
That's single-threaded on my very-much no-longer-impressive laptop. The fact that brute force search does so well in comparison to the others suggests that there aren't good options for your data.