this post was submitted on 26 Nov 2023

Machine Learning

Hello.

I am taking the GAN specialization course on Coursera, taught by Sharon Zhou.

In one of the lectures, she says that a disadvantage of GANs is that they cannot do density estimation and thus are not useful for anomaly detection (not sure if you can access the lecture).

In the next video, she says that VAEs don't have this problem.

I am a little confused about this. Could anyone please explain what she means?

As far as I can understand from the lecture, density estimation means learning how probable/frequent particular features are in a dataset. Like, how probable is it that a dog will have droopy ears. Then, we can use this info to detect anomalies if they do not exhibit these features.

But isn't this exactly what GANs learn? Aren't GANs learning to mimic the distribution of the training data?

Also, how is a VAE different in this particular regard?

Could someone please help explain this?

Thank you.

[–] sitmo@alien.top 1 points 9 months ago (1 children)

GANs indeed learn to generate samples from the data distribution.

VAEs learn how to encode samples to the parameters (mean, variance) of a latent distribution. The VAE decoder then maps samples from that latent distribution back to input samples. It's basically an autoencoder that tries to do "input -> code -> reconstructed input", but with the code being a compact probability distribution instead of a point.
You can use a VAE as an outlier detector by looking at the reconstruction error. If you have e.g. trained a VAE on cat images, then it will output cat images: you can generate random code samples, run those through the decoder step, and you'll get random cat pictures. If you feed it a cat picture, encode it, and then decode it, you get something similar to your original input cat image out again. This is because it is an autoencoder, so the reconstruction error is small in this case. If you instead feed it a dog image, the encoder will try to map it to a cat-code, but the decoder will still always generate a cat image. In this case the input dog image and the output cat image will have a larger distance / reconstruction error.
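Here's a toy, runnable sketch of that reconstruction-error idea. To keep it short I use a linear autoencoder (PCA via SVD) in numpy as a stand-in for a trained VAE; the scoring logic (encode -> decode -> measure error) is the same, only the encoder/decoder are much simpler than a real VAE's networks:

```python
import numpy as np

# Stand-in for a trained VAE: a linear autoencoder fitted on "normal" data
# that lies near a 1-D line inside 2-D space. A real VAE would replace
# encode/decode with its trained networks.
rng = np.random.default_rng(0)

# "Cat" training data: points along the direction (1, 1) plus small noise.
train = rng.normal(size=(500, 1)) @ np.array([[1.0, 1.0]]) \
        + 0.05 * rng.normal(size=(500, 2))

mean = train.mean(axis=0)
# Principal direction of the centred data = the 1-D "code" axis.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
direction = vt[0]

def encode(x):
    return (x - mean) @ direction          # project onto the latent axis

def decode(z):
    return mean + np.outer(z, direction)   # map the code back to input space

def reconstruction_error(x):
    return np.linalg.norm(x - decode(encode(x)), axis=-1)

in_dist = np.array([[2.0, 2.0]])    # looks like the training data ("cat")
outlier = np.array([[2.0, -2.0]])   # off the training manifold ("dog")

print(reconstruction_error(in_dist))   # small
print(reconstruction_error(outlier))   # large
```

Anything off the "manifold" the autoencoder learned gets pulled back onto it by decode(encode(x)), so its reconstruction error is large and you can flag it with a threshold.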
There is yet another type of generative model called "flow models" that explicitly models the data density. Flow models use invertible functions and allow you to evaluate the pdf directly, whereas a VAE only tells you how well it can auto-encode a sample, and it is trained to do that well (only) for samples from the training set.
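To make the flow idea concrete, here is a minimal 1-D example of the change-of-variables formula they rely on. The "flow" is just a fixed affine map x = s*z + m with z ~ N(0, 1); real flows (RealNVP, Glow, etc.) stack many learned invertible layers, but the density computation has the same shape:

```python
import numpy as np

# Invertible map x = s*z + m with standard-normal base distribution.
s, m = 2.0, 1.0

def f_inverse(x):
    return (x - m) / s                  # map data back to the base space

def log_prob(x):
    z = f_inverse(x)
    base_logp = -0.5 * (z**2 + np.log(2 * np.pi))  # log N(z; 0, 1)
    log_det = -np.log(abs(s))           # log |d f^{-1} / dx|
    return base_logp + log_det          # change-of-variables formula

# log_prob is an exact log-density, so it can directly serve as an
# anomaly score: high near the mode, very low far out in the tails.
print(log_prob(1.0))    # near the mode: relatively high
print(log_prob(10.0))   # far in the tail: very low
```

This exact log-density is what GANs lack and what VAEs only bound (via the ELBO); flows give it to you directly, at the cost of restricting the architecture to invertible layers.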

[–] racc15@alien.top 1 points 9 months ago (2 children)

thank you for the detailed answer!

"GANs indeed learn to generate samples from the data distribution."

So GANs do have estimation capabilities? Can I use the trained discriminator to detect anomalous images? I guess the discriminator should mark them as "fake" due to not being prevalent in the dataset?

[–] Real_Revenue_4741@alien.top 1 points 9 months ago

GANs can generate samples from the data distribution, but not estimate them.

[–] sitmo@alien.top 1 points 9 months ago

Yes, indeed, good point. You can also use the discriminator of a GAN for anomaly detection.
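A hedged sketch of that discriminator-based scoring. The `discriminator` here is a hypothetical stand-in (a smooth function that is high near the training data, centred at 0, and low elsewhere); in practice it would be the GAN's trained network. Note the caveat: the discriminator is trained against the *generator's* fakes, not against arbitrary outliers, so its score is rougher than a true density estimate:

```python
import numpy as np

# Hypothetical stand-in for a trained discriminator D(x) in [0, 1]:
# high "realness" near the training data, low far away.
def discriminator(x):
    return np.exp(-0.5 * x**2)

def is_anomalous(x, threshold=0.1):
    # Low discriminator score -> flag as anomaly. The threshold is a
    # made-up hyperparameter you would tune on validation data.
    return discriminator(x) < threshold

print(is_anomalous(0.2))    # in-distribution -> False
print(is_anomalous(5.0))    # far from training data -> True
```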