this post was submitted on 26 Nov 2023

Machine Learning

Hello.

I am taking the GAN Specialization course on Coursera, taught by Sharon Zhou.

In one of the lectures, she says that a disadvantage of GANs is that they cannot do density estimation and thus are not useful for anomaly detection.

In the next video, she says that VAEs don't have this problem.

I am a little confused about this. Could anyone please explain what she means?

As far as I can understand from the lecture, density estimation means learning how probable/frequent particular features are in a dataset. Like, how probable is it that a dog will have droopy ears. Then, we can use this info to detect anomalies if they do not exhibit these features.

But isn't this exactly what GANs learn? Aren't GANs learning to mimic the distribution of the training data?

Also, how is a VAE different in this particular regard?

Could someone please help explain this?

Thank you.

[–] sitmo@alien.top 1 points 11 months ago (1 children)

GANs indeed learn to generate samples from the data distribution.

VAEs learn to encode samples to the parameters (mean, variance) of a latent distribution. The VAE decoder then maps samples from that latent distribution back to input samples. It's basically an autoencoder that does "input -> code -> reconstructed input", but with the code being a compact probability distribution instead of a point.

You can use a VAE as an outlier detector by looking at the reconstruction error. If you have, e.g., trained a VAE on cat images, then it will output cat images. You can generate random code samples and run those through the decoder step, and you'll get random cat pictures. If you feed it a cat picture, encode it, and then decode it, you get something similar to your original input cat image back, because it is an autoencoder; the reconstruction error is small in this case. If you instead feed it a dog image, the encoder will try to map it to a cat code, but the decoder will still always generate a cat image. In this case the input dog image and the output cat image will have a larger distance / reconstruction error.

There is yet another type of generative model, called "flow models", that explicitly models the data density. Flow models use invertible functions and let you evaluate the pdf directly, whereas VAEs only tell you how well they can auto-encode a sample, and they are trained to do that well (only) for samples from the training set.
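The reconstruction-error idea can be sketched in a few lines. This is a toy stand-in, not a real VAE: instead of a trained encoder/decoder network, it uses PCA (via SVD) as a linear autoencoder, which is enough to show the scoring logic of "encode, decode, measure the gap":

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": points near a 1-D subspace of R^3
# (standing in for cat images lying near the learned manifold).
train = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0, 3.0]]) \
        + 0.01 * rng.normal(size=(500, 3))

# "Train" a linear autoencoder: PCA via SVD keeps the top component.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:1]  # encoder/decoder weights (1-D latent code)

def reconstruction_error(x):
    """Encode to the latent space, decode back, measure the gap."""
    code = (x - mean) @ components.T   # encode
    recon = code @ components + mean   # decode
    return float(np.linalg.norm(x - recon))

inlier = np.array([1.0, 2.0, 3.0])    # lies on the training subspace ("cat")
outlier = np.array([3.0, -1.0, 0.5])  # far from it ("dog")

assert reconstruction_error(inlier) < reconstruction_error(outlier)
```

A real VAE would replace the linear encode/decode with neural networks and a sampled latent code, but the anomaly score is the same reconstruction distance.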

[–] racc15@alien.top 1 points 11 months ago (2 children)

thank you for the detailed answer!

"GANs indeed learn to generate samples from the data distribution."

So GANs do have estimation capabilities? Can I use the trained discriminator to detect anomalous images? I guess the discriminator should mark them as "fake" due to not being prevalent in the dataset?

[–] sitmo@alien.top 1 points 11 months ago

Yes, indeed, good point. You can also use the discriminator of a GAN for anomaly detection.
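The usage pattern being suggested is just thresholding the discriminator's "realness" score. A minimal sketch, with a hypothetical stand-in for a trained discriminator (a real one would be a neural net; here it just scores points by distance from the training mode):

```python
import numpy as np

def discriminator(x):
    # Hypothetical trained discriminator: returns an estimate of
    # P(x is real). This fake one is high near the data (the origin)
    # and low far away, purely for illustration.
    return float(np.exp(-np.linalg.norm(x)))

def is_anomalous(x, threshold=0.1):
    # Flag inputs the discriminator considers unlikely to be "real".
    return discriminator(x) < threshold

assert not is_anomalous(np.array([0.1, 0.2]))  # near the data
assert is_anomalous(np.array([5.0, 5.0]))      # far from the data
```

(As gwern notes further down the thread, real discriminators often do not behave this cleanly, so treat this as the idea rather than a recipe.)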

[–] Real_Revenue_4741@alien.top 1 points 11 months ago

GANs can generate samples from the data distribution, but they cannot estimate its density.

[–] gwern@alien.top 1 points 11 months ago (1 children)

GANs learn to generate samples in similar ratios as the original data: if there's 10% dogs, there will be 10% dogs in the samples. But they don't work backwards from a dog image to 10%, you might say - they are 'likelihood-free'. They just generate plausible images. They don't know how plausible an existing image is.

In theory, a VAE can tell you this and look at a dog image and say '10% likelihood' and look at a weird pseudoimage and say 'wtf this is like, 0.00000001% likely', and you could use it to eliminate all your pseudoimages. In practice, they don't always work that well for outlier detection and seem to be fragile. So, the advantage of VAEs there may be less compelling than it sounds on a slide.

[–] racc15@alien.top 1 points 11 months ago (1 children)

In theory, can I use the discriminator of the GAN for this?

It will look at a weird picture and say: this looks fake?

[–] gwern@alien.top 1 points 11 months ago

Can I use the trained discriminator to detect anomalous images? I guess the discriminator should mark them as "fake" due to not being prevalent in the dataset?

Generally, no. What a Discriminator learns seems to be weirder than that. It seems to be closer to 'is this datapoint in the dataset' (the original dataset, not the distribution). You can look at the ranking of a Discriminator over a dataset and this can be useful for finding datapoints to look at more closely, but it's weird: https://gwern.net/face#discriminator-ranking

[–] mao1756@alien.top 1 points 11 months ago (1 children)

I learned about GANs recently too, so take this with a grain of salt.

The generator network learns a function that takes random noise as input and returns a generated sample. As a consequence, the network learns the distribution of the true samples, but that information is hard to retrieve because it is encoded in the weights of the neural network. So yes, it learns the distribution, but we cannot use it directly because it is in a hard-to-use format.
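That noise-to-sample structure can be shown in miniature. This is not a trained GAN; the "generator" below is a fixed, hand-picked nonlinear map, just to make the point that sampling is trivial while the density is nowhere exposed:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "generator" in miniature: a fixed nonlinear map from noise to samples.
# A real GAN learns these weights adversarially; these are hand-picked.
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])

def generate(n):
    z = rng.normal(size=(n, 2))  # sample from the known noise distribution
    return np.tanh(z @ W)        # push the noise through the network

samples = generate(10_000)       # sampling is trivial...
# ...but there is no generate_density(x): the model never exposes p(x).
assert samples.shape == (10_000, 2)
assert np.all(np.abs(samples) < 1)  # tanh bounds outputs to (-1, 1)
```

The implicit density of `samples` exists mathematically, but recovering it from `W` requires inverting the map and tracking Jacobians, which GAN architectures generally do not support.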

[–] racc15@alien.top 1 points 11 months ago

but that information is hard to retrieve

that makes sense!

[–] idkname999@alien.top 1 points 11 months ago

GANs are implicit probabilistic models. This means that they do not learn the distribution of data but rather a mapping from a known distribution (standard Gaussian) to the data distribution. As a result, there is no density estimate because it isn't modeling the density.

VAEs, on the other hand, can approximate the density indirectly, since they also do not learn the distribution of the data directly. Rather, a VAE learns an encoder that approximates p(z|x) and a decoder p(x|z). However, using simple probability rules, we can derive p(x) = ∫ p(x,z) dz. We factor p(x,z) as p(x|z)p(z), and approximate ∫ p(x|z)p(z) dz by Monte Carlo sampling to arrive at an estimate of p(x).
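That Monte Carlo estimate can be checked on a toy model where the integral has a closed form. Assume a "decoder" p(x|z) = N(z, 1) and prior p(z) = N(0, 1); then p(x) = ∫ p(x|z)p(z) dz is exactly N(0, 2), so we can compare the naive Monte Carlo average against the exact answer (real VAEs use importance sampling with the encoder q(z|x) instead of sampling from the prior, which is far more efficient):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mean=0.0, var=1.0):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Toy decoder: p(x|z) = N(z, 1), with prior p(z) = N(0, 1).
def mc_density(x, n_samples=200_000):
    z = rng.normal(size=n_samples)                 # z ~ p(z)
    return float(np.mean(normal_pdf(x, mean=z)))   # E_z[p(x|z)]

x = 0.7
estimate = mc_density(x)
exact = normal_pdf(x, var=2.0)   # closed form: p(x) = N(0, 2)
assert abs(estimate - exact) < 0.01
```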

[–] cwkx@alien.top 1 points 11 months ago (1 children)

If you look at the review https://arxiv.org/pdf/2103.04922.pdf, in Table 1 you'll see that in the rightmost column GANs don't have any "NLL" - this stands for the negative log likelihood, or if you like, the model's density fit over the distribution. Other classes of models, like VAEs, only give bounds on the density (approximate densities). Flows and autoregressive token predictors can give exact densities. The discriminator of a GAN just estimates whether something is real or fake; it does not estimate true probabilities (densities). Adversarial training can, however, be used for anomaly detection (and works quite well, e.g. GANomaly and its successors).
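The "exact densities" claim for flows comes from the change-of-variables formula. A one-layer illustrative "flow" makes it concrete: take the invertible map x = exp(z) with z ~ N(0, 1) (so x is log-normal), and the log-density is log p(x) = log N(log x; 0, 1) + log|dz/dx|, which we can cross-check against samples pushed through the map:

```python
import numpy as np

rng = np.random.default_rng(0)

# A one-layer "flow": the invertible map x = exp(z), z ~ N(0, 1).
# Change of variables gives an *exact* density:
#   log p(x) = log N(log x; 0, 1) - log x
def flow_log_density(x):
    z = np.log(x)                                      # invert the flow
    log_base = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)   # log N(z; 0, 1)
    log_det = -np.log(x)                               # log |dz/dx|
    return log_base + log_det

# Cross-check: empirical density of samples near x = 1 vs the formula.
samples = np.exp(rng.normal(size=1_000_000))
count = np.sum((samples >= 0.9) & (samples <= 1.1))
empirical = count / (len(samples) * 0.2)
assert abs(empirical - np.exp(flow_log_density(1.0))) < 0.02
```

Real normalizing flows stack many learned invertible layers and sum the log-Jacobian terms, but the density computation is exactly this pattern.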

[–] racc15@alien.top 1 points 11 months ago

thanks!

there is so much stuff to learn!