GANs are implicit probabilistic models: they do not learn the distribution of the data, but rather a mapping from a known distribution (typically a standard Gaussian) to the data distribution. As a result, there is no density estimate, because the model never represents the density at all.
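A minimal sketch of this point, assuming a toy stand-in for a trained generator network (the function below is hypothetical, not a real GAN): sampling is just pushing Gaussian noise through the mapping, but nothing in the model lets you evaluate p(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "generator": any deterministic map from latent noise to
# data space stands in here for a trained GAN generator network.
W = np.array([[1.0, -0.5], [0.3, 2.0]])

def generator(z):
    return np.tanh(z @ W)

# Sampling is easy: draw z from a standard Gaussian, apply the mapping.
z = rng.standard_normal((5, 2))
samples = generator(z)
print(samples.shape)  # (5, 2)

# But there is no generator.log_prob(x): the model only defines how to
# produce samples, not a density over data space.
```

The asymmetry is the point: the mapping fully determines the sample distribution, yet evaluating that distribution's density at a given x is intractable because the model is implicit.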
VAEs, on the other hand, can approximate the density indirectly, even though they also do not learn the data distribution directly. Instead, a VAE learns an encoder q(z|x), which approximates the posterior p(z|x), and a decoder p(x|z). Using basic probability rules, we can write p(x) = integral of p(x,z) over z, factor p(x,z) as p(x|z)p(z), and then approximate the integral of p(x|z)p(z) dz by Monte Carlo sampling to arrive at an estimate of p(x).
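The Monte Carlo estimate above can be sketched as follows, using a hypothetical linear-Gaussian decoder in place of a trained network (the weights and noise scale here are assumptions for illustration): draw z_i from the prior p(z) = N(0, I) and average p(x|z_i).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian decoder: p(x|z) = N(x; Wz + b, sigma^2 I).
# In a real VAE, Wz + b would be a trained neural network.
latent_dim, data_dim, sigma = 2, 3, 0.5
W = rng.normal(size=(data_dim, latent_dim))
b = rng.normal(size=data_dim)

def log_p_x_given_z(x, z):
    """Log-density of the isotropic Gaussian decoder at x given z."""
    mu = W @ z + b
    return (-0.5 * np.sum((x - mu) ** 2) / sigma**2
            - 0.5 * data_dim * np.log(2 * np.pi * sigma**2))

def estimate_p_x(x, n_samples=10_000):
    """Monte Carlo: p(x) ~= (1/N) sum_i p(x|z_i), with z_i ~ N(0, I)."""
    log_ps = np.array([log_p_x_given_z(x, rng.standard_normal(latent_dim))
                       for _ in range(n_samples)])
    # Average in log space via log-sum-exp for numerical stability.
    m = log_ps.max()
    return float(np.exp(m) * np.mean(np.exp(log_ps - m)))

x = rng.normal(size=data_dim)
p_hat = estimate_p_x(x)
print(p_hat)  # a positive density estimate
```

Note that sampling z from the prior gives an unbiased but potentially high-variance estimator; this sketch keeps the naive version because it matches the derivation in the text.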