this post was submitted on 26 Nov 2023

Machine Learning

Diffusion Models have recently gained popularity in image generation, with widely used products such as Stable Diffusion employing this approach and yielding impressive results. GANs are also recognized for their efficiency, so in what scenarios should I choose GANs over Diffusion Models, and do GANs have any advantages over Diffusion Models for image generation?

Here are a few reasons I can think of:

  • Diffusion Models take longer to train and need larger datasets.
  • Training a Diffusion Model requires substantially more computational resources (many GPUs) than training a GAN.
  • The codebases of some popular Diffusion Model projects are not open source.

I don't know if these are correct. As for the mathematical aspect, I'm not an expert in that area.

cwkx@alien.top 1 points 9 months ago

GANs can be great if you want to intentionally mode collapse, i.e. model a subset of the most likely parts of the data distribution. Why might you want to do this? For example, see Taming Transformers and Unleashing Transformers. These hybrids exploit the trade-offs of the generative modelling trilemma: they use a GAN to learn a compressed/quantised codebook of image patches, with each patch collapsed into a small set of codes, and then model these information-rich codes with a Transformer to capture the full diversity and global structure of the larger image. If you zoom right in you may see small mode-collapse artifacts (repetition of similar-looking hairs, dirt, etc.), but they don't matter at a perceptible level... a bit like JPEG artifacts.
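
Here is a minimal PyTorch sketch of that two-stage pattern. This is not the actual Taming/Unleashing Transformers code; all module names, layer sizes and losses are illustrative, and the codebook/commitment and perceptual losses plus the discriminator's own training step are omitted. A VQ autoencoder with an adversarial patch discriminator compresses images into a grid of discrete codebook indices, and a small causal Transformer then models those indices autoregressively.

```python
# Sketch only: a VQ-GAN-style stage 1 (codebook autoencoder + patch discriminator)
# followed by a stage-2 Transformer prior over the discrete codes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VQAutoencoder(nn.Module):
    """Stage 1: compress 3x64x64 images into an 8x8 grid of discrete codes."""

    def __init__(self, codebook_size=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, code_dim, 4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def quantise(self, z):
        # Snap each spatial feature vector to its nearest codebook entry.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)
        indices = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        z_q = z + (z_q - z).detach()  # straight-through gradient estimator
        return z_q, indices.view(b, h * w)

    def forward(self, x):
        z_q, indices = self.quantise(self.encoder(x))
        return self.decoder(z_q), indices


class PatchDiscriminator(nn.Module):
    """The 'GAN' part of stage 1: judges local patches of the reconstruction."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)


class CodePrior(nn.Module):
    """Stage 2: autoregressive Transformer over the discrete code indices."""

    def __init__(self, codebook_size=512, seq_len=64, d_model=256):
        super().__init__()
        self.tok = nn.Embedding(codebook_size, d_model)
        self.pos = nn.Embedding(seq_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, indices):
        b, t = indices.shape
        x = self.tok(indices) + self.pos(torch.arange(t, device=indices.device))
        causal = torch.triu(
            torch.full((t, t), float("-inf"), device=indices.device), diagonal=1)
        return self.head(self.transformer(x, mask=causal))


# Toy usage on random data, just to show how the pieces connect.
images = torch.randn(2, 3, 64, 64)
vq, disc, prior = VQAutoencoder(), PatchDiscriminator(), CodePrior()

recon, codes = vq(images)                        # codes: (2, 64) integer indices
rec_loss = F.mse_loss(recon, images)
d_fake = disc(recon)                             # generator wants these judged "real"
adv_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
prior_logits = prior(codes[:, :-1])              # predict code t from codes < t
prior_loss = F.cross_entropy(prior_logits.reshape(-1, 512), codes[:, 1:].reshape(-1))
print(rec_loss.item(), adv_loss.item(), prior_loss.item())
```

The division of labour is the point: the GAN stage accepts some mode collapse at the patch level in exchange for sharp local texture, while the Transformer prior over the codes supplies the global structure and diversity.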