Machine Learning

this post was submitted on 26 Nov 2023

Diffusion Models have recently gained popularity in image generation, with widely used products such as Stable Diffusion employing this approach and yielding impressive results. GANs, however, are also recognized for their efficiency. In what scenarios should I choose GANs over Diffusion Models, and do GANs have any advantages over Diffusion Models for image generation?

Here are a few reasons I can think of:

  • Diffusion Models take more time and larger datasets to train.
  • Training a Diffusion Model requires substantial computational resources (many GPUs) compared to GANs.
  • The codebases of some popular Diffusion Models projects are not open source.

I don't know if these are correct. As for the mathematical aspect, I'm not an expert in that area.

top 10 comments
[–] WoanqDil@alien.top 1 points 9 months ago

If you have time constraints, GANs are a better option. To generate one sample, you only need a single forward pass.
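
For illustration, a minimal PyTorch sketch of what "one forward pass per sample" looks like; the toy generator and the 32x32 output size are made up, not any particular model:

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained GAN generator (hypothetical; real generators such
# as StyleGAN or GigaGAN are far larger, but the sampling pattern is the same).
latent_dim = 128
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
)

z = torch.randn(1, latent_dim)              # draw a latent code from N(0, I)
with torch.no_grad():
    img = generator(z).view(1, 3, 32, 32)   # one forward pass -> one image
```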

[–] I_will_delete_myself@alien.top 1 points 9 months ago (2 children)

GAN - Great if it works, but you'd better get used to praying, because it's difficult to train, much like reinforcement learning. After all the pain you get either a complete piece of garbage or an amazing miracle that's extremely efficient, with O(1) sampling cost. Look at GigaGAN: images are sharper and more detailed, and sometimes almost impossible to tell apart from real ones.

Diffusion - Slow, but it gets high-quality results and is super easy to train. It will probably improve in the future as we get better noise schedulers and other breakthroughs. Sampling is O(n), where n is the number of time steps. Images are smoother, but still good enough quality to fool most people.
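
To make the O(1) vs O(n) contrast concrete, here is a rough PyTorch sketch of a DDPM-style sampling loop; the tiny network, the 1000-step linear beta schedule, and the image size are placeholders, not any released model:

```python
import torch
import torch.nn as nn

# Placeholder noise-prediction network (a real diffusion model would be a U-Net).
class TinyEps(nn.Module):
    def __init__(self, dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x, t):
        t_feat = t.float().view(-1, 1) / 1000.0        # crude time-step conditioning
        return self.net(torch.cat([x, t_feat], dim=1))

model = TinyEps()
n_steps = 1000                                   # the "n" in O(n): one model call per step
betas = torch.linspace(1e-4, 0.02, n_steps)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

x = torch.randn(1, 3 * 32 * 32)                  # start from pure noise
with torch.no_grad():
    for t in reversed(range(n_steps)):           # n sequential forward passes
        eps = model(x, torch.tensor([t]))
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
```

Faster samplers and distillation (see the reply below) cut n down, but the cost still scales with the number of steps.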

[–] Obvious-Sense2454@alien.top 1 points 9 months ago

How do you determine the value of time steps "n" in O(n)?

[–] Username912773@alien.top 1 points 9 months ago (1 children)

Is there a paper on 2-time-step diffusion models?

[–] I_will_delete_myself@alien.top 1 points 9 months ago

For sure. OpenAI also mentioned using a similar process. They probably failed at the implementation and will likely copy it as soon as they can.

https://arxiv.org/abs/2210.03142
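
Roughly, that paper builds on progressive distillation: a student learns to reproduce in one step what the teacher does in two, and the halving is repeated until only a handful of steps remain. A conceptual sketch of that objective (the `student` and `teacher_step` callables are hypothetical placeholders; the real method works on guided models and uses DDIM-style updates and loss weighting omitted here):

```python
import torch

def distillation_loss(student, teacher_step, x_t, t):
    """One student step should match two consecutive teacher steps.

    `student(x, t)` and `teacher_step(x, t)` are hypothetical callables standing
    in for the distilled model and one deterministic teacher denoising step.
    """
    with torch.no_grad():
        x_mid = teacher_step(x_t, t)            # teacher: step t -> t-1
        x_target = teacher_step(x_mid, t - 1)   # teacher: step t-1 -> t-2
    x_pred = student(x_t, t)                    # student covers the same jump in one call
    return torch.mean((x_pred - x_target) ** 2)
```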

[–] bitemenow999@alien.top 1 points 9 months ago

GANs have faster inference.

[–] cwkx@alien.top 1 points 9 months ago

GANs can be great if you want to intentionally mode collapse, i.e. model a subset of the most likely parts of the data distribution. Why might you want to do this? See, for example, Taming Transformers and Unleashing Transformers. These hybrids exploit the generative modelling trilemma: they learn a compressed/quantised codebook of image patches using a GAN, with each patch collapsed into a small set of codes, and then model these information-rich codes with a Transformer to capture the full diversity and global structure of the larger image. If you zoom right in you may see small mode-collapsed artifacts, but they don't matter at a perceivable level (repetition of similar-looking hairs, dirt, etc.), a bit like JPEG artifacts.
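
A rough sketch of that two-stage pipeline, with toy modules and made-up sizes (the real implementations are the Taming/Unleashing Transformers codebases; this only shows the control flow):

```python
import torch
import torch.nn as nn

# Stage 1 (trained adversarially, VQGAN-style): encode an image into a small
# grid of discrete codebook indices. Toy placeholders with hypothetical sizes.
codebook_size, code_dim = 1024, 256
encoder = nn.Conv2d(3, code_dim, kernel_size=16, stride=16)   # 256x256 -> 16x16 grid
codebook = nn.Embedding(codebook_size, code_dim)

def quantize(z):
    # Nearest-codebook-entry lookup: each patch collapses to one discrete code.
    flat = z.permute(0, 2, 3, 1).reshape(-1, code_dim)
    dists = torch.cdist(flat, codebook.weight)
    return dists.argmin(dim=1).view(z.shape[0], -1)           # (B, 16*16) token indices

# Stage 2: a Transformer models the sequence of codes to capture global
# structure; the GAN decoder (omitted) maps codes back to pixels, which is
# where the small mode-collapsed details live.
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=code_dim, nhead=8, batch_first=True),
    num_layers=2,
)
to_logits = nn.Linear(code_dim, codebook_size)

img = torch.randn(1, 3, 256, 256)
tokens = quantize(encoder(img))                 # (1, 256) discrete codes
feats = transformer(codebook(tokens))           # in practice: causal/masked attention
logits = to_logits(feats)                       # next-code prediction over the codebook
```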

[–] huehue12132@alien.top 1 points 9 months ago

The reasons you listed are actually not true.

  1. Diffusion models can be trained just fine on the same datasets as GANs. They also do not take longer to train, since you generally sample just one "time step" (noise level) per training step (see the sketch after this list). What does take longer is inference: a GAN needs a single generator pass, while a diffusion model requires many.
  2. Diffusion models also do not inherently need more resources than GANs. It's basically the same: GANs have a generator and a discriminator, while diffusion models often follow an "encoder-decoder"-style U-Net architecture. You can train small diffusion models on MNIST or whatever, and you can train gigantic GANs (look up GigaGAN); this is not inherent to the type of model.
  3. That is, again, not an advantage of GANs per se. Also, you will have a hard time finding anything remotely comparable to Stable Diffusion that is based on GANs (unless I missed some big release).
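
A minimal sketch of what "one noise level per training step" means, using a standard DDPM-style noise-prediction loss; the tiny MLP and the linear beta schedule are placeholders for a real U-Net and schedule:

```python
import torch
import torch.nn as nn

# Placeholder noise-prediction model (a real diffusion model would be a U-Net).
model = nn.Sequential(nn.Linear(3 * 32 * 32 + 1, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))

n_steps = 1000
betas = torch.linspace(1e-4, 0.02, n_steps)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def training_step(x0):                      # x0: a batch of flattened clean images
    b = x0.shape[0]
    t = torch.randint(0, n_steps, (b,))     # ONE random noise level per example
    noise = torch.randn_like(x0)
    a = alpha_bars[t].unsqueeze(1)
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * noise     # forward (noising) process
    inp = torch.cat([x_t, t.float().unsqueeze(1) / n_steps], dim=1)
    return ((model(inp) - noise) ** 2).mean()                  # predict the added noise

loss = training_step(torch.randn(8, 3 * 32 * 32))
loss.backward()
```

So each training step costs one forward/backward pass, just like a GAN generator update; only sampling needs the full chain of steps.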
[–] Comfortable_Use_5033@alien.top 1 points 9 months ago

Unpaired image-to-image translation. Is there anything equivalent to that in diffusion models, or have I missed something?
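
For context, "unpaired" here usually means CycleGAN-style training, where the key ingredient is a cycle-consistency loss between two generators rather than paired supervision. A minimal sketch of just that term (toy generators and made-up shapes; adversarial losses and discriminators omitted):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two CycleGAN generators: G maps domain A -> B, F maps B -> A.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)

real_a = torch.randn(4, 3, 64, 64)        # unpaired batch from domain A
real_b = torch.randn(4, 3, 64, 64)        # unpaired batch from domain B

cycle_a = F(G(real_a))                    # A -> B -> A should reconstruct A
cycle_b = G(F(real_b))                    # B -> A -> B should reconstruct B
cycle_loss = (cycle_a - real_a).abs().mean() + (cycle_b - real_b).abs().mean()
```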