If you have time constraints, GANs are a better option. To generate a sample, you only need one forward pass.
GAN - Great if it works, but you'd better get used to praying, because they're as difficult to train as reinforcement learning. After all the pain you get either a complete piece of garbage or an amazing miracle that's extremely efficient, with O(1) sampling. Look at GigaGAN: images are sharper and more detailed, and sometimes almost impossible to tell from real ones.
Diffusion - Slow, but gets high-quality results and is super easy to train. It will probably improve in the future when we get better noise schedulers and other breakthroughs. Sampling is O(n), where n is the number of time steps. Images are smoother, but more than good enough to fool most people.
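To make the O(1) vs O(n) contrast concrete, here is a minimal PyTorch sketch. The toy networks, the linear beta schedule, and helper names like `gan_sample` and `ddpm_sample` are illustrative assumptions, not any particular released model:

```python
import torch
import torch.nn as nn

# Toy stand-in networks so the sketch runs end to end; real models are far larger.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Linear(latent_dim, 3 * 64 * 64)

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class ToyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, t):  # t unused in this toy; a real U-Net embeds the time step
        return self.net(x)

@torch.no_grad()
def gan_sample(generator, latent_dim=128):
    """GAN: one forward pass per sample -> O(1) network evaluations."""
    z = torch.randn(1, latent_dim)
    return generator(z)

@torch.no_grad()
def ddpm_sample(denoiser, shape=(1, 3, 64, 64), num_steps=1000):
    """DDPM-style ancestral sampling: num_steps sequential passes -> O(n)."""
    betas = torch.linspace(1e-4, 0.02, num_steps)  # linear beta schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(num_steps)):
        eps = denoiser(x, t)  # predict the noise added at step t
        # DDPM posterior mean: remove the predicted noise, then rescale
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:  # inject fresh noise except at the final step
            x += torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

print(gan_sample(ToyGenerator()).shape)   # 1 network call
print(ddpm_sample(ToyDenoiser()).shape)   # 1000 sequential network calls
```

Note that `num_steps` here is a sampler hyperparameter, not something the model dictates: the original DDPM paper used 1000 steps, and much of the follow-up work (DDIM, distillation) is about shrinking that n.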
How do you determine the number of time steps n in O(n)?
Is there a paper on 2-time-step diffusion models?
For sure. OAI also mentioned using a similar process. They probably failed at implementing it and will copy it as soon as they can.
GANs have faster inference.
GANs can be great if you want to mode-collapse intentionally, e.g. model a subset of the most likely parts of the data distribution. Why might you want to do this? For example, see Taming Transformers and Unleashing Transformers. These hybrids exploit the generative modelling trilemma: they learn a compressed/quantised codebook of image patches using a GAN, each patch being collapsed into a small set of codes, then they model these information-rich codes using a Transformer to capture the full diversity and global structure of the larger image. If you zoom right in you may see small mode-collapsed artifacts (repetition of similar-looking hairs, dirt, etc.), but those don't matter at a perceivable level, a bit like JPEG artifacts. A rough sketch of the quantisation step is below.
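To illustrate the "patch collapsed into a code" step, here is a minimal VQ-style nearest-neighbour lookup. The shapes and the `quantize` helper are illustrative only; the actual papers train a VQGAN with a straight-through gradient estimator plus perceptual and adversarial losses:

```python
import torch

def quantize(z, codebook):
    """VQ-style lookup: collapse each patch embedding onto its nearest code.

    z:        (num_patches, dim) continuous encoder outputs
    codebook: (codebook_size, dim) learned code vectors
    Returns the quantized vectors and the discrete indices that a
    Transformer would later model autoregressively.
    """
    dists = torch.cdist(z, codebook)   # (num_patches, codebook_size) distances
    indices = dists.argmin(dim=1)      # each patch collapses to one code
    return codebook[indices], indices

# Toy example: 16 patch embeddings against a 512-entry codebook of dim 64.
z = torch.randn(16, 64)
codebook = torch.randn(512, 64)
z_q, idx = quantize(z, codebook)
print(idx.shape)  # torch.Size([16]): the token sequence the Transformer models
```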
The reasons you listed are actually not true.
- Diffusion models can be trained just fine on the same datasets as GANs. They also do not take longer to train, as you generally just sample one "time step" (noise level) per training step (see the sketch after this list). What does take longer is inference, as GANs need a single generator execution while diffusion models require multiple.
- Diffusion models also do not inherently need more resources than GANs. It's basically the same: GANs have a generator and a discriminator, while diffusion models often follow an "encoder-decoder"-style U-Net architecture. You can train small diffusion models on MNIST or whatever, and you can train gigantic GANs (look up GigaGAN); this is not inherent to the type of model.
- That is, again, not an advantage of GANs per se. Also, you will have a hard time finding anything remotely comparable to Stable Diffusion that is based on GANs (unless I missed some big release).
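To back the first point, here is a minimal sketch of a DDPM-style training step: each image in the batch gets a single random noise level, so the per-step cost is one forward/backward pass regardless of how many steps sampling later uses. The toy conv, the linear schedule, and the `diffusion_training_step` helper are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Conv2d(3, 3, 3, padding=1)  # toy stand-in for a time-conditioned U-Net

def diffusion_training_step(x0, num_steps=1000):
    """One DDPM training step: a single random noise level per image,
    so the cost is one forward/backward pass, independent of num_steps."""
    betas = torch.linspace(1e-4, 0.02, num_steps)      # linear beta schedule
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)

    t = torch.randint(0, num_steps, (x0.shape[0],))    # one random t per image
    eps = torch.randn_like(x0)                         # the noise to predict
    a = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps  # noise x0 to level t

    eps_hat = denoiser(x_t)            # a real U-Net would also take t as input
    return F.mse_loss(eps_hat, eps)    # simple epsilon-prediction objective

loss = diffusion_training_step(torch.randn(8, 3, 32, 32))
loss.backward()  # same per-step cost profile as a GAN generator update
```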
Unpaired image-to-image translation. Is there anything equivalent to that for diffusion models, or have I missed something?