This project was developed during my internship at L'Oréal in France as a proof of concept to explore the implementation of Denoising Diffusion Models [1] in two different ways:
- Unconditional image generation – generating images without conditioning on any specific class.
- Conditional image generation – leveraging classifier-free guidance [2] to steer generation towards a desired class.
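Both settings rely on the same forward (noising) process from the DDPM paper [1]. As a rough sketch of that closed-form noising step (the function name `forward_diffuse` and the linear beta schedule values are illustrative, not taken from this project's code):

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (Ho et al., 2020):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Linear beta schedule, as in the DDPM paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1, 28, 28))  # stand-in for a normalized MNIST image
xt, eps = forward_diffuse(x0, t=500, alpha_bar=alpha_bar, rng=rng)
```

The model is then trained to predict `eps` from `xt` and `t`; the returned pair gives exactly that training target.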
The project uses the MNIST dataset [3], though the code can be easily extended to more complex datasets.
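At sampling time, classifier-free guidance [2] combines a conditional and an unconditional noise prediction into a single guided estimate. A minimal sketch of that combination step, assuming the model outputs epsilon-predictions (the function name `cfg_noise_estimate` and the guidance weight `w` are illustrative):

```python
import numpy as np

def cfg_noise_estimate(eps_cond, eps_uncond, w):
    """Classifier-free guidance (Ho & Salimans):
    eps = (1 + w) * eps_cond - w * eps_uncond.
    w = 0 recovers the purely conditional estimate; larger w
    pushes samples more strongly towards the conditioning class."""
    return (1.0 + w) * eps_cond - w * eps_uncond

# Toy example: random arrays standing in for a U-Net's two predictions.
rng = np.random.default_rng(0)
eps_c = rng.standard_normal((1, 28, 28))  # prediction with class label
eps_u = rng.standard_normal((1, 28, 28))  # prediction with label dropped
eps = cfg_noise_estimate(eps_c, eps_u, w=2.0)
```

In practice the unconditional prediction comes from the same network with the class label randomly dropped during training, so no separate classifier is needed.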
References:
[1] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. NeurIPS 2020. arXiv:2006.11239
[2] Jonathan Ho and Tim Salimans. Classifier-Free Diffusion Guidance. NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications. arXiv:2207.12598
[3] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 1998.
Conditional samples for classes 0-9 (the last column is unconditional):