
# Image Generation Using Diffusion Models

This repository implements diffusion models for unconditional and conditional image generation, following the methods described in the DDPM and DDIM papers. We use a basic UNet2D architecture to generate 16x16 sprite images, training the models directly in pixel space with a linear noise schedule for the forward process. We compare the two sampling techniques presented in the DDPM and DDIM papers and show that DDIM can improve sampling speed over DDPM, at the cost of some image quality.
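As a rough illustration of the forward process, the sketch below builds a linear noise schedule and draws a noised image x_t from q(x_t | x_0) in closed form. The hyperparameters here (T = 500 and the β endpoints) are hypothetical and may differ from the repository's actual settings.

```python
import torch

# Hypothetical hyperparameters; the repository's actual values may differ.
T = 500                      # number of diffusion timesteps
beta_1, beta_T = 1e-4, 0.02  # endpoints of the linear schedule

# Linear noise schedule: beta_t increases linearly from beta_1 to beta_T.
betas = torch.linspace(beta_1, beta_T, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # abar_t = prod_{s <= t} alpha_s

def forward_diffuse(x0: torch.Tensor, t: torch.Tensor):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    abar = alpha_bars[t].view(-1, 1, 1, 1)  # broadcast over (B, C, H, W)
    xt = abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise
    return xt, noise  # the UNet is trained to predict `noise` from (xt, t)
```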

## Results

Sample generation using DDPM
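For reference, here is a minimal sketch of DDPM's ancestral sampling loop. It assumes a noise-prediction network `model(x, t)` (a stand-in for the UNet2D, not the repository's exact interface) and reuses `T`, `betas`, `alphas`, and `alpha_bars` from the forward-process sketch above.

```python
@torch.no_grad()
def ddpm_sample(model, shape=(16, 3, 16, 16)):
    """Ancestral DDPM sampling: one denoising step per training timestep (T total)."""
    x = torch.randn(shape)  # start from pure Gaussian noise x_T
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch)  # predicted noise eps_theta(x_t, t)
        # Posterior mean of p(x_{t-1} | x_t) for an epsilon-predicting model.
        mean = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise  # sigma_t^2 = beta_t variance choice
    return x
```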

Sample generation using DDIM
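Likewise, a minimal sketch of deterministic DDIM sampling (the η = 0 case), reusing the same schedule tensors. The speed-up comes from stepping along a short sub-sequence of timesteps; the 25 steps here are a hypothetical choice, not necessarily the setting used for the results above.

```python
@torch.no_grad()
def ddim_sample(model, shape=(16, 3, 16, 16), n_steps=25):
    """Deterministic DDIM sampling over a sub-sequence of n_steps timesteps."""
    taus = torch.linspace(T - 1, 0, n_steps).long()  # e.g. 25 steps instead of T
    x = torch.randn(shape)
    for i, t in enumerate(taus):
        abar_t = alpha_bars[t]
        abar_prev = alpha_bars[taus[i + 1]] if i + 1 < n_steps else torch.tensor(1.0)
        eps = model(x, torch.full((shape[0],), int(t), dtype=torch.long))
        # Predict the clean image, then jump it back to noise level abar_prev.
        x0_pred = (x - (1.0 - abar_t).sqrt() * eps) / abar_t.sqrt()
        x = abar_prev.sqrt() * x0_pred + (1.0 - abar_prev).sqrt() * eps
    return x
```

Each sampler calls the network once per step, so running DDIM with 25 steps instead of all T = 500 cuts the cost of generation by roughly 20x, which is the speed/quality trade-off compared above.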

## Acknowledgements

DeepLearning.AI short course: How Diffusion Models Work
