This repository implements diffusion models using the methods described in the DDPM and DDIM papers. We use a basic UNet2D architecture to generate 16x16 sprite images. The models are trained in pixel space with a linear noise schedule for the forward process. We compare the sampling techniques presented in the DDPM and DDIM papers and show that DDIM sampling is faster than DDPM sampling, at the cost of some image quality.
DeepLearning.AI course: How Diffusion Models Work
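
For orientation, below is a minimal sketch of the two sampling schemes being compared, assuming PyTorch. The schedule constants, the `model(x_t, t)` noise-prediction interface, and the function names are illustrative only and are not taken from this repository's code.

```python
# Minimal sketch: linear beta schedule, closed-form forward noising, and one
# reverse step each for DDPM (stochastic) and DDIM (deterministic, eta = 0).
import torch

T = 500                                        # number of diffusion timesteps (assumed)
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)      # cumulative product: alpha_bar_t

def forward_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form; t is a batch of timestep indices."""
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps, eps

@torch.no_grad()
def ddpm_step(model, x_t, t):
    """One DDPM reverse step at integer timestep t: predict noise, sample x_{t-1}."""
    eps_hat = model(x_t, t)
    a_t, a_bar_t = alphas[t], alphas_bar[t]
    mean = (x_t - betas[t] / (1 - a_bar_t).sqrt() * eps_hat) / a_t.sqrt()
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + betas[t].sqrt() * noise

@torch.no_grad()
def ddim_step(model, x_t, t, t_prev):
    """One deterministic DDIM step from t to t_prev; t_prev may skip many timesteps."""
    eps_hat = model(x_t, t)
    a_bar_t = alphas_bar[t]
    a_bar_prev = alphas_bar[t_prev] if t_prev >= 0 else torch.tensor(1.0)
    x0_hat = (x_t - (1 - a_bar_t).sqrt() * eps_hat) / a_bar_t.sqrt()
    return a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps_hat
```

DDPM steps through every timestep t = T-1, ..., 0, while DDIM can jump between a much shorter subsequence of timesteps, which is where the sampling-speed gain comes from.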