Summary of Diffusion Models
Structure
- A fixed forward noise-adding process
- A U-Net used to learn how to denoise
Noise-Adding Process
"Fixed" is the key to understanding the noise-adding process!
Noise-adding is a fixed process with no learnable parameters. Given an image and a fixed noise schedule, adding noise step by step 100 times produces the same distribution as jumping there in a single closed-form step (this is the "nice property" derived below).
How to Obtain Noise?
First, introduce the concept of a distribution. We assume the distribution of real images is $q(x)$, and $x_0 \sim q(x)$ is a real image sampled from it.
You can think of the distribution $q(x)$ as a class with 50 students, and the sampled $x_0$ is one student.
The noise-adding process is done step by step. Each step is denoted as $t$, and a total of $T$ noise-adding steps are required.
We obtain noise through a Gaussian distribution. Here, it is important to know that a Gaussian distribution is determined by two parameters: mean and variance, denoted as $\mu$ and $\sigma^2$. The result of adding noise at step $t$ to an image $x_{t-1}$ comes from a Gaussian distribution with mean $\sqrt{1-\beta_t}\,x_{t-1}$ and variance $\beta_t$, i.e., $q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\big)$. This can be equivalently expressed as: each time, sample an $\epsilon$ from a standard normal distribution, i.e., $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, and scale and shift it.
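To make this equivalence concrete, here is a minimal sketch in PyTorch. The mean and variance values are arbitrary examples for illustration, not part of the DDPM schedule:

```python
import torch

# Reparameterization: a sample from N(mu, sigma^2) equals mu + sigma * eps
# with eps ~ N(0, 1). The mu and sigma here are arbitrary illustrative
# values, not part of the DDPM schedule.
mu, sigma = 2.0, 0.5
eps = torch.randn(100_000)      # eps ~ N(0, 1)
samples = mu + sigma * eps      # samples ~ N(mu, sigma^2)

print(samples.mean().item())    # ~2.0
print(samples.std().item())     # ~0.5
```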
The Noisy Image at Step $t$, $x_t$
The image at noise-adding step $t$ is expressed as $x_t = \sqrt{1-\beta_t}\,x_{t-1} + \sqrt{\beta_t}\,\epsilon$, where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$.
Note that $\beta_t$ here is not a constant; it changes with $t$, following a schedule with $0 < \beta_1 < \beta_2 < \dots < \beta_T < 1$. The schedule can be linear, quadratic, cosine, etc.
The final noisy image $x_T$ should be pure noise.
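As a concrete illustration, here is a minimal sketch of the step-by-step forward process in PyTorch. It uses a linear schedule from $10^{-4}$ to $0.02$ with $T = 1000$ (the values used in the DDPM paper); random data stands in for a real image $x_0$:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)    # linear beta schedule (DDPM's values)

x = torch.randn(3, 64, 64)               # random data standing in for a real x_0
for t in range(T):
    eps = torch.randn_like(x)            # eps ~ N(0, I)
    x = torch.sqrt(1 - betas[t]) * x + torch.sqrt(betas[t]) * eps

# After T steps, x_T is (approximately) pure Gaussian noise.
print(x.mean().item(), x.std().item())   # ~0.0, ~1.0
```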
Denoising Process
The denoising process involves step-by-step transforming the noisy result $x_T$ back to the original image $x_0$.
What is the purpose of learning this denoising process? Once it is learned, we can sample fresh noise and denoise it step by step to generate entirely new images, giving the network the ability to generate on its own.
We denote this denoising process as $q(x_{t-1} \mid x_t)$, but this distribution cannot be directly computed. Therefore, we use a neural network to approximate this process, i.e., $p_\theta(x_{t-1} \mid x_t)$, where $\theta$ represents the parameters of the neural network.
How to Fit the Denoising Process?
Here, we assume that the denoising process also follows a Gaussian distribution. That is, the neural network needs to learn the two parameters mentioned earlier: the mean $\mu_\theta(x_t, t)$ and the variance $\Sigma_\theta(x_t, t)$. (DDPM fixes the variance to $\sigma_t^2 \mathbf{I}$ with $\sigma_t^2 = \beta_t$ and only learns the mean, which has been improved in subsequent papers.)
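Under this assumption, a single denoising step can be sketched as below. This follows the standard DDPM sampling form with the variance fixed to $\sigma_t^2 = \beta_t$; `model` stands for a hypothetical noise-prediction network $\epsilon_\theta(x_t, t)$ (where the noise-prediction view comes from the "nice property" derived in the next section):

```python
import torch

def reverse_step(model, x_t, t, betas, alphas_bar):
    """One denoising step x_t -> x_{t-1}, with variance fixed to beta_t."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    eps_pred = model(x_t, t)   # hypothetical noise-prediction network eps_theta(x_t, t)

    # Posterior mean of p_theta(x_{t-1} | x_t) when predicting noise:
    # mu = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
    mean = (x_t - beta_t / torch.sqrt(1 - alphas_bar[t]) * eps_pred) / torch.sqrt(alpha_t)

    if t == 0:
        return mean                       # no noise is added at the last step
    z = torch.randn_like(x_t)             # fresh Gaussian noise
    return mean + torch.sqrt(beta_t) * z  # sigma_t^2 = beta_t (fixed, not learned)
```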
Defining the Objective Function
To drive the neural network to learn the mean during the denoising process, the authors treat the noise-adding and denoising processes together as a VAE (variational autoencoder). If you are not familiar with VAEs, you can skip this part; just know how the final loss function is calculated. (This involves KL divergence, the ELBO (evidence lower bound), and other probability-theory concepts; if curious, you can explore each one.)
I haven’t fully understood this part yet.
After a series of derivations, a "nice property" is obtained:

$$q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\big), \quad \text{i.e.,} \quad x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,$$

where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. This property means:
- Noise $\epsilon$ can be sampled from a Gaussian distribution and, through appropriate scaling, $x_0$ can be directly transformed into $x_t$, where $\bar{\alpha}_t$ can be calculated from the known $\beta_t$ (see the sketch after this list).
- The network predicting the mean $\mu_\theta$ can be converted into a network $\epsilon_\theta$ predicting the noise.
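Here is a minimal sketch checking the first point numerically: noising step by step up to step $t$ and jumping there directly with the closed form give the same distribution. A batch of identical scalar "pixels" stands in for $x_0$ so the statistics are easy to read off:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # alpha_bar_t = prod_s (1 - beta_s)

t = 500
x0 = torch.full((100_000,), 2.0)   # a batch of identical "pixels" standing in for x_0

# Route 1: step-by-step noising up to step t
x = x0.clone()
for s in range(t + 1):
    x = torch.sqrt(1 - betas[s]) * x + torch.sqrt(betas[s]) * torch.randn_like(x)

# Route 2: direct jump using the nice property
x_jump = torch.sqrt(alphas_bar[t]) * x0 + torch.sqrt(1 - alphas_bar[t]) * torch.randn_like(x0)

print(x.mean().item(), x.std().item())            # statistics of the stepwise route...
print(x_jump.mean().item(), x_jump.std().item())  # ...match the direct jump
```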
The final objective function is defined as:

$$L_{\text{simple}} = \mathbb{E}_{t,\, x_0,\, \epsilon}\Big[\big\|\epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\big)\big\|^2\Big]$$
Summary
- Randomly sample an image $x_0$ from the real-world data distribution $q(x)$.
- Uniformly sample a noise level $t$ from $1$ to $T$.
- Sample noise $\epsilon \sim \mathcal{N}(0, \mathbf{I})$ from a Gaussian distribution and corrupt the sampled image to obtain $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$.
- The neural network predicts the noise $\epsilon_\theta(x_t, t)$ based on the corrupted image $x_t$.
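Putting the four steps together, a minimal, runnable training-loop sketch in PyTorch might look like the following. The tiny stand-in network, the random stand-in images, and the optimizer settings are illustrative assumptions, not the original DDPM code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Tiny stand-in for the U-Net eps_theta(x_t, t). A real model would also
# condition on t (e.g. via sinusoidal time embeddings); this one ignores it.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, t):
        return self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

for step in range(100):
    x0 = torch.randn(8, 3, 32, 32)                        # stand-in for real images
    t = torch.randint(0, T, (x0.shape[0],))               # uniform noise level per image
    eps = torch.randn_like(x0)                            # eps ~ N(0, I)
    ab = alphas_bar[t].view(-1, 1, 1, 1)                  # broadcast over pixels
    x_t = torch.sqrt(ab) * x0 + torch.sqrt(1 - ab) * eps  # corrupt x_0 in closed form
    loss = F.mse_loss(model(x_t, t), eps)                 # L_simple: predict the noise

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```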