Progressive Distillation for Fast Sampling of Diffusion Models


This is a paper overview presentation of the 'Progressive Distillation for Fast Sampling of Diffusion Models' paper, as well as the follow-on 'On Distillation of Guided Diffusion Models' paper.

Progressive distillation is mentioned briefly in the Imagen Video paper. The two papers discussed in this presentation explain in detail what progressive distillation actually involves.


1: The student-teacher paradigm for cutting down the number of sampling steps in the diffusion schedule is quite interesting, and is probably a general idea that could be applied to other iterative problems.
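The core move can be sketched in a few lines. This is a toy illustration, not the papers' implementation: `teacher_step` stands in for a real deterministic (DDIM-style) denoising step, and the linear dynamics are invented purely so the arithmetic is checkable. The point is only the structure: two consecutive teacher steps define the regression target for a single, twice-as-large student step.

```python
def teacher_step(z, dt):
    # Toy deterministic sampler step; a real teacher would evaluate a
    # denoising network here. The linear update is purely illustrative.
    return z * (1.0 - 0.1 * dt)

def student_target(z, dt):
    # Run the teacher for two consecutive steps of size dt ...
    z_mid = teacher_step(z, dt)
    z_end = teacher_step(z_mid, dt)
    # ... and use the result as the target for ONE student step of size 2*dt.
    return z_end

# The student is trained so that student_step(z, 2*dt) ~= student_target(z, dt).
# Repeating this (student becomes the next teacher) halves the step count
# at each distillation round.
target = student_target(1.0, dt=0.5)
```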

2: Note the explanation of why the number of steps is reduced progressively (halved at each round) rather than all at once: distilling to a single step in one shot forces the student to average over multiple potentially valid solutions, which produces blurry output.
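The averaging argument can be made concrete with a toy example (mine, not the papers'): if one noisy input has two equally valid sharp reconstructions, the MSE-optimal single prediction is their mean, which is neither of them.

```python
# Two equally valid "sharp" denoised outputs for the same noisy input.
solutions = [[1.0, 0.0], [0.0, 1.0]]

# A regression-trained one-shot student minimizing squared error predicts
# the mean of the valid solutions per pixel ...
mse_optimal = [sum(px) / len(solutions) for px in zip(*solutions)]

# ... giving [0.5, 0.5]: a washed-out average rather than either sharp
# answer. Halving the step count gradually keeps each regression target
# close to unimodal, avoiding this collapse.
```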

3: Note how the second paper reduces the computation for classifier-free guided diffusion: rather than running the model twice per step (once with the prompt embedding and once with a blank embedding) and combining the outputs, the student learns to reproduce the combined guided output in a single pass, taking the guidance strength as an additional input parameter.
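A minimal sketch of what the student has to match, assuming toy stand-in "models" (the functions below are illustrative, not real denoising networks): classifier-free guidance needs two evaluations per step, and the distilled student regresses onto their weighted combination while receiving the guidance weight w as input.

```python
def eps_cond(z, prompt_emb):
    # Toy conditional prediction (stand-in for the network given the prompt).
    return z - prompt_emb

def eps_uncond(z):
    # Toy unconditional prediction (stand-in for the blank-embedding pass).
    return z

def guided_teacher(z, prompt_emb, w):
    # Classifier-free guidance: two forward passes, combined with weight w.
    e_c = eps_cond(z, prompt_emb)
    e_u = eps_uncond(z)
    return e_u + w * (e_c - e_u)

# A student eps_student(z, prompt_emb, w) would be trained to match this
# output in ONE forward pass, roughly halving the per-step cost.
target = guided_teacher(1.0, prompt_emb=0.2, w=7.5)
```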

