Deep Generative Models and Inverse Problems

This is a talk by Alexandros Dimakis, given in April 2020 at the Institute for Advanced Study seminar on theoretical machine learning.

He organizes the world of modeling high-dimensional distributions into 3 categories:

1:  Sparsity (so think wavelets).

2:  Conditional Independence (so think Markov chains, factor graphs, Bayes nets used in things like channel coding).

3:  Deep Generative Models (so think passing random noise through a learnable differentiable function, as in GANs, VAEs, etc.); a sketch of this follows below.
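Category 3 is the one the talk builds on, and the "inverse problems" half of the title refers to using such a generator as a prior for signal recovery. Below is a minimal PyTorch sketch of both ideas; everything in it is illustrative (the architecture, dimensions, and step count are made up, and the generator's weights are random rather than trained, just to keep it self-contained). Sampling is literally pushing Gaussian noise through a differentiable network G, and recovery minimizes ||A G(z) - y||^2 over the latent z, in the spirit of compressed sensing with generative models (Bora et al., 2017).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy generator: maps a low-dimensional latent z to a signal x.
# In practice G would be a trained GAN/VAE decoder; random weights here
# just keep the sketch self-contained.
latent_dim, signal_dim = 8, 64
G = nn.Sequential(
    nn.Linear(latent_dim, 32),
    nn.ReLU(),
    nn.Linear(32, signal_dim),
)
for p in G.parameters():          # the generator stays fixed; only z is optimized
    p.requires_grad_(False)

# Sampling = passing random noise through the differentiable function G.
x_sample = G(torch.randn(1, latent_dim))

# Inverse problem: observe y = A x* + noise with far fewer measurements than
# signal dimensions, then search the range of G for a signal that explains y.
m = 20                                        # m < signal_dim measurements
A = torch.randn(m, signal_dim) / m ** 0.5     # known measurement matrix
x_true = G(torch.randn(1, latent_dim))        # ground truth lies in range(G)
y = x_true @ A.T + 0.01 * torch.randn(1, m)   # noisy linear measurements

# Recover by gradient descent on ||A G(z) - y||^2 over the latent z.
z_hat = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z_hat], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((G(z_hat) @ A.T - y) ** 2).sum()
    loss.backward()
    opt.step()

print("reconstruction error:", (G(z_hat) - x_true).norm().item())
```

The point of the generative prior is that the search is confined to the low-dimensional range of G, which is why recovery can succeed with many fewer measurements than the signal dimension would otherwise require.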



