Generative Models Deep Dive - Part 1
We're going to be taking a deep dive into deep learning neural net generative models. It will be broken up into several parts.
Parts 1 and 2 will be based on lectures presented by Justin Johnson. Parts 3 and 4 will be based on lectures by Yann LeCun from the 2020 NYU Deep Learning course. Yann has a fascinating unified theory of all things neural net generative model (and there are so many flavors of them), which is pretty mind expanding once you get your head around the concepts. Parts 5 and 6 will focus on GANs, and hopefully tie back to Yann's unified generative model theory.
Part 1 begins with a lecture from Justin Johnson, who is now a professor at the University of Michigan. It was taken from the online lectures associated with Justin's class called 'Deep Learning for Computer Vision'.
We're using the fall 2019 lecture since the 2020 lectures unfortunately won't post online until next year (university policy?). I say unfortunately because research in this subject area is exploding, so who knows what additional material the 2020 lecture contains (Justin knows).
Justin covers an introduction to generative models, autoregressive generative models, auto-encoders, and variational auto-encoders. We will continue with the second generative model lecture in this series in our Part 2 post.
Discriminative vs Generative
Discriminative Model - classifier (outputs a class label)
- learns a conditional probability distribution p(y|x)
- not good at flagging junk input data (the class probabilities must still sum to 1, even for nonsense inputs)
Generative Model - outputs an image
- learns a probability distribution p(x) over the data itself
Conditional Generative Model
- learns p(x|y), a distribution over data conditioned on a label y
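To make the three distributions concrete, here is a minimal toy sketch (not from the lecture) in NumPy: a softmax classifier standing in for p(y|x), a single Gaussian density standing in for a learned p(x), and per-class Gaussians standing in for p(x|y). All parameters are random placeholders for learned weights, and the 4-dimensional vectors are stand-ins for images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 4-dimensional feature vector; assume 3 classes.
x = rng.normal(size=4)

# Discriminative model: p(y|x), a linear classifier with a softmax head.
# W and b are random stand-ins for learned parameters.
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)

def p_y_given_x(x):
    logits = W @ x + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()                 # a distribution over the 3 classes

# Generative model: p(x), here a standard Gaussian density over inputs
# standing in for a learned likelihood model.
def p_x(x):
    return np.exp(-0.5 * np.sum(x ** 2)) / (2 * np.pi) ** (x.size / 2)

# Conditional generative model: p(x|y), one density per class label y.
class_means = rng.normal(size=(3, 4))

def p_x_given_y(x, y):
    d = x - class_means[y]
    return np.exp(-0.5 * np.sum(d ** 2)) / (2 * np.pi) ** (x.size / 2)

print(p_y_given_x(x).sum())   # softmax output always sums to 1
print(p_x(x), p_x_given_y(x, 0))
```

Note how this illustrates the "junk input" bullet above: feed `p_y_given_x` pure noise and it still returns a confident-looking distribution summing to 1, whereas `p_x` can assign such an input a very low density and thereby flag it as unlikely.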
Autoregressive Models