History, Motivation, and Evolution of Deep Learning
This is the first lecture in the NYU Deep Learning course, presented by Yann LeCun, who won the Turing Award for his contributions to deep learning research (along with Geoffrey Hinton and Yoshua Bengio). After some skippable 'how does our course work' info at the beginning, Yann dives into the history of deep learning, a quick overview of interesting topics covered in later lectures, the evolution and applications of Convolutional Neural Networks (CNNs), how deep learning is used at Facebook, why deep learning is hierarchical in nature, and learning features and representations.
The course website is here. It includes access to all the course slides, Jupyter notebooks, and YouTube videos.
The last bit of the lecture, on the manifold hypothesis, is a particularly great section. I talk about this repeatedly here on the HTC site (one example here). I don't really think it's a hypothesis at this point; it's pretty well established. Natural data lies on low-dimensional, non-linear manifolds embedded in high-dimensional space. That's why neural networks can do what they do.
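To make the idea concrete, here's a minimal toy sketch (my own, not from the lecture): data that sits in 3-D ambient space but is intrinsically 1-D, generated from a single latent parameter. A linear method like PCA still needs all three ambient directions to explain the variance, which hints at why non-linear models are needed to exploit the manifold structure:

```python
import numpy as np

# Toy illustration of the manifold idea: points in 3-D ambient space
# that are intrinsically 1-D. Each point comes from a single latent
# parameter t, embedded non-linearly as a helix in R^3.
rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, size=1000)              # 1-D latent coordinate
X = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)  # helix in R^3

# PCA (via SVD) is linear: it spreads the variance over all three
# ambient directions, even though one non-linear coordinate (t)
# describes the data exactly. The manifold is low-dimensional,
# but it is not a linear subspace.
Xc = X - X.mean(axis=0)
singular_values = np.linalg.svd(Xc, compute_uv=False)
explained = singular_values**2 / singular_values.sum()**0 / np.sum(singular_values**2) * singular_values**2 / singular_values.sum()
explained = singular_values**2 / np.sum(singular_values**2)
print("Variance explained by each linear direction:", explained.round(3))
```

No linear projection recovers the one true degree of freedom here; a neural network, being a stack of non-linear transformations, can in principle learn a coordinate like t directly from the data.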