How Deep Neural Nets Learn - Feature Visualization
This is a great overview presentation on Feature Visualization in deep learning networks. Let's take a look, and then discuss what we learned.
So now when someone tells you that we have no idea what neural nets actually learn, that they are mysterious, unknowable black-box systems, you can correct them. Visualizing the feature maps of a neural network can tell us a great deal about what the network has learned.
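The core trick behind most of these visualizations is activation maximization: start from a random input and follow the gradient uphill until the input strongly excites a chosen unit. Here's a minimal sketch of that idea using NumPy and a tiny hypothetical one-layer network with random stand-in weights; real feature visualization applies the same loop to a channel of a trained convnet (usually with extra regularization, as the Distill post explains).

```python
import numpy as np

# Hypothetical stand-in weights for a "trained" layer: 8 units, 16-dim input.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))

def activation(x, unit=0):
    # Pre-activation of one unit: a_unit = W[unit] . x
    return W[unit] @ x

# Gradient ascent on the input to maximize unit 0's activation.
x = rng.normal(size=16)
lr = 0.1
for _ in range(100):
    # For a linear unit, the gradient of W[0] @ x w.r.t. x is just W[0].
    x += lr * W[0]
    x /= np.linalg.norm(x)  # constrain the input to the unit sphere

# x now points toward W[0]: the input pattern this unit "looks for".
```

For a linear unit the optimum is simply the weight vector itself, which is why early-layer visualizations look like the edge and color filters the network has learned; deeper layers need backprop through the whole network, but the loop is the same.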
Visualizing the feature space of a neural network is also crucial to understanding how to modify these systems so that they expose user-adjustable parameters. We covered this in the second VAE presentation in this post, and in the latent space disentanglement discussion when we took a look at how StyleGAN works in this post.
The Distill.pub post by Google researchers on Feature Visualization that Xander mentioned is really great. You can check it out here. I originally read this post on an iPad, and it's really worth viewing in a browser on a big-screen computer like an iMac, because the images display much larger.
Here's another blog post on Deep Feature Visualization, which includes some info on activating individual neurons, the DeepDream work, and an explanation of the DeepVis Toolbox.
If you want to run the DeepVis Toolbox yourself, you can find it here.
The original paper by Zeiler and Fergus on feature-space visualization that he mentioned is available here.
If you are interested in the music recommendation system work, check it out here.