OpenAI Microscope - Visualizing Neural Networks

So OpenAI has a new tool available called Microscope. It lets you visualize every significant layer and neuron in eight different deep learning neural networks used for processing images.

They are AlexNet, AlexNet (Places), Inception V1, Inception V1 (Places), VGG 19, Inception V3, Inception V4, and ResNet v2 50.


The OpenAI Microscope is based on two concepts: a location in a model and a technique. Metaphorically, the location is where you point the microscope, and the technique is what lens you affix to it.


The models are composed of a graph of “nodes” (the neural network layers), which are connected to each other through “edges.” Each node (or “op”) contains hundreds of “units,” which are roughly analogous to neurons. Most of the techniques are useful only at a specific resolution. For instance, feature visualization can only be pointed at a “unit,” not its parent “node.”
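To make the node/unit distinction concrete, here is a small illustrative sketch in plain NumPy (the layer shape and unit index are made up): a node's activations for one image form a height x width x channels tensor, and each channel is roughly one unit.

```python
import numpy as np

# Illustrative only: pretend this is the activation tensor of one "node" (layer)
# for a single input image, with shape (height, width, channels).
node_activations = np.random.rand(28, 28, 512)  # hypothetical conv layer output

# A "unit" corresponds to one channel of that node; feature visualization
# targets a single unit like this, not the whole node.
unit_index = 476  # hypothetical unit id
unit_activations = node_activations[:, :, unit_index]

print(node_activations.shape)  # (28, 28, 512): the whole node
print(unit_activations.shape)  # (28, 28): one unit's spatial response map
```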

Check it out for ResNet v2 50.
This tool is amazing!

Currently they offer two visualization techniques, DeepDream and Caricature. DeepDream produces an artificial, optimized image that maximizes the activations of all units in a node. Caricature produces an artificial, optimized image that maximizes the activations produced in response to a real, given image, which you can select from a list of different images.
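As a rough sketch of the difference, here is how you might express the two kinds of objectives with the lucid library mentioned further down (TensorFlow 1.x; the model, layer name, and unit index are just examples, and Microscope's exact settings may differ):

```python
# A minimal sketch using lucid (TensorFlow 1.x); layer and unit chosen for illustration.
import lucid.modelzoo.vision_models as models
from lucid.optvis import render, objectives

model = models.InceptionV1()  # one of the models Microscope covers
model.load_graphdef()

# Feature visualization of a single unit: optimize an image that maximally
# activates unit 476 of the "mixed4a_pre_relu" node.
render.render_vis(model, objectives.channel("mixed4a_pre_relu", 476))

# DeepDream-style objective: optimize an image that boosts activations across
# all units of a node at once, rather than a single unit.
render.render_vis(model, objectives.deepdream("mixed4a"))
```

The key difference is just the objective: channel() targets a single unit, while deepdream() rewards activation across every unit in the node. Caricature additionally starts from a real input image rather than pure noise.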

Once you dive into a specific layer, you get additional visualization options.

Once you pick a specific neuron, you can dive in even deeper.


Other visualization options may become available in Microscope in the future.


They have also released the Lucid library, which contains a ton of research code useful for creating these visualizations. It looks like it is all written in TensorFlow. You can access the Lucid library on GitHub.

The library also includes a number of very useful Colab notebooks you can use to try out different aspects of Lucid.
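If you want to poke at it outside the provided notebooks, getting started looks roughly like this (a sketch based on the lucid tutorial; lucid targets TensorFlow 1.x, and the "layer:unit" string below is just an example location):

```python
# Rough quick-start sketch; in Colab you would first run: !pip install --quiet lucid
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()  # pick a model from lucid's model zoo
model.load_graphdef()

# "layer:unit" shorthand: visualize unit 476 of the mixed4a_pre_relu layer.
render.render_vis(model, "mixed4a_pre_relu:476")
```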


Hopefully some pieces of the Lucid library will get ported over to the fastai API in PyTorch.

Why do we care so much about feature visualization?
Here's a great online paper by some folks at Google, 'The Building Blocks of Interpretability', that takes a deep dive into using feature visualization to help understand what is going on inside neural nets.
