t-SNE Visualization


If you read anything about deep learning neural networks, you often come across statements to the effect of 'we don't really know what is going on inside of these systems'. I think what they often really mean is that they can't reduce a neural net to a set of rules that something like an expert system might use to reason, with the implicit assumption that humans also carry some set of rules in their heads that they iterate through to come to a decision.

Neural nets work very differently. The net learns from training data, and what it learns is a distributed representation: the 'knowledge' is encoded in weight parameters spread throughout the network.

But people have come up with other approaches to understanding what is going on inside of the network. Many of these approaches come down to the same thing: manifold learning. The net is learning a multi-dimensional mapping function, and fortunately for us, most (perhaps all) of reality can be embedded in lower-dimensional manifolds. Probably higher than 2 or 3 dimensions, but much lower than the dimensionality of the network itself.

A long time ago, Tomaso Poggio (Eugene McDermott Professor of Brain and Cognitive Sciences at MIT) promoted the idea that a neural network learns a nonlinear mapping function, and that networks can, in principle, learn any nonlinear mapping function. That is what they do. It's a very simple and concise way to think about them, about what they are doing and what they are learning.


Let's try a little thought experiment. There are many different color spaces, vector spaces for representing what humans perceive as different colors, and all of them have 3 dimensions. If you were to train a neural network to perceive colors the way humans do, off of human perceptual data, you would end up with that net learning a multi-dimensional mapping onto a 3-dimensional manifold.

So here we have an example of one small piece of 'reality' that maps to a 3-dimensional manifold. And of course, experiments and studies that look at what is going on in the visual cortex during color vision (human neural modeling) come to a similar conclusion, even arriving at a 3-component model for color: 1 channel that encodes luminance or lightness (the black and white channel, if you want a simple way to think about it), and 2 opponent color channels that encode the chromatic information.
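
As a purely hypothetical sketch (my illustration, not something from any study), here is roughly what such a mapping network could look like in PyTorch: a tiny regression net from a made-up 31-sample spectral input down to 3 output coordinates, trained on placeholder data standing in for real perceptual measurements.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 31 spectral samples in, 3 color coordinates out.
net = nn.Sequential(
    nn.Linear(31, 64),
    nn.ReLU(),
    nn.Linear(64, 3),   # 3D output: the learned color manifold
)

# Placeholder data just to show the training loop; real work would use
# actual human perceptual measurements.
x = torch.randn(128, 31)   # batch of input stimuli
y = torch.randn(128, 3)    # target 3D color coordinates
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
```

Whatever the input dimensionality, the 3-dimensional output forces the net to learn a mapping onto a 3D manifold, which is the point of the thought experiment.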


So, if you can wrap your mind around the concept of learning mapping functions into locally embedded manifolds, you can start to understand what these deep learning neural networks are really doing. It's no longer so mysterious.


So how could you try to visualize all of this? You've read the tale of Flatland, so you have some sense of what the inhabitants of 1D or 2D worlds might make of discovering a 3D world. And you can try to extrapolate from that to being the creature exploring 4D or higher dimensions yourself.

t-SNE is a very popular algorithm for visualizing multi-dimensional data in lower dimensions. t-SNE stands for t-Distributed Stochastic Neighbor Embedding. Geoffrey Hinton's name is on the original paper, so you know it must be interesting.

The main goal of t-SNE is to project multi-dimensional points to 2- or 3-dimensional plots so that if two points were close in the initial high-dimensional space, they stay close in the resulting projection. If the points were far from each other, they should stay far apart in the target low-dimensional space too.
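
To make this concrete, here is a minimal sketch using scikit-learn (my choice of tooling here, not something from the post): it projects the 64-dimensional handwritten digits dataset down to 2D and plots it.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()   # 1797 samples, each a 64-dimensional point

# Project 64D -> 2D; perplexity roughly controls the neighborhood size.
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(digits.data)

# Points that were neighbors in 64D should land near each other in 2D,
# so the ten digit classes tend to show up as ten clusters.
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=digits.target, cmap='tab10', s=5)
plt.show()
```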

'Wow, that sounds just like MDS (multi-dimensional scaling),' you might be thinking. Maybe the way to think about it is that t-SNE is a much better way to attack the old MDS problem space.
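
For a quick side-by-side, scikit-learn also ships a classic MDS implementation; the sketch below runs it on a subsample of the same digits data (MDS works from the full pairwise distance matrix, so it gets expensive quickly).

```python
from sklearn.datasets import load_digits
from sklearn.manifold import MDS

digits = load_digits()
X_small = digits.data[:500]   # subsample: MDS scales poorly with N

# Metric MDS tries to preserve all pairwise distances, which is why it
# tends to keep global structure at the expense of local neighborhoods.
X_mds = MDS(n_components=2, random_state=0).fit_transform(X_small)
print(X_mds.shape)   # (500, 2)
```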

Grigory Serebryakov has an excellent tutorial on t-SNE that is available on the Learn OpenCV blog.

If you read it, you will be introduced to an alternative to Keras for specifying, training, and deploying a neural network. We'll be taking a look at that PyTorch alternative in later posts. It's also what OpenAI is using, so it's going to come up again when we look at GPT-2.

If you are interested, the original 2008 paper by Laurens van der Maaten and Geoffrey Hinton is worth checking out.

Anyone who knows me well probably also knows that I was using MDS to generate interactive perceptual maps of image databases in the 90s. Boy, do I wish we had known about t-SNE back when we were doing that research. It's a far better way to do the mapping into the lower-dimensional display space for perceptual mapping.

Comments

  1. So how does t-SNE compare to MDS? I found this discussion, which might be useful for understanding the difference.

    PCA selects influential dimensions by eigenanalysis of the N data points themselves, while MDS selects influential dimensions by eigenanalysis of the N² entries of a pairwise distance matrix. This has the effect of highlighting the deviations from uniformity in the distribution. Considering the distance matrix as analogous to a stress tensor, MDS may be deemed a "force-directed" layout algorithm, the execution complexity of which is O(d·N^a) where 3 < a ≤ 4.

    t-SNE, on the other hand, uses a field approximation to execute a somewhat different form of force-directed layout, typically via Barnes-Hut, which reduces an O(d·N²) gradient-based complexity to O(d·N·log N). But the convergence properties are less well understood for this iterative stochastic approximation method (to the best of my knowledge), and for 2 ≤ d ≤ 4 the typical observed run-times are generally longer than for other dimension-reduction methods. The results are often more visually interpretable than naive eigenanalysis, and, depending on the distribution, often more intuitive than MDS results, which tend to preserve global structure at the expense of the local structure retained by t-SNE.
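
    Assuming you are using scikit-learn (an assumption on my part; the comment above is about the algorithms in general), both gradient strategies are exposed through the TSNE method parameter, as in this sketch:

    ```python
    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.rand(200, 10)   # toy data, just to exercise both solvers

    # Barnes-Hut: the O(dN log N) approximation; sklearn's default, and
    # only available when the target dimension is below 4.
    emb_bh = TSNE(n_components=2, method='barnes_hut', random_state=0).fit_transform(X)

    # Exact: the O(dN^2) gradient, feasible only for small N.
    emb_ex = TSNE(n_components=2, method='exact', random_state=0).fit_transform(X)
    ```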
