How Deep Neural Nets Learn - Adversarial Examples

This is a great overview presentation on how to fool current deep learning neural net systems using what are called adversarial examples. Understanding how adversarial examples work is important not just for understanding the limitations and hackability of current neural net systems, but also for further understanding what is going on under the hood of these systems.


As always, Xander does a great job of providing a concise and compelling explanation of some really exciting new deep learning research. I hope you found this most recent presentation on adversarial examples as fascinating as I did.

Now there is a reason why Xander included this topic in the more general series he put together focused on Feature Visualization in deep learning systems. In Part 1, he dove into various approaches we can use to pictorially visualize what is being learned by the features of deep learning models.
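To make the Part 1 idea concrete: feature visualization by optimization starts from a noise input and runs gradient ascent on that input to find a pattern that maximally excites a chosen feature. Here is a minimal numpy sketch of that loop on a toy model — the single linear "feature detector" and all of its weights are hypothetical stand-ins for a unit inside a real deep net:

```python
import numpy as np

# A minimal sketch of feature visualization by optimization, on a toy model.
# The "network" is one hypothetical linear feature detector; real systems
# use a deep net and an image-sized input, but the loop is the same shape.

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # hypothetical learned feature weights

def feature_activation(x):
    return float(w @ x)           # stand-in for one unit's activation

# Start from small noise and run gradient ascent on the *input*.
x = rng.normal(size=8) * 0.01
lr = 0.1
for _ in range(100):
    grad = w                      # d(w @ x)/dx = w for this toy model
    x = x + lr * grad             # nudge the input to excite the feature
    x = x / np.linalg.norm(x)     # keep the input bounded, as real methods do

# The optimized input ends up aligned with the feature's weight vector,
# i.e. it is the pattern this unit "looks for".
```

In a real deep net the gradient is computed by backpropagation through the whole network, and extra regularizers keep the optimized image natural-looking; the toy version above just shows the ascend-on-the-input mechanic.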

In today's Part 2 presentation, he shows us how people took the computational approaches used for the feature visualization discussed in Part 1 and subverted them to generate images that fool the deep learning system (or at least lead its output in the direction they want it to be led). The system itself is just doing what it was trained to do.
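The subversion is literally the same gradient machinery run with a different goal: instead of ascending on the input to excite a feature, you ascend on the input to *increase the loss* on the true label. One well-known instance of this idea is the Fast Gradient Sign Method (FGSM). Below is a minimal numpy sketch on a toy logistic classifier — the weights, input, and epsilon are all made-up illustrative values, not anything from the presentation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: w and b are hypothetical fixed parameters.
w = np.array([0.9, -0.4, 0.7])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)     # probability of class 1

def fgsm(x, y_true, eps):
    """Perturb x in the direction that increases the loss, nudging the
    prediction away from the true label y_true (0 or 1)."""
    p = predict(x)
    # Gradient of the cross-entropy loss w.r.t. the input is
    # (p - y_true) * w for this linear model.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.2])    # a sample the model classifies as class 1
x_adv = fgsm(x, y_true=1, eps=0.5)

print(predict(x), predict(x_adv)) # the prediction drops after the attack
```

With real images, epsilon is chosen so small that the perturbation is imperceptible to a human, yet the classifier's output still flips — which is exactly what makes these examples so unsettling.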

The fact that we can so easily generate adversarial images that obviously look wrong to humans but fool the deep learning system probably means that we don't yet have a totally clear understanding of what the system is actually learning. 

Perhaps we need to use error functions in the model's optimization that better reflect actual human visual perception of images.

And perhaps the deep learning system itself is taking advantage of weaknesses in human visual perception that we don't fully understand yet: a kind of perceptual metamerism the system exploits during training. We are going to get into this topic in more detail in a future post.

You are going to see this adversarial learning concept in many different scenarios throughout this course. GANs are an extremely hot topic in current deep learning research that we are going to seriously dive into later in the course. The A in GAN stands for 'Adversarial'.  So it's our old friend adversarial learning again, but this time leading the charge for the forces of good, rather than the forces of evil.

You can take a deep dive into HTC's extensive interest in GAN technology here.

There's a long history of research into adversarial learning systems. In evolutionary biology, in game theory, in economics. Virtual robots have used adversarial learning to train for robot warfare competitions. All of this has been discussed in some form in previous HTC posts.


