Generative Adversarial Networks
Generative Adversarial Networks, or GANs for short, are worth getting to know more about. GANs are a particularly hot topic in recent deep learning neural network research.
They are a special kind of deep neural network composed of two components. One side of the net is a generative network. The other, adversarial side looks at what the generator half outputs and decides whether it is good output or not. So there is a competition going on between the two sides that forces both of them to get better at what they do: better at generating, and better at evaluating the quality of what was generated.
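To make that competition concrete, here is a minimal sketch of the two-player training loop on a toy 1D problem in plain NumPy. All names and hyperparameters are my own illustration, not from any paper linked below: the generator is a linear map a*z + b trying to imitate samples from a Gaussian, and the discriminator is a logistic regression trying to tell real samples from generated ones.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_gan_1d(steps=2000, batch=64, lr=0.05, seed=0):
    """Toy 1D GAN: generator g(z) = a*z + b vs. logistic discriminator D(x)."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0      # generator params: fakes start out as N(0, 1)
    w, c = 0.0, 0.0      # discriminator params: starts out undecided, D(x) = 0.5
    for _ in range(steps):
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b                      # generator's samples
        real = rng.normal(4.0, 0.5, batch)    # the "real" data distribution
        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
        c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
        # Generator: gradient ascent on the non-saturating objective log D(fake)
        d_fake = sigmoid(w * fake + c)
        a += lr * np.mean((1 - d_fake) * w * z)
        b += lr * np.mean((1 - d_fake) * w)
    return a, b
```

Each update improves one player against the current state of the other; as training proceeds, the generated distribution (mean b, since z has zero mean) should drift toward the real mean of 4, and at equilibrium the discriminator is reduced to guessing.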
Fans of the HTC blog are probably waiting for me to bring up Karl Sims's work on competing ecologies of robots. And it's true that the basic idea behind how a GAN works (a competitive ecology) has been around for a while. We recently had a talk by the Darwinian biologist Richard Dawkins in our weekly seminar series. Dawkins discusses the properties of competitive Darwinian ecologies in many of his famous books on Darwinian biology.
Here's an MIT Technology Review article on Ian Goodfellow and GAN neural nets.
Here's another paper on Self-Attention Generative Adversarial Networks.
This is a more recent paper than the original 2014 Goodfellow et al. paper, and it covers new work aimed at fixing a limitation of the original approach (localized convolutional nets capture only limited global statistics) by incorporating self-attention. I believe this is the effect discussed in the second podcast below that led to the increased performance in his work with GANs.
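To illustrate what incorporating self-attention means at the layer level, here is a small NumPy sketch of a SAGAN-style self-attention block over a flattened feature map. The shapes and names are my own illustration, not code from the paper: each spatial position computes attention weights over every other position, so the layer can draw on global statistics that a small convolution kernel cannot see. The SAGAN paper initializes the residual weight gamma at zero, so the block starts out as an identity map and learns to use attention gradually.

```python
import numpy as np

def self_attention(x, wq, wk, wv, gamma=0.0):
    """SAGAN-style self-attention over a flattened feature map.

    x:  (C, N) array, C channels at N spatial positions
    wq, wk: (C', C) query/key projections (the paper's 1x1 convolutions)
    wv: (C, C) value projection
    gamma: learned residual weight, initialized to 0 in SAGAN
    """
    q = wq @ x          # (C', N) queries
    k = wk @ x          # (C', N) keys
    v = wv @ x          # (C, N) values
    logits = q.T @ k    # (N, N): every position attends to every other position
    logits -= logits.max(axis=1, keepdims=True)  # subtract row max for stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over key positions
    out = v @ attn.T    # (C, N) attention-weighted mix of values
    return x + gamma * out                        # residual connection
```

Note the (N, N) attention matrix: unlike a 3x3 convolution, a position in one corner of the feature map can directly pull information from the opposite corner.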
I’m also including a second Nvidia podcast, a discussion with a software developer using GANs to colorize old black-and-white photos. This is a good contrast to Ian’s podcast, since it is with a software developer who is not a neural network expert or researcher, but is just using the technology to do cool things. Note that his original GAN approach did not work until he added self-attention to his GAN implementation.
We discussed GANs previously in our post on Generative Neural Models.