HTC Seminar #12: Evolution, Intelligence, Simulation, and Memes
Let's stretch our brains a little bit this week beyond our normal deep learning focus. This week's HTC Seminar is a conversation with Richard Dawkins. Titled Richard Dawkins: Evolution, Intelligence, Simulation, and Memes, it is an episode in Lex Fridman's AI Podcast series. Richard Dawkins is an evolutionary biologist and the author of many great books you should really sit down and read, including The Selfish Gene and The Blind Watchmaker.
You can skip the first 2 minutes of sponsorship ad mentions if you want to jump right to the interview.
This 2020 talk by Shirley Ho of the Flatiron Institute discusses the use of graph neural networks to model physical systems that involve n-body interactions. Moving on to the main presentation, Giuseppe Carleo of the Flatiron Institute presents a seminar on machine learning techniques for many-body quantum systems.
Like the Doublemint Twins touting the joys of Doublemint gum, 2 GANs are surely better than 1 GAN, especially if we package them together inside of one meta GAN module. And this is exactly what the CycleGAN architecture does. Have you ever harbored dark secrets of turning a horse into a zebra? The CycleGAN was developed to do just that. Learn how to turn a horse into a zebra. And more. Right away you can notice a difference from the image-to-image transformation GAN architectures we've been discussing over the last few posts. Those last few posts described systems that learn from a database of matched input-output image pairs. And if your goal is to turn an edge representation into a nicely filled-in continuous tone image, it's easy to build the database of matched input-output image pairs that your GAN system can then learn from: take a continuous tone image (which will be the output of the database pair entry), then run it through an edge detector algorithm to produce the matched input.
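To make that pair-construction recipe concrete, here is a minimal NumPy sketch. It uses a crude gradient-magnitude edge detector standing in for something like Canny or Sobel, and the function names (`edge_map`, `make_pair`) are made up for illustration, not from any CycleGAN or Pix2Pix codebase.

```python
import numpy as np

def edge_map(image):
    """Crude edge detector: gradient magnitude from finite differences.
    A stand-in for a real detector (Canny, Sobel) just to show the idea."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def make_pair(image):
    """Build one (input, target) training pair: edges in, full image out."""
    return edge_map(image), image

# Toy "continuous tone" image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

edges, target = make_pair(img)
print(edges.shape, target.shape)      # (64, 64) (64, 64)
print(edges[16, 16] > edges[32, 32])  # True: edges fire at the boundary, not inside
```

Looping `make_pair` over a folder of photographs gives you the matched database automatically, which is exactly why the paired-data setup is so easy for the edges-to-photo task and so hard for horses-to-zebras.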
I thought following up yesterday's TraVelGAN post with a Pix2Pix GAN post would be useful for comparing what is going on in the 2 architectures. Two different approaches to the same problem. I stole the Pix2Pix overview slide below from an excellent deeplearning.ai GAN course (note that they borrowed it from the original paper) because it gives you a good feel for what is going on inside of the Pix2Pix architecture. Note how the Generator part is very much like an auto-encoder architecture, but rebuilt using the U-Net architecture features (based on skip connections) that fastai had been discussing in their courses for several years before they became more widely known to the deep learning community at large (and which originally came from an obscure medical image segmentation paper). So the Generator in this Pix2Pix GAN is really pretty sophisticated, consisting of a whole image-to-image auto-encoder network with U-Net skip connections to generate better image quality at higher resolutions.
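To see what those skip connections buy you, here is a minimal NumPy sketch (not the actual Pix2Pix code, and with convolutions replaced by simple pooling/upsampling so the shapes are easy to follow) tracing feature maps through a toy U-Net-style encoder-decoder. Each decoder stage upsamples and then concatenates the matching encoder activation along the channel axis, which is what lets fine spatial detail bypass the low-resolution bottleneck.

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 average pooling (stride 2) halves H and W."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """Decoder step: nearest-neighbor upsampling doubles H and W."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Toy input: 3-channel 32x32 "image" (channels, height, width).
x0 = np.random.rand(3, 32, 32)

# Encoder path: keep each activation for its skip connection.
x1 = down(x0)   # (3, 16, 16)
x2 = down(x1)   # (3, 8, 8) -- the bottleneck

# Decoder path: upsample, then concatenate the matching encoder
# activation along the channel axis (the U-Net skip connection).
d1 = np.concatenate([up(x2), x1], axis=0)   # (6, 16, 16)
d0 = np.concatenate([up(d1), x0], axis=0)   # (9, 32, 32)

print(d1.shape, d0.shape)  # (6, 16, 16) (9, 32, 32)
```

In the real generator, each of these steps is a learned convolution block rather than plain pooling, but the shape bookkeeping (and the channel-wise concatenation at each resolution) works the same way.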