Showing posts from February, 2021

Biologically Plausible Neural Networks

 Great discussion with Simon Stringer from the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence. Pay special attention to the discussion of the 'feature binding' problem. If you are a computer artist, think about Gestalt principles when he's talking about this. How the theoretical models work off of streams of varying images (from saccades) is also fascinating, and should get you thinking about data augmentation. The conversation covers the emergence of self-organized behavior, complex information processing, invariant sensory representations, and hierarchical feature binding, which arises when you build biologically plausible neural networks with temporal spiking dynamics. Here's a link to the paper 'A new approach to solving the feature-binding problem in primate vision'. And here's a link to Simon's research page if you want to learn more about his work.

Refactoring a PyTorch VAE to PyTorch Lightning

 Since we've been focusing on VAEs recently, let's take a look at how to refactor one into PyTorch Lightning.

Adversarial Examples

 Today's presentation is a discussion with three experts in the study of deep learning adversarial examples: fooling a trained neural network into mis-classifying an image by adding designed noise to it. We also get introduced to some cool new papers on the topic that relate to human perception of images. Let's check it out.
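To ground the idea of "designed noise," here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM). This is a standard textbook technique, not something specific to the discussion: it takes one step of size `eps` in the direction that increases the model's loss, producing a perturbation that is nearly invisible but can flip the prediction.

```python
# Sketch of the Fast Gradient Sign Method (FGSM) adversarial attack.
# `eps` bounds the per-pixel perturbation; 0.03 is an illustrative value.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """Return an adversarial copy of x that nudges the model's loss upward."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step eps along the sign of the input gradient, then clamp back
    # into the valid pixel range [0, 1].
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

Because only the sign of the gradient is used, every pixel moves by at most `eps`, which is why the perturbed image looks unchanged to a human while the classifier's loss increases.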

Let's Code a U-Net in PyTorch

 U-Nets are the hot architecture. They have come up over and over again here at HTC, from the original fast.ai lectures to what seems like every other generative model these days. Aladdin is going to show us how it's done, using PyTorch of course.
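Before diving into the video, here is a minimal sketch of the core U-Net idea in PyTorch: a contracting path, an expanding path, and skip connections that concatenate encoder features into the decoder. Channel counts and depth here are illustrative assumptions (a real U-Net, and Aladdin's version, will be deeper).

```python
# Tiny U-Net sketch: downsample, upsample, and concatenate skip connections.
# Channel sizes and depth are illustrative, not from the video.
import torch
import torch.nn as nn


def double_conv(in_c, out_c):
    """Two 3x3 convs with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_channels=1, out_channels=2, base=16):
        super().__init__()
        self.down1 = double_conv(in_channels, base)
        self.down2 = double_conv(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)  # *4: upsampled + skip
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, out_channels, 1)

    def forward(self, x):
        d1 = self.down1(x)               # encoder feature, saved as skip 1
        d2 = self.down2(self.pool(d1))   # encoder feature, saved as skip 2
        b = self.bottleneck(self.pool(d2))
        # Decoder: upsample, then concatenate the matching encoder feature.
        u2 = self.dec2(torch.cat([self.up2(b), d2], dim=1))
        u1 = self.dec1(torch.cat([self.up1(u2), d1], dim=1))
        return self.head(u1)             # per-pixel class logits
```

The skip connections are the whole trick: they let fine spatial detail from the encoder bypass the bottleneck, which is why U-Nets output sharp per-pixel predictions at the full input resolution.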