HTC Education Series: Getting Started with Deep Learning - Lesson 9

Our journey into getting started with deep learning continues.  Today, we head upstream into uncharted territory.  What I mean by that is that the first 8 lessons covered the complete first part of the fastai v2 course, and we have now finished it.  We're all eagerly awaiting Jeremy and crew's v2 Part 2.  What to do until that happens?

There are still a ton of cool things to learn.  And as it turns out, there are several previous years of the fastai course we can mine for additional information on these topics, along with the additional HTC-specific material that we always include in our lessons (and which is not part of the fastai course).

Today's first lecture is one by Jeremy of fastai from last year: Lesson 7 from the 2019 fastai course.  We'll be covering things like 'skip connections', and the fabulous U-Net architecture, which uses skip connections internally to dramatically improve image segmentation results with auto-encoder-style neural networks.  Jeremy then uses the U-Net architecture to construct a 'super-resolution' model that can increase the size of images, as well as remove things like JPEG artifacts or watermarks.  He then briefly discusses GANs.
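To give you a taste of what a skip connection actually looks like in code, here is a minimal residual block sketched in plain PyTorch.  This is our own illustration (the class name and channel sizes are made up), not code from Jeremy's lecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """y = x + F(x): the skip connection adds the input back onto the conv output."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)  # the "+" here IS the skip connection

x = torch.randn(1, 16, 32, 32)
block = ResBlock(16)
print(block(x).shape)  # same shape as the input: torch.Size([1, 16, 32, 32])
```

Because the block only has to learn the *difference* from the identity, gradients flow easily through the `+`, which is why very deep networks train so much better with skips.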

Keep in mind that this fastai lecture uses the previous version of the fastai API, not the new v2 one.  We'll discuss what this means in greater detail below.

You can stop this lecture at 1 hour, 38 minutes in (unless you are dying to see the rest of it).  The reason is that the remainder of the lecture covers recurrent neural networks for NLP, and Jeremy covered essentially the same material in last week's lesson.

What is covered in this lecture?

Basic convolutional neural network (CNN)
    MNIST example
    residual learning (ResNet) block
    visualizing the loss landscape of neural networks
Skip connections
    semantic segmentation
U-Net architecture
    convolutional arithmetic
Super resolution
    data augmentation (crappify)
Custom loss functions
    feature loss
    gram loss
    leveraging transfer learning
    Wasserstein GAN
Style transfer
B&W image colorization

(The lecture continues with RNNs at the end, but that material was covered last week.)
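In the super-resolution segment, Jeremy builds training pairs by 'crappifying' good images: shrinking them and re-saving them at low JPEG quality, then training a model to reverse the damage.  Here's a rough sketch of that idea using Pillow (our own simplified version with made-up defaults, not fastai's actual crappify function):

```python
import io
from PIL import Image

def crappify(img, scale=2, quality=15):
    """Make a degraded copy of an image: the model learns to map crappy -> clean."""
    # 1. Downscale to throw away fine detail.
    small = img.resize((img.width // scale, img.height // scale), Image.BILINEAR)
    # 2. Round-trip through a low-quality JPEG to introduce compression artifacts.
    buf = io.BytesIO()
    small.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

img = Image.new("RGB", (64, 64), color=(200, 120, 40))
bad = crappify(img)
print(bad.size)  # (32, 32)
```

The clever part is that the labels come for free: the original image is the target, so you can generate as much paired training data as you have photos.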

Additional HTC Course Material

1:  Our second lecture in this lesson is from the Stanford University CS231n course from 2017, presented by Justin Johnson.  That name should be familiar: at the end of the segment of Jeremy's lecture we are covering in this lesson, he talked about style transfer and showed a diagram from a paper by Justin Johnson, et al.

And there's our beautiful lead in to the second lecture in this lesson.

Justin is going to run us through visualizing and understanding the internal mechanisms of deep learning convolutional networks.  You will notice some overlap with the additional HTC-specific material from lesson 7, which featured Andrej Karpathy (also from the CS231n course, but from the previous year).

Why watch the same material again, you might be asking?  There is some overlap, but also a lot of new stuff in this lecture.  I feel that when you are trying to learn something, seeing that material through different people's eyes and different interpretations can greatly aid you in building a much deeper understanding than you would get from the initial exposure alone.

Justin's lecture also meshes perfectly with the material Jeremy presents near the end of the segment of his 2019 lecture we focused on.  Justin starts off running through different approaches to feature visualization, getting progressively more sophisticated.  He then pivots to texture synthesis, neural texture synthesis, and finally artistic style transfer.  You will see how artistic style transfer works, and also how it relates to this whole concept of visualizing what is going on inside a neural network (feature visualization).
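The gram loss mentioned in the outline above is built on the Gram matrix of a layer's activations: the channel-by-channel correlations that capture an image's texture statistics while discarding spatial layout.  A minimal sketch in PyTorch (the function name and shapes are our own illustration, not code from the lecture):

```python
import torch

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of a conv feature map.

    features: (batch, channels, height, width) activations from some layer.
    Two images with similar Gram matrices share texture, even if their
    content is completely different -- the basis of style transfer.
    """
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)        # flatten the spatial dimensions
    gram = flat @ flat.transpose(1, 2)       # (b, c, c) channel dot products
    return gram / (c * h * w)                # normalize by feature map size

feats = torch.randn(1, 8, 16, 16)
g = gram_matrix(feats)
print(g.shape)  # torch.Size([1, 8, 8])
```

A style loss then just compares the Gram matrices of the generated image and the style image (for example with mean squared error) at several layers of a pretrained network.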

2:  One thing you should be noticing is that deep learning research is expanding at a faster rate every year.  It's now 2020, and the world continues to move on, including feature visualization of neural networks.  With that in mind, we direct you to a specific HTC post on the OpenAI Microscope.

The OpenAI Microscope lets you explore feature visualizations of a number of different deep learning architectures interactively in your browser.  Pretty exciting, right?


1:  Skip connections and the U-Net architecture are hugely important.  They are a key factor in getting the best results from any upsizing-style neural net model.
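Unlike the additive skip in a ResNet block, a U-Net skip connection *concatenates* an encoder feature map onto the matching decoder stage, so fine spatial detail lost during downsampling is available again on the way back up.  A minimal sketch of one decoder step in PyTorch (our own illustration with invented names and sizes, assuming the encoder features match the upsampled resolution):

```python
import torch
import torch.nn as nn

class UnetUpBlock(nn.Module):
    """One U-Net decoder step: upsample, concat the encoder skip, convolve."""
    def __init__(self, up_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(up_ch, up_ch, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(up_ch + skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.up(x)                    # double the spatial resolution
        x = torch.cat([x, skip], dim=1)   # the skip connection: concat channels
        return torch.relu(self.conv(x))

decoder_in = torch.randn(1, 64, 16, 16)    # bottleneck features
encoder_skip = torch.randn(1, 32, 32, 32)  # matching encoder features
out = UnetUpBlock(64, 32, 32)(decoder_in, encoder_skip)
print(out.shape)  # torch.Size([1, 32, 32, 32])
```

Stacking several of these blocks, each wired to its mirror-image encoder stage, gives the characteristic 'U' shape, and is why U-Nets produce such sharp segmentation and super-resolution outputs.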

Need to review something from the previous lessons in the course.
No problem.

You can access Lesson 1 here.

You can access Lesson 2 here.

You can access Lesson 3 here.

You can access Lesson 4 here.

You can access Lesson 5 here.

You can access Lesson 6 here.

You can access Lesson 7 here.

You just completed lesson 9.

Our original plans were to have a few more HTC-specific lessons in this particular course.  But we've decided to hold off on that for later, and dive into a new HTC course that covers the basics of PyTorch programming and writing deep learning code using only PyTorch.  We will return to fastai when the fastai v2 Part 2 lectures come online.

If you successfully made it through all of the lessons in this course (including working with the code) you will probably find this second HTC course on 'Deep Learning with PyTorch Basics' pretty simple.  If you are still trying to get up to speed on PyTorch programming you might find it useful.  In many respects it's a much simpler course than this one.  And all of the programming is done using PyTorch only.

