HTC Education Series: Getting Started with Deep Learning - Lesson 9
Our journey into getting started with deep learning continues. Today, we head upstream into uncharted territory. What I mean by that is that the first 8 lessons covered the complete first part of the fastai v2 course. We're all eagerly awaiting Jeremy and crew's release of v2 Part 2. What to do until that happens?
There are still a ton of cool things to learn, and as it turns out, there are several previous years of the fastai course we can mine for additional information on these topics, along with the HTC-specific material we always include in our lessons (which is not part of the fastai course).
Today's first lecture is by Jeremy of fastai: Lesson 7 from the 2019 fastai course. We'll be covering things like 'skip connections' and the fabulous U-Net architecture, which uses skip connections internally to dramatically improve image segmentation results with auto-encoder neural networks. Jeremy then uses the U-Net architecture to construct a 'super-resolution' model that can increase the size of images, as well as remove things like JPEG artifacts or watermarks from images. He then briefly discusses GANs.
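To make the idea concrete before the lecture: a skip connection simply routes a layer's input around a block of convolutions and adds it back to the output, which keeps gradients flowing in deep networks. Here's a minimal PyTorch sketch (the class name and structure are illustrative assumptions, not fastai's actual implementation):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Computes y = relu(conv(conv(x)) + x): the identity 'skip' path
    lets the gradient flow directly back to earlier layers."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection: add the input back

x = torch.randn(1, 16, 8, 8)
y = ResBlock(16)(x)
print(y.shape)  # torch.Size([1, 16, 8, 8]) -- shape is preserved
```

Because the 3x3 convolutions use padding=1, the block preserves spatial dimensions, so the addition is always shape-compatible.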
Keep in mind that this fastai lecture uses the previous version of the fastai API, not the new v2 one. We'll discuss what this means in greater detail below.
You can stop this lecture at 1 hour, 38 minutes in (unless you are dying to see the rest of it), because the remainder of the lecture covers recurrent neural networks for NLP, and Jeremy covered essentially the same material in last week's lesson.
What is covered in this lecture?
Basic Convolutional Neural Network (CNN)
Additional HTC Course Material
1: Our second lecture in this lesson is from the Stanford University CS231n course from 2017, presented by Justin Johnson. That name should be familiar: at the end of the segment of Jeremy's lecture we're interested in for this lesson, he talked about style transfer and showed a diagram from a paper by Justin Johnson et al.
1: Skip connections and the U-Net architecture are hugely important. They are a key factor in getting the best results from any upsizing-style neural net model.
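In a U-Net the skip is usually a concatenation rather than an addition: each decoder stage stitches in the matching encoder feature map, so fine spatial detail lost during downsampling is restored on the way back up. A toy one-stage sketch in PyTorch (the class and layer names are illustrative assumptions, not fastai's DynamicUnet):

```python
import torch
import torch.nn as nn

class TinyUnetStep(nn.Module):
    """One down/up stage with a concatenation skip, U-Net style."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(3, 16, 3, stride=2, padding=1)   # encoder: halve H, W
        self.up = nn.ConvTranspose2d(16, 16, 2, stride=2)      # decoder: restore H, W
        self.merge = nn.Conv2d(16 + 3, 8, 3, padding=1)        # fuse after concat

    def forward(self, x):
        d = self.down(x)                  # (N, 16, H/2, W/2)
        u = self.up(d)                    # back to (N, 16, H, W)
        cat = torch.cat([u, x], dim=1)    # skip: concat the encoder input
        return self.merge(cat)            # (N, 8, H, W)

x = torch.randn(1, 3, 32, 32)
out = TinyUnetStep()(x)
print(out.shape)  # torch.Size([1, 8, 32, 32])
```

The concatenation is why U-Nets excel at segmentation and super-resolution: the decoder doesn't have to reinvent edges and textures the encoder already saw at full resolution.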
Need to review something from a previous lesson in the course?
You can access Lesson 1 here.
You can access Lesson 2 here.
You can access Lesson 3 here.
You can access Lesson 4 here.
You can access Lesson 5 here.
You can access Lesson 6 here.
You can access Lesson 7 here.