HTC Education Series: Deep Learning with PyTorch Basics - Lesson 6

Let's continue with lesson 6, the final lesson in our HTC Deep Learning with PyTorch Basics course. The team at Jovian continues their home-run streak from last week with another great, information-packed lecture.

In this lesson you will learn how a GAN works, how to construct one from scratch in PyTorch, and how to train it. This is the clearest explanation of how to actually build a working GAN I've seen so far, so I think you will find it extremely useful.
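To preview what "from scratch" means here, the sketch below shows the two networks a GAN pairs together. This is a minimal illustration, not the lesson's exact code; the MNIST-sized 28x28 images, the 64-dimensional latent vector, and the layer widths are all assumptions.

```python
import torch
import torch.nn as nn

latent_size = 64        # assumed latent vector size
image_size = 28 * 28    # assumed MNIST-sized images

# Discriminator: image -> probability that the image is real
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(image_size, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

# Generator: latent vector -> image. The Tanh output keeps pixel
# values in [-1, 1], matching images normalized to that range.
generator = nn.Sequential(
    nn.Linear(latent_size, 256),
    nn.ReLU(),
    nn.Linear(256, image_size),
    nn.Tanh(),
    nn.Unflatten(1, (1, 28, 28)),
)

z = torch.randn(16, latent_size)     # a batch of random latent vectors
fake_images = generator(z)           # shape (16, 1, 28, 28)
scores = discriminator(fake_images)  # shape (16, 1), values in (0, 1)
```

The two networks are adversaries: the discriminator learns to tell real images from generated ones, while the generator learns to fool it.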

The lesson then continues with an explanation of transfer learning, which you will also build from scratch based on a ResNet-49 model. This part is heavily based on some fastai material you will have seen before if you took our other HTC deep learning course.

Let's get started.

You can access the Jovian course page associated with this lecture here.

What was covered in this lecture

generative modeling

generative adversarial networks

  generator - discriminator

building a discriminator network

building a generator network

Tanh activation function

training the discriminator

training the generator

full training loop

transfer learning
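The two training steps from the list above (train the discriminator, then train the generator) can be sketched as a single minimal iteration. The networks and the batch here are stand-ins, not the lesson's code; the learning rates and batch size are assumptions:

```python
import torch
import torch.nn as nn

latent_size = 64
# Tiny stand-in networks so the loop is self-contained
generator = nn.Sequential(nn.Linear(latent_size, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

real_images = torch.randn(32, 784)  # stand-in for a real data batch
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

# Step 1: train the discriminator to output 1 on real images
# and 0 on generated ones (detach so the generator isn't updated).
z = torch.randn(32, latent_size)
fake_images = generator(z)
d_loss = (criterion(discriminator(real_images), real_labels)
          + criterion(discriminator(fake_images.detach()), fake_labels))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Step 2: train the generator to fool the discriminator, i.e. to
# make it output 1 ("real") on freshly generated images.
z = torch.randn(32, latent_size)
g_loss = criterion(discriminator(generator(z)), real_labels)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

A full training loop just repeats these two steps over many batches and epochs, usually logging both losses to watch for collapse.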

Don't forget to check out the associated Jupyter notebooks

Generating anime faces using GANs here

Generating handwritten digits using GANs here

Transfer learning using pre-trained models here

Additional HTC Material

1.  We're going to borrow some material from our other HTC Getting Started with Deep Learning course that gives a good overview of GANs.  So let's have Xander get you pumped up about GANs.  He then runs through the StyleGAN architecture.

Note that there is a Jupyter notebook associated with this video.

Note that the objective function in the original GAN paper does not work well in practice. Hence the many 'make it work better' tricks in the literature.

Which trick did we use in this lesson?
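As background (this is general GAN literature, not the lesson itself): the value function in the original GAN paper is the minimax objective

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

and the best-known fix is the non-saturating generator loss: instead of minimizing \(\log(1 - D(G(z)))\), the generator maximizes \(\log D(G(z))\), which gives much stronger gradients early in training, when the discriminator easily rejects fakes.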

2.  The StyleGAN and StyleGAN 2 research and associated papers have been hugely influential.  Henry AI Labs will take us on a deep dive into the gory technical details of the enhancements in the StyleGAN 2 architecture.

Some of this may seem overwhelming at first, but remember your understanding of the individual block components of CNN architectures and the GAN example you built in this lesson.  Then you will realize that you already know a large chunk of what he is presenting in this video.  The changes are things like different optimizers and new tweaks to the standard CNN module components you are already familiar with.

You probably aren't familiar with skip connections (and the U-Net architecture) yet, but if you understand a ResNet block you can figure them out: they do something similar, except the skips go between different layers rather than staying internal to a single block.
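A tiny sketch of that difference, using hypothetical shapes: a ResNet block adds its input back to its own output, while a U-Net-style skip carries an early feature map across the network and concatenates it onto a later one.

```python
import torch
import torch.nn as nn

# ResNet-style skip: the input is added back inside the block itself.
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x) + x)  # skip stays within the block

# U-Net-style skip: an encoder feature map is concatenated onto a later
# decoder feature map, carrying fine spatial detail across the network.
def unet_style_skip(encoder_features, decoder_features):
    return torch.cat([encoder_features, decoder_features], dim=1)

x = torch.randn(1, 8, 16, 16)
res_out = ResidualBlock(8)(x)                              # same shape as x
skip_out = unet_style_skip(x, torch.randn(1, 8, 16, 16))   # channels doubled
```

The additive skip preserves shape; the concatenating skip doubles the channel count, so the following layer must expect the combined channels.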

The latent space of a generative model is also something you probably aren't really familiar with (other than Xander's presentation above).  We have a number of different HTC posts tagged latent space to help you get started.


3. If you are interested in generative models, we have a number of HTC posts on them.

Need to review something from the previous lessons in the course?
No problem.

You can access the first lesson here.

You can access the second lesson here.

You can access the third lesson here.

You can access the fourth lesson here.

You can access the fifth lesson here.

You just completed lesson 6 here.

Keep in mind that HTC courses are works in progress until the final lesson post, so previous and current posted lessons may be tweaked to optimize as the course progresses.

