HTC Education Series: Deep Learning with PyTorch Basics - Lesson 2

This is the second lesson in the HTC Deep Learning with PyTorch Basics course. It begins with the second lecture in the Jovian.ai course 'Deep Learning with PyTorch: Zero to GANs'.

This second lecture is called 'Working with Images and Logistic Regression'. In it we explore how to work with images from the MNIST handwritten-digit dataset, create training and validation sets for our model, and train a logistic regression model using softmax activation and cross-entropy loss.


You can also access the video at the Jovian site here, along with several Jupyter notebooks associated with this lesson and a community discussion forum if you want to use it.

What was covered in the lecture

Working with your Jupyter notebook

Working with images in PyTorch (minimal code sketches for this and the items below follow this list)

The famous MNIST dataset (handwritten digits 0-9)

Splitting a dataset into training and validation sets (why?)

Batch processing

Creating a custom PyTorch model by extending the nn.Module class

What softmax is and why you might use it (interpreting model classification output as probabilities that sum to one)

Picking evaluation metrics and loss functions for training your model

Setting up a training loop to evaluate a model

Testing your trained model

Saving and reloading a trained model
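
To make these concrete, here are a few minimal PyTorch sketches in the same spirit as the lecture. The variable names and hyperparameter choices below are ours, not necessarily the lecture's. First, loading the MNIST images with torchvision:

    import torch
    from torchvision import datasets, transforms

    # Download MNIST: 60,000 training images of handwritten digits (0-9).
    # ToTensor() converts each PIL image to a 1x28x28 float tensor in [0, 1].
    dataset = datasets.MNIST(root='data/', train=True, download=True,
                             transform=transforms.ToTensor())

    image, label = dataset[0]
    print(image.shape, label)   # torch.Size([1, 28, 28]) 5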
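
Next, splitting off a validation set so we can measure how well the model generalizes to data it never trained on, then wrapping both splits in data loaders for batch processing. The 50,000/10,000 split and the batch sizes are typical choices, not requirements:

    from torch.utils.data import random_split, DataLoader

    # Hold out 10,000 of the 60,000 images for validation
    train_ds, val_ds = random_split(dataset, [50000, 10000])

    # Process the data in batches; shuffle the training set each epoch
    train_loader = DataLoader(train_ds, batch_size=128, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=256)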
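
Here's a logistic regression model built by extending the nn.Module class. Softmax converts the model's raw outputs (logits) into probabilities that sum to one:

    import torch.nn as nn
    import torch.nn.functional as F

    class MnistModel(nn.Module):
        # Logistic regression: one linear layer from 784 pixels to 10 classes
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(28 * 28, 10)

        def forward(self, xb):
            xb = xb.reshape(-1, 28 * 28)   # flatten each image into a vector
            return self.linear(xb)         # raw per-class scores (logits)

    model = MnistModel()

    images, labels = next(iter(train_loader))
    probs = F.softmax(model(images), dim=1)   # each row now sums to 1.0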
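
For training we minimize cross-entropy loss (which operates directly on the logits, applying log-softmax internally), while tracking accuracy as the human-readable evaluation metric. A bare-bones training loop, with illustrative values for the learning rate and epoch count:

    def accuracy(outputs, labels):
        preds = outputs.argmax(dim=1)            # most probable class
        return (preds == labels).float().mean()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(5):
        for images, labels in train_loader:
            loss = F.cross_entropy(model(images), labels)
            loss.backward()         # compute gradients
            optimizer.step()        # update the weights
            optimizer.zero_grad()   # reset gradients for the next batch

        with torch.no_grad():       # no gradients needed for evaluation
            accs = [accuracy(model(x), y) for x, y in val_loader]
            print(f'epoch {epoch}: val accuracy {torch.stack(accs).mean().item():.4f}')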
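
Finally, saving and reloading the trained model. Saving the state_dict (just the learned weights) rather than the whole model object is the usual PyTorch pattern:

    # Save the learned weights to disk
    torch.save(model.state_dict(), 'mnist-logistic.pth')

    # To reload: rebuild the architecture, then load the weights into it
    model2 = MnistModel()
    model2.load_state_dict(torch.load('mnist-logistic.pth'))
    model2.eval()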


Additional HTC Course Material

1:  The goal of this course is to get you over the initial 'getting started' hump of working with PyTorch (and Python). At the same time, we want to move you further upstream, or at least make you aware of where you ultimately want to end up. With that goal in mind, let's look at some real PyTorch code being written in a tandem coding session.

The following video is a live coding session with Alfredo Canziani and William Falcon of NYU. Watch Alfredo create a PyTorch classification model from scratch while William observes and comments.


The model Alfredo codes up in PyTorch is more advanced than the one in the first video lecture above (though it still uses MNIST data). It uses residual connections and can run on a GPU.

Note how Alfredo structures his code:

    architecture definition

    optimizer

    loss function

    data loader

    the 5 training steps

This structure is a generic pattern you will work with over and over again each time you build and train a new neural net model.
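
Here is that skeleton in sketch form. The two-layer stand-in network and the random data are only there to make the snippet self-contained (Alfredo's actual model uses residual connections); the structure is what matters:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # architecture definition (a stand-in two-layer net)
    model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

    # optimizer and loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    # data loader (random stand-in data)
    data = TensorDataset(torch.randn(512, 784), torch.randint(0, 10, (512,)))
    loader = DataLoader(data, batch_size=64, shuffle=True)

    # the training steps, repeated for every batch
    for x, y in loader:
        y_hat = model(x)                # 1. forward pass
        loss = criterion(y_hat, y)      # 2. compute the loss
        optimizer.zero_grad()           # 3. reset accumulated gradients
        loss.backward()                 # 4. backpropagate
        optimizer.step()                # 5. update the parameters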


Observations

1:  Note that Jovian has its own simple data logger built into the Jovian environment. It saves copies of your run notebooks to your Jovian account, so you can load them later and continue where you left off. When we get into PyTorch Lightning, we'll be looking at a more sophisticated data logger.

2:  nn.Linear is basically just linear regression (from the last Jovian lesson). It's a fully connected linear layer, the least interesting kind of layer in a neural network (the software equivalent of the Perceptron model from the dawn of neural net history).
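
To see this concretely, here's a tiny sketch showing that nn.Linear is just the affine map y = x·Wᵀ + b (the layer sizes are arbitrary):

    import torch
    import torch.nn as nn

    # nn.Linear computes y = x @ W.T + b -- exactly linear regression
    layer = nn.Linear(in_features=3, out_features=2)
    x = torch.randn(4, 3)              # a batch of 4 inputs, 3 features each
    y = layer(x)                       # shape: (4, 2)

    # the same computation written out by hand
    y_by_hand = x @ layer.weight.T + layer.bias
    print(torch.allclose(y, y_by_hand))   # True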

Note that the second lecture above (the HTC-specific one) gets into more sophisticated layers.


Need to review something from a previous lesson in the course?
No problem.


You just completed lesson 2 in this post.

You can access the next lesson here.

Keep in mind that HTC courses are works in progress until the final lesson is posted, so previously posted lessons may be tweaked as the course progresses.
