HTC Education Series: Deep Learning with PyTorch Basics - Lesson 3

This is the third lesson in the HTC Deep Learning with PyTorch Basics course. It is built around the third lecture in the course 'Deep Learning with PyTorch: Zero to GANs'.

This third lecture is called 'Training Deep Neural Networks on a GPU'. You will learn how to build and train a deep neural network with hidden layers and non-linear activation functions. You will be working in a Jupyter notebook on a cloud-based system that gives you access to GPUs.

What was covered in the lecture

Running Jupyter notebooks on Colab using Jovian

Multi-layer networks with non-linearities

The rectified linear unit (ReLU)

Deep neural nets can approximate arbitrary functional mappings

Defining a model by extending nn.Module

Training and verifying your model

Using a GPU
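The middle three topics fit together in a few lines of PyTorch. The sketch below (with hypothetical layer sizes, not the lecture's exact notebook) shows a model defined by extending nn.Module, a ReLU non-linearity between two linear layers, and the standard pattern for moving the model to a GPU when one is available:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MnistModel(nn.Module):
    """A small feed-forward net with one hidden layer and a ReLU non-linearity."""
    def __init__(self, in_size=784, hidden_size=32, out_size=10):
        super().__init__()
        self.linear1 = nn.Linear(in_size, hidden_size)
        self.linear2 = nn.Linear(hidden_size, out_size)

    def forward(self, xb):
        xb = xb.reshape(xb.size(0), -1)   # flatten images to vectors
        out = F.relu(self.linear1(xb))    # non-linear activation between layers
        return self.linear2(out)          # raw logits, one per class

# Move the model to a GPU when one is available, otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MnistModel().to(device)
```

It is the ReLU between the two linear layers that gives the network its ability to approximate arbitrary functional mappings; without it, stacking linear layers collapses to a single linear transformation.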

Additional HTC Material

1: Again, our goal with this course is to present fairly basic, beginner-level PyTorch code for building neural networks in the first lectures of the course. We then try to move you further along in our HTC-specific additional material.

Last week we watched Alfredo code up a deep learning neural net in PyTorch while William looked on and occasionally commented. This week we begin with some follow-up to Alfredo's coding from last week. Alfredo will cover using dropout layers to avoid over-fitting, interactive debugging of your models, and how to set up training on GPUs.
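As a quick preview of the dropout idea (a hypothetical classifier head, not Alfredo's exact code), dropout layers typically go between linear layers, after the activation, and are only active in training mode:

```python
import torch
import torch.nn as nn

# A hypothetical classifier head showing where dropout layers typically go.
net = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes 50% of activations during training
    nn.Linear(128, 10),
)

net.train()  # dropout is active: each forward pass drops a random subset
net.eval()   # dropout becomes the identity, so inference is deterministic
```

Forgetting to call `eval()` before validation is a classic beginner bug, because dropout would keep randomly zeroing activations during evaluation.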

Then William will take Alfredo's code from last week and re-implement it using PyTorch Lightning. PyTorch Lightning is built on top of PyTorch and hides the generic housekeeping details associated with building and training a neural net. It lets you focus on the parts of your code that are unique to your problem, while hiding the implementation details of the generic parts of the training loop. It also hides from your code all of the details associated with running on GPUs (including clusters) and TPUs.

After watching this PyTorch Lightning video, it should be obvious why you might want to move towards using it in your work rather than hand-coding everything from scratch. It handles a lot of the busy work associated with training that you would otherwise have to deal with manually.

2: You should start becoming more familiar with the PyTorch DataLoader.

Here's a recent blog article from Paperspace that details the PyTorch DataLoader class and the various datasets available in TorchVision.
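The core DataLoader pattern is small enough to show here (with a toy TensorDataset standing in for a TorchVision dataset; the data is hypothetical):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# A toy dataset standing in for a TorchVision dataset.
inputs = torch.randn(100, 784)
targets = torch.randint(0, 10, (100,))
dataset = TensorDataset(inputs, targets)

# The DataLoader handles shuffling and batching for the training loop.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for xb, yb in loader:
    # xb has shape (batch_size, 784); yb has shape (batch_size,).
    # The final batch is smaller when the dataset size isn't divisible
    # by batch_size (here: 32, 32, 32, then 4).
    pass
```

Any object implementing `__len__` and `__getitem__` can serve as the dataset, which is why the same loop works unchanged for MNIST, CIFAR-10, or your own data.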


3: You might be thinking, 'I thought this course was for beginners, so what is all this Lightning stuff with more advanced code?' Well, HTC thinks that the best way for beginners to get ahead with PyTorch programming is to take full advantage of all that PyTorch Lightning has to offer:

A standardized training loop that hides the generic 'do it every time' details.

A standardized methodology for structuring the component blocks of your code.

GPU handling done for you (the complexity is hidden).

Access to advanced data logging.

Access to standardized implementations of many architecture components.

An army of very smart people pushing this open-source project forward, whose work you can build on top of.

A framework designed to support advanced deployment of your architecture model.

Need to review something from a previous lesson in the course?
No problem.

You can access the first lesson here.

You can access the second lesson here.

You just completed lesson 3 here.

Lesson 4 posts next Monday.

Keep in mind that HTC courses are works in progress until the final lesson is posted, so previously posted lessons may be tweaked as the course progresses.

