Infrastructure and Tooling for Deep Learning

Today's presentation continues our sweep through some of the Full Stack Deep Learning lectures from their spring 2021 course.  This one is an overview and more detailed breakdown of the various components of machine learning tooling and infrastructure: software development, compute hardware options, resource management, frameworks, distributed training, experiment management, and hyperparameter tuning.

I had to puzzle a lot of this stuff out on my own with a ton of web searching, so seeing it all explained in one place is a great resource.  The Full Stack course really tries to cover the nuts and bolts of everything you might need to get up to speed on when working on deep learning projects.


Observations

1:  They discuss Visual Studio Code early on.  If you are looking for a good Python coding environment, I think it is a good choice.

2:  Streamlit was new to me, but sounds interesting, especially the part about easy ways to build interactive applets from your Python code.
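Here is a rough idea of what a Streamlit applet looks like (a hypothetical smoothing demo of my own, not from the lecture; assumes streamlit, numpy, and pandas are installed):

    import numpy as np
    import pandas as pd
    import streamlit as st

    st.title("Moving average demo")

    # Streamlit reruns the whole script whenever a widget changes
    window = st.slider("Smoothing window", min_value=1, max_value=50, value=10)

    # Stand-in data just for the demo
    values = pd.Series(np.random.randn(500).cumsum())
    st.line_chart(pd.DataFrame({
        "raw": values,
        "smoothed": values.rolling(window).mean(),
    }))

Save that as app.py, launch it with 'streamlit run app.py', and it shows up in your browser as an interactive page.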

3:  PyTorch Lightning once again is mentioned as a great solution, which is what we thought when we did the second deep learning course here at HTC.
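For anyone who has not tried Lightning, here is a minimal sketch of the pattern (a hypothetical toy regression model; assumes torch and pytorch_lightning are installed):

    import torch
    import pytorch_lightning as pl
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    class LitRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

        def training_step(self, batch, batch_idx):
            # Lightning handles device placement, the backward pass, and the optimizer step
            x, y = batch
            loss = nn.functional.mse_loss(self.net(x), y)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    # Toy data so the sketch runs end to end
    x, y = torch.randn(256, 8), torch.randn(256, 1)
    loader = DataLoader(TensorDataset(x, y), batch_size=32)
    pl.Trainer(max_epochs=2).fit(LitRegressor(), loader)

The appeal is that the training loop boilerplate lives in the Trainer object, so things like moving to multiple GPUs or mixed precision are mostly a matter of changing Trainer arguments.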

One thing to be aware of about the fastai wrapper sitting on top of PyTorch is that fastai 2.0 does not fully support the TorchScript to C++ deployment pipeline.  It is also very unclear what is going on with fastai development moving forward (which was not the case last year when we put together the first HTC deep learning course), while PyTorch Lightning seems to be charging ahead.
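For context, that pipeline means compiling a trained model into a standalone TorchScript file that C++ code can load without a Python interpreter.  A minimal sketch with a plain PyTorch module (hypothetical model, standard torch.jit calls):

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    model.eval()

    # Trace the model with an example input to produce a TorchScript program
    example = torch.randn(1, 8)
    scripted = torch.jit.trace(model, example)

    # The saved file can then be loaded from C++ with torch::jit::load("model.pt"),
    # no Python required at inference time
    scripted.save("model.pt")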
