HTC Updates - Deep Learning #1
Research in deep learning is advancing at an ever-accelerating pace, and keeping up with the latest developments is a constant challenge. HTC Updates on Deep Learning will periodically summarize new research of interest.
Let's start off with the most recent, hot-off-the-presses AI Weekly Update from Henry AI Labs. This 35-minute presentation covers 14 different new papers, lectures, and blog posts.
We will be presenting the Yoshua Bengio lecture mentioned in the presentation in an upcoming HTC Seminar.
The vision transformer research seems particularly interesting. We also really need to get some more posts going here on the Transformer architecture.
We covered AlphaFold recently in an HTC seminar post.
More reinforcement that self-supervised learning is a very hot topic, which makes the classification variation of it mentioned above doubly interesting. We recently posted an in-depth report on the latest self-supervised learning research from FAIR, and we have other HTC posts tagged with self-supervised learning.
PyTorch and PyTorch Lightning
PyTorch Lightning is really gaining momentum, and because of this we are incorporating it into our currently running HTC Education Series - Deep Learning PyTorch Basic course. If you are getting started with PyTorch programming, I also think jumping into Lightning right away is a great way to organize your code and take advantage of all that Lightning brings to the table.
There is also a whole model library being added as an extension to PyTorch Lightning.
Kornia, a differentiable computer vision and image processing library, is also a part of the extended PyTorch family. We have an HTC post on Kornia here. And you can expect us to dive deeper into what you can do with Kornia in the coming year.
PyTorch is also pushing hard to support mobile and AR-based platforms in the coming year: Metal on iOS and ARM Macs, Vulkan on Android and Linux, etc.
Here's a paper at NeurIPS 2020 by Geoffrey Hinton's group (including Simon Kornblith) titled 'Big Self-Supervised Models are Strong Semi-Supervised Learners'. Simon was interviewed in this recent HTC post on self-supervised learning.