
Showing posts from May, 2020

Visual Transfer Learning

So we're going to be talking about affordance-based manipulation, and how robots can learn to perceive it using deep learning neural networks. What is affordance-based manipulation? An awareness of what a robot can or cannot do with an object. And we've been talking this week about transfer learning. So 'hot' a topic that it appears on the Google AI Blog in many different recent scenarios. For example, BLEURT: "In 'BLEURT: Learning Robust Metrics for Text Generation' (presented during ACL 2020), we introduce a novel automatic metric that delivers ratings that are robust and reach an unprecedented level of quality, much closer to human annotation. BLEURT (Bilingual Evaluation Understudy with Representations from Transformers) builds upon recent advances in transfer learning to capture widespread linguistic phenomena, such as paraphrasing. The metric is available on Github." Another example: BiT and BERT, pre-trained models for transfer learning in computer vision and language. Following…
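To make "learning to perceive affordances" concrete, here is a minimal illustrative sketch (my own toy example, not the actual model from any of the linked work): a small fully-convolutional network that scores every pixel of an image for how promising it is as a grasp point, then picks the best one.

```python
import torch
import torch.nn as nn

# Hypothetical fully-convolutional net: maps an RGB image to a per-pixel
# "grasp affordance" heatmap. An illustrative sketch only.
class AffordanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 conv head: one affordance score per pixel
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

net = AffordanceNet()
image = torch.rand(1, 3, 224, 224)       # dummy RGB observation
heatmap = net(image)                     # (1, 1, 224, 224) affordance map
best = heatmap.flatten().argmax()        # most "graspable" location
y, x = divmod(best.item(), heatmap.shape[-1])
print(f"grasp candidate at pixel ({y}, {x})")
```

Trained on grasp success/failure labels, a net like this learns which image regions afford grasping, which is the perceptual half of affordance-based manipulation.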

HTC Seminar Series #7: Deep Learning Cognition

Today's HTC Seminar Series talk gets us back to another one of the core deep learning 'godfathers' (or Turing award winners), Yoshua Bengio, who will be speaking on Deep Learning Cognition. Lots of interesting ideas in this talk, and a different perspective from our normal 'capture perception by neural modeling' viewpoint. The 'attention' ideas discussed are pretty interesting, as is the discussion of recurrent systems.
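For a concrete flavor of the attention mechanism, here is a minimal sketch of scaled dot-product attention in PyTorch (the shapes are illustrative assumptions on my part, not anything from the talk):

```python
import torch
import torch.nn.functional as F

# Scaled dot-product attention: each query attends to all keys and
# returns a weighted sum of the values. Shapes: (batch, seq_len, dim).
def attention(q, k, v):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # query/key similarity
    weights = F.softmax(scores, dim=-1)          # normalized attention weights
    return weights @ v                           # weighted sum of values

q = k = v = torch.rand(1, 5, 8)
out = attention(q, k, v)                         # (1, 5, 8)
```

The appeal for Bengio's argument is that the weights let the network dynamically focus on a few relevant elements, rather than processing everything uniformly.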

Deep Learning for Robotics

So I was trying to find a good transfer learning video tutorial for the weekly seminar. And I was also trying to find a deep learning neural net robotics lecture as well, because there are some people living on Maui who have an interest in robotics and computer vision. But I stumbled upon this lecture instead, which is actually quite interesting, and a good alternative lead-in to the 'visual transfer learning for robotics' post I want to dive into at the end of the week. It ties into the Grand Engineering Challenges for the 21st Century weekly seminar talk, specifically when Jeff Dean, early on, discussed robotics systems that used deep learning neural nets to learn by doing. Which is ultimately a lot nicer than programming them by hand, in some programming language, to do whatever you might want them to do. This talk is by Eric Jang of Google Brain on Deep Learning for Robotics, and Robotics for Deep Learning.

Transfer Learning

Transfer learning. Everyone is doing it. What is it? Imagine a world where you could use your prodigious skills at distinguishing between different micro brew beers to solve complex calculus problems instead. Sounds too good to be true, right? Well... Ultimately, when people talk about transfer learning they are usually referring to taking a pre-trained deep neural net and then using it for some other task. Now, typically there is some additional training needed to pull this off, but perhaps a lot less training than if you started from scratch. So you try to utilize what the trained neural net learned about the world (the world of images and their statistics, at least), and then extrapolate from that with some additional training to generalize to another imaging problem, with the hope that the slice of the real world you are trying to model lives on some lower dimensional manifold. Now you might want to do this using PyTorch, since you are all experts on it after last week's…
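Here is what that recipe typically looks like in PyTorch, as a minimal sketch: take a torchvision ResNet-18 pre-trained on ImageNet, freeze its learned features, and retrain only a new classification head (num_classes is a placeholder for your own task).

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # placeholder: set to however many classes your task has

model = models.resnet18(pretrained=True)  # weights learned on ImageNet

# Freeze the pre-trained layers so only the new head gets trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with one sized for our task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train as usual, optimizing only the unfrozen parameters
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```

Because only the small final layer is trained, this needs far less data and compute than training the whole network from scratch. If your new task is close enough to ImageNet, you can also unfreeze a few of the later layers and fine-tune them with a small learning rate.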

HTC Seminar Series #6: Pushing the State of the Art in AI with PyTorch

This week's HTC Seminar Series double header begins with a talk by Joe Spisak, product manager at Facebook, on Pushing the State of the Art in AI with PyTorch. And of course, because it is Facebook, there is no YouTube link, so you can watch the talk here on Facebook. Since it's not too long, I added a second talk, by Andrej Karpathy, on PyTorch use at Tesla. This one takes a look at how PyTorch is being used at Tesla to engineer self-driving software for Tesla cars. So here are two different perspectives on using PyTorch to engineer and build real-world deep learning solutions to hard problems, from the two major forces providing funding for PyTorch development: Facebook, and Elon Musk via Tesla and OpenAI.

PyTorch - getting to know you

Let's continue our exploration of PyTorch. Last time, we talked about the history that led to PyTorch, and a little bit about what it does. Today, let's take a closer look at how you could actually use it. So my immediate focus was on deep learning models, and the features PyTorch offers for specifying, training, and deploying them. And I kind of missed something very interesting about PyTorch, which is that it is really conceived as a scientific computing package. So it's actually way more general purpose than you might at first expect. PyTorch is based on the concept of using tensors as its core data structure. A tensor is really just a fancy name for an array when you get down to it (a NumPy array if you are a Python head). And you know all about arrays, right? So you are already up to speed to some extent. The tensor construct allows for targeted acceleration of various mathematical functions, both for manipulating matrices and for specifically how you would manipulate…
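A quick taste of that, as a minimal sketch (the shapes and values here are just illustrative):

```python
import torch
import numpy as np

# Tensors behave much like NumPy arrays, but can be moved to a GPU
# and can carry gradient information for training.
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.ones(2, 2)

c = a @ b                # matrix multiply, accelerated where hardware allows
d = a * 2 + 1            # elementwise math, NumPy-style broadcasting

n = a.numpy()                     # view a CPU tensor as a NumPy array
t = torch.from_numpy(np.eye(2))   # and go the other way

if torch.cuda.is_available():     # the same code can run on a GPU
    c = a.cuda() @ b.cuda()
```

If you already think in NumPy, the mental model carries over almost directly; the tensor just adds device placement and autograd on top.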