Transfer Learning
Transfer Learning. Everyone is doing it. What is it?
Imagine a world where you could use your prodigious skills at distinguishing between different microbrew beers to solve complex calculus problems instead. Sounds too good to be true, right? Well...
Ultimately, when people talk about transfer learning they are usually referring to taking a pre-trained deep neural net and then using it for some other task. Typically there is some additional training needed to pull this off, but perhaps a lot less training than if you started from scratch.
So you try to utilize what the trained neural net learned about the world (the world of images and their statistics, at least) and then extrapolate from that with some additional training to generalize to another imaging problem. The hope is that the slice of the real world you are trying to model lives on some lower-dimensional manifold.
Now you might want to do this using PyTorch, since you are all experts on it after last week's posts. There is a nice tutorial on it for computer vision applications available at pytorch.org. There is also a YouTube video of someone typing this tutorial into a Jupyter notebook, which we'll spare you from.
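The gist of that tutorial boils down to a few lines. Here is a minimal sketch, assuming a hypothetical 10-class target task: load a net pre-trained on ImageNet, freeze its learned features, and swap in a new final layer that you then train on your own data.

import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained layers so only the new head gets trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for the new
# task (10 classes here is just an illustrative assumption).
model.fc = nn.Linear(model.fc.in_features, 10)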
Now you may be partial to Keras instead of PyTorch. And there are lots of pre-trained neural net models to go around, some in Keras form as you might expect. And here's a tutorial on Transfer Learning in Keras for Computer Vision Models to get you going.
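The Keras version of the same idea looks much like the PyTorch one. A minimal sketch, assuming TensorFlow 2.x and again a hypothetical 10-class task: reuse the ImageNet features and bolt on a fresh classification head.

import tensorflow as tf

# Pre-trained ResNet-50 backbone without its ImageNet classifier head;
# 'avg' pooling turns the feature maps into a flat feature vector.
base = tf.keras.applications.ResNet50(weights='imagenet',
                                      include_top=False,
                                      pooling='avg')
base.trainable = False  # freeze the pre-trained features

# New head for the (hypothetical) 10-class target task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')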
All of these pre-trained models have specific statistical requirements for the images they work with, usually a standardized mean and variance you need to match your images to before feeding them into the neural network.
So in PyTorch that looks something like the code below. PyTorch also wants the image pixel values to be in a [0, 1] range for the Tensor, which is what ToTensor does. There's additional discussion of Normalize if you are interested.
from torchvision import transforms

# Training-time transforms: random crop and flip for augmentation,
# ToTensor to scale pixels into [0, 1], then Normalize with the
# ImageNet channel means and standard deviations the pre-trained
# models expect.
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])
    ]),
}
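To use it, you push a PIL image through the composed transform and add a batch dimension. A quick sketch; the filename here is just a placeholder:

from PIL import Image

img = Image.open('some_image.jpg').convert('RGB')  # placeholder path
tensor = data_transforms['train'](img)  # shape: [3, 224, 224], normalized
batch = tensor.unsqueeze(0)             # shape: [1, 3, 224, 224] for the model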
We're moving towards the goal of checking out transfer learning for teaching robots manipulation skills. So that's something to look forward to in a later post.