PyTorch - getting to know you

Let's continue our exploration of PyTorch. Last time, we talked about the history that led to PyTorch, and a little bit about what it does. Today, let's take a closer look at how you could actually use it.

So my immediate focus was on deep learning models, and the features PyTorch offers for specifying, training, and deploying them. And I kind of missed something very interesting about PyTorch, which is that it is really conceived as a scientific computing package. So it's actually way more general purpose than you might at first expect.

PyTorch is based on the concept of using tensors as its core data structure. A tensor is really just a fancy name for an array when you get down to it (a NumPy array, if you are a Python head). And you know all about arrays, right? So you are already up to speed to some extent.

The tensor construct allows for targeted acceleration of various mathematical operations, both for general matrix manipulation and for the specific ways you manipulate matrices in neural net algorithms.

So let's get started.

import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(a)

# tensor([[1., 2.],
#         [3., 4.]])


and we are done. You just made a 2-dimensional tensor and printed it out.
What about manipulating a matrix (I mean a tensor)? Sure, why not:

# Create two tensors of the same shape
tensor1 = torch.tensor([[1, 2, 3], [4, 5, 6]])
tensor2 = torch.tensor([[-1, 2, -3], [4, -5, 6]])

# Element-wise addition
print(tensor1 + tensor2)

# tensor([[ 0,  4,  0],
#         [ 8,  0, 12]])


You get the idea. Extrapolate for subtraction, multiplication, division, element access, and so on. Back and forth conversion between tensors and NumPy arrays works too, as sketched below.
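To make that concrete, here is a quick sketch of a few of those operations on the tensors above, including the round trip to NumPy:

# Element-wise subtraction, multiplication, division
print(tensor1 - tensor2)
print(tensor1 * tensor2)
print(tensor1 / tensor2)

# Element access works just like it does for arrays
print(tensor1[0, 2])   # tensor(3)

# Tensor to NumPy array, and back again
np_array = tensor1.numpy()
back_to_tensor = torch.from_numpy(np_array)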



Now one thing that is very interesting about PyTorch is that you can specify whether your tensor lives on the CPU or the GPU.

# Create a tensor for CPU
# This will occupy CPU RAM
tensor_cpu = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], device='cpu')

# Create a tensor for GPU
# This will occupy GPU RAM
tensor_gpu = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], device='cuda')

And of course you can move a GPU tensor to the CPU, or the reverse.

# Move GPU tensor to CPU
tensor_gpu_cpu = tensor_gpu.to(device='cpu')

# Move CPU tensor to GPU
tensor_cpu_gpu = tensor_cpu.to(device='cuda')
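One caveat: the 'cuda' device only works on a machine with a CUDA-capable GPU, so a common pattern is to pick the device at runtime:

# Fall back to the CPU when no GPU is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor_anywhere = torch.tensor([[1.0, 2.0], [3.0, 4.0]], device=device)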


There are quite a few pre-trained neural net models you can grab and load into PyTorch to quickly start doing something useful. So let's run AlexNet, which is a famous neural net that does image recognition. We're using the torchvision module in this next example.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained AlexNet
alexnet = models.alexnet(pretrained=True)

# Preprocess the image the way AlexNet expects (standard ImageNet normalization)
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("myAmazingImage.jpg")
img_t = transform(img)
batch_t = torch.unsqueeze(img_t, 0)  # add a batch dimension

alexnet.eval()
out = alexnet(batch_t)

At this point you have an output vector with 1000 elements, one score per ImageNet class. So you have to take that and massage it to get the class index with the highest confidence. But you get the idea. Pretty straightforward to use.
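Here's a minimal sketch of that massaging step (mapping the winning index to a human-readable label would additionally need the ImageNet class list, which I'm leaving out):

# Convert raw scores to percentages and pick the top class
percentages = torch.nn.functional.softmax(out, dim=1)[0] * 100
_, index = torch.max(out, 1)
print(index[0].item(), percentages[index[0]].item())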


So as I started to wrap my head around the notion that PyTorch was really more general purpose than I originally thought, I cut to the chase and wondered if I could dump all of the Python stuff and just load one of these pre-built neural net models into a C++ program and use it directly that way. And indeed you can do that.

So, you first convert your PyTorch neural net model to TorchScript. TorchScript has an associated compiler, which will serialize your PyTorch model into TorchScript form.


import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)

Once you have a TorchScript model, you can then serialize it to a file:


traced_script_module.save("traced_resnet_model.pt")

At that point you have what you need to load that particular neural net model into your C++ program.
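As a quick, purely optional sanity check, you can load the serialized module back in Python and make sure it still runs:

# Load the serialized TorchScript module and run it on the example input
loaded = torch.jit.load("traced_resnet_model.pt")
print(loaded(example).shape)  # torch.Size([1, 1000])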

So in your C++ program you need to add LibTorch to your C++ project. LibTorch encompasses the PyTorch C++ API.


#include <torch/script.h> // One-stop header.
#include <iostream>

So let's load in that neural net model we serialized:
torch::jit::script::Module module;
try {
  // Deserialize the ScriptModule from a file using torch::jit::load().
  module = torch::jit::load(argv[1]);
}
catch (const c10::Error& e) {
  std::cerr << "error loading the model\n";
  return -1;
}

We are almost there. Let's run our neural net model:

// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module.forward(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

So that's very interesting. We can add LibTorch to our C++ project and get access to all of that PyTorch functionality, at C++ computational speeds no less.

Obviously there are a million and one other things you could do with this library. Very exciting.
