Exploring OAK, OpenVINO Toolkit
OpenVINO is a toolkit specifically designed for deep learning neural net deployment. It targets hardware built with Intel chips, and just by coincidence its development is funded by Intel. So it targets things like Intel CPUs, and the Intel Movidius VPU, found in such wonderful devices as the Neural Compute Stick and the Vision Accelerator Design products. And also OAK, which we are very excited about here at HTC and covered in a previous post.
As you might remember from last week, OAK is a supercool, low cost, crowd funded, open source vision magician. It gives you a decent resolution RGB camera, a separate somewhat lower resolution depth camera, and a built in Movidius VPU chip to run all of your neural net greatness on.
And by the transitive property of some kind of algebra, that means that OpenVINO also targets OAK. The OAK development team is very committed to using OpenVINO to make neural net deployment on OAK as simple as drag and drop (ie: easy to use, a description that would be a misnomer for most neural net software). So it would behoove us to take a look at OpenVINO in more depth, to see what it can do for us, and to understand how it compares to the other neural net frameworks out there.
What Does OpenVINO Target?
As we said above, OpenVINO is designed to make neural net deployment easy for a software developer. Deployment specifically means deploying to Intel chip platforms, so Intel CPUs and Movidius VPUs. Or Intel-designed FPGAs, if you are a glutton for punishment.
It supports heterogeneous execution, which is exciting if you want to speed things up, and brain twisting when you think about the intricacies of programming something like that. Heterogeneous computing refers to systems that contain multiple different kinds of processing units. By running different parts of your wondrous algorithm on different processors at the same time, you can speed things up (once the pipeline is full, not before).
Understanding the pipeline is key here, because it's a little different from the normal parallel processing strategy. You are not running the exact same thing on multiple processors (what we will call normal parallel processing), but running different parts of some overall master algorithm on the different processors (heterogeneous processing).
You are still serially running the master algorithm in this heterogeneous processing scheme. The different processors are running different parts of that master algorithm, but all of those parts have to be run in sequence to complete one cycle of master algorithm output.
This is what we are talking about when we refer to the pipeline needing to be full. The pipeline is the sequence of different processors, each running some smaller part of the overall master algorithm, and the speedup only happens once every stage of the pipeline has work to do.
Lucky you, software programmer: you now have to program all of the timing interactions between all of these different processors to make this whole heterogeneous computing scheme work. And rest assured, each of the different subtasks will take a different amount of time to run, so processors that finish early will need to wait before their results can be passed on to the next processor.
If you try to wrap your brain around this for a while, you will start to see what a pain it would be to program the whole thing by hand. So when a system like OpenVINO tells you it will handle all of the gory details for you, it should bring a smile to your face.
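To make the pipeline idea concrete, here's a minimal Python sketch of a three stage pipeline built from threads and queues. The stage functions are just stand-ins for real work (decode, run a neural net, postprocess), and in a real heterogeneous system each stage would run on a different processor rather than a different thread of the same CPU:

```python
import queue
import threading

def make_stage(fn, q_in, q_out):
    """Run fn on items from q_in, passing results to q_out; None ends the stream."""
    def worker():
        while True:
            item = q_in.get()
            if item is None:          # end-of-stream sentinel
                q_out.put(None)
                return
            q_out.put(fn(item))
    return threading.Thread(target=worker)

# Three stages standing in for, say, decode -> neural net -> postprocess.
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    make_stage(lambda x: x + 1, q0, q1),
    make_stage(lambda x: x * 2, q1, q2),
    make_stage(lambda x: x - 3, q2, q3),
]
for s in stages:
    s.start()

for frame in range(5):    # feed five "frames" into the front of the pipeline
    q0.put(frame)
q0.put(None)

results = []
while (out := q3.get()) is not None:
    results.append(out)
print(results)  # each frame f comes out as (f + 1) * 2 - 3
```

Once the first frame clears the first stage, all three stages are busy at once on different frames, which is exactly the "pipeline is full" speedup described above.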
Building your whole algorithm out of expression graphs is another way to handle this multiple processor timing balance behind the scenes (the compiler does it for you in this case). We'll get into this in more detail when we discuss the OpenCV G-API in a future post. G-API lets you target different hardware scenarios by writing your code in the particular style it asks for; the compiler then does all of the magic to optimize your code and its associated algorithm for the specific hardware you are currently targeting. Write once, deploy to many different places being your motto.
And if it isn't your motto now, it really should be. HTC is committed to pushing cross-platform solutions whenever possible. If you are a software developer, you will ultimately be glad you embraced them.
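To give a flavor of the expression graph idea (and only the idea — this toy is not the real G-API interface), here's a sketch in plain Python. You build a graph of operations first, a "compile" step gets a chance to map that graph onto a target, and only then does anything actually execute:

```python
# Toy deferred-execution graph: build first, "compile" for a target, then run.
# Purely illustrative; the real G-API has its own types and compile arguments.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

def const(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

def compile_graph(root, target="CPU"):
    # A real graph compiler would map each op onto the target processor here
    # and schedule the pipeline; we just return an evaluator that walks the graph.
    def run(node):
        return node.op(*(run(i) for i in node.inputs))
    return lambda: run(root)

graph = add(mul(const(3), const(4)), const(5))   # describes 3 * 4 + 5, runs nothing yet
program = compile_graph(graph, target="CPU")
print(program())
```

The point of the separation is that nothing in `graph` cares what hardware it runs on; retargeting is a change to the compile step, not to your algorithm.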
What Does OpenVINO Provide?
OpenVINO provides a deep learning neural net inference engine. The focus is on neural net inference, not training.
OpenVINO supports the Open Model Zoo. Zoo because it houses a collection of different wild and crazy pre-trained deep learning neural nets, covering things like object detection, object recognition, semantic segmentation, pose estimation, text detection and recognition, and other fun things.
OpenVINO includes a model optimizer, which lets you import neural net models from formats like Caffe, TensorFlow, MXNet, and ONNX, then convert and optimize them to run on the inference engine.
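As a rough sketch of the workflow (the model name and paths here are hypothetical, and this assumes the classic `openvino.inference_engine` Python API of the era, so check the current docs before leaning on it): the model optimizer converts your model into OpenVINO's IR format, an .xml/.bin pair, which you then load into the inference engine and point at a device:

```python
# Assumes OpenVINO is installed, and that "face-detection.xml"/".bin" is an IR
# model previously produced by the Model Optimizer; both names are hypothetical.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
input_name = next(iter(net.input_info))

# "CPU" could instead be "MYRIAD" for a Movidius VPU, or a heterogeneous
# target like "HETERO:MYRIAD,CPU".
exec_net = ie.load_network(network=net, device_name="CPU")

image = np.zeros((1, 3, 300, 300), dtype=np.float32)  # placeholder input blob
result = exec_net.infer(inputs={input_name: image})
```

Note how the device choice is a single string argument — that is where the heterogeneous execution story from earlier plugs in.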
OpenVINO is designed to work with OpenCV, and I believe it includes a version of OpenCV compiled for Intel hardware.
Which brings us to an interesting point. OpenVINO targets Intel CPUs, so it currently targets macOS. But what about that issue of macOS moving from Intel to ARM that we discussed in a post last week? What happens on ARM based Macs?
These are questions you need to think about if you are a developer for that platform. OpenCV is not an issue, since it already compiles for ARM targets if you want to do that. But OpenVINO is a different story than OpenCV.
I should point out that OpenVINO is apparently an optional install for OAK development machines. So maybe they are running the neural nets with OpenCV's DNN module? Or maybe they use OpenVINO internally on the OAK board, but you don't need it on your development machine unless you want to use the model optimizer? They are definitely working with OpenVINO models on OAK, so I'm going to say the latter.
We'll be continuing our coverage of OAK and the software components that work with it in more detail this week. We also pre-ordered some OAK-D modules for HTC members to play with.
I should also point out that we're very interested in running neural net models for audio processing and audio synthesis on OAK (running them on the internal Movidius VPU). That is not what OAK was designed for, but why not try to take advantage of it for as many different fun applications as possible.