Apple's New M1 Chips and Machine Learning

Apple's new M1 ARM-based computers are interesting beasts.  We recently had a post that described what this is in more detail.  I got one of the first M1-based MacBooks you could get your hands on literally a week ago, so we're talking about something very new.  It will be interesting to see how this plays out over the next few years, particularly when we look at training and deploying deep learning neural net systems.

Here's an article on 'How is the Apple M1 going to affect Machine Learning'.  It provides an overview of the overall M1 architecture, which includes a multi-core ARM-based RISC processor, a multi-core GPU, and a 'Neural Engine', all on the same chip substrate (along with shared memory).  It briefly discusses TensorFlow at the bottom, but does not get into PyTorch.


PyTorch has announced prototype features that support hardware-accelerated mobile and ARM64 builds.  This is pretty cool, since they are supporting deployment options for Android, GPU execution on Android via Vulkan, and GPU execution on iOS via Metal.  Given that last option, it seems like they will also eventually support GPU execution on M1-based Macs (which have even beefier versions of the same Metal GPU architecture in their M1 ARM chips).
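As a rough illustration, here's a minimal sketch of what targeting that Metal prototype looks like from Python.  This assumes a PyTorch build compiled with Metal support enabled; the toy model and output file name are just placeholders, not anything from the PyTorch announcement itself.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical toy model standing in for a real network.
model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU())
model.eval()

# TorchScript the model so it can be shipped to a mobile runtime.
scripted = torch.jit.script(model)

# backend='metal' is the prototype option for GPU execution on iOS via Metal
# (it requires a PyTorch build with Metal support; 'vulkan' targets Android).
metal_module = optimize_for_mobile(scripted, backend='metal')
torch.jit.save(metal_module, 'model_metal.pt')
```

The saved module would then be loaded by the mobile side of the app, so the interesting part here is just that the backend choice happens at export time.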

They also say they are supporting ARM64 builds for Linux.  So it seems like ARM-based Macs are going to be included in this effort as well (if not already).  There is essentially no difference between the chips in iOS devices and M1-based Macs at this point (other than the Mac chips being beefier).  And I would expect new M1 variations for iMac and Mac Pro machines to be even beefier when they are released.


Apple seems to hate Nvidia, or has made a business decision not to support them for some reason, so using Nvidia cards on Macs would appear to be a dead end on any level unless Apple radically changes its viewpoint.  After making a big show of eGPU support in the not-too-distant past (when pro customers started to wonder whether Apple thought they were even customers anymore, and Apple responded with a PR blitz insisting it did indeed still 'love' them), they now seem to have bailed on it entirely.

At the same time, Nvidia recently announced that it is buying ARM.  So I would expect Nvidia GPU cores running CUDA to seamlessly integrate into ARM-based systems at some point, perhaps in Nvidia's own ARM chip designs.  Imagine a different kind of system-on-a-chip design based on ARM CPU cores and Nvidia GPU cores.


Here are some thoughts from the peanut gallery on all of this.  Note that the peanut gallery does not seem to be aware of the new PyTorch development efforts I mentioned above.


The 'Neural Engine' is a bit mysterious.  I say that because there is no public API for it at the moment.  And as you start to dive into the details of Core ML (Apple's proprietary machine learning API), it becomes more perplexing.  Your neural net may or may not run on the CPU, GPU, or Neural Engine; Core ML decides in some mysterious, non-publicly-documented way.
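The closest you can get from Python is a hint, not a command.  Here's a minimal sketch using coremltools, assuming a recent coremltools release that exposes the compute_units option; the toy model is a placeholder.  Even with ALL, Core ML still makes the final CPU/GPU/Neural Engine call itself.

```python
import coremltools as ct
import torch

# Hypothetical toy model; the point is the compute-unit hint, not the network.
model = torch.nn.Linear(4, 2).eval()
traced = torch.jit.trace(model, torch.rand(1, 4))

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=(1, 4))],
    # ALL lets Core ML pick among CPU, GPU, and Neural Engine at load time.
    # There is no option that forces the Neural Engine specifically.
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save('toy.mlmodel')
```

Note that the other enum values (CPU_ONLY, CPU_AND_GPU) only let you restrict the choice downward; nothing lets you demand the Neural Engine, which is exactly the mystery described above.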


So what is Apple's machine learning framework anyway?

Here's some documentation on ML Compute.

Here's an Apple machine learning blog article on 'Leveraging ML Compute for Accelerated Training on Mac'.

Here's Apple's Machine Learning Research blog.



Note that there is a recent open PyTorch issue on supporting the Apple 16-core Neural Engine.

Here's a recent TensorFlow blog post on 'Accelerating TensorFlow Performance on the Mac'.

Here's a recent article on Apple's TensorFlow acceleration.  It gets into some of the nitty-gritty discussed above.
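To make the Mac acceleration story concrete, here's a minimal sketch of how Apple's tensorflow_macos fork routes a Keras model through ML Compute.  The mlcompute module ships only with that fork (not stock TensorFlow), and the toy model is a placeholder of my own.

```python
import tensorflow as tf
# mlcompute exists only in Apple's tensorflow_macos fork.
from tensorflow.python.compiler.mlcompute import mlcompute

# The fork's early releases recommended graph mode for ML Compute.
tf.compat.v1.disable_eager_execution()

# 'any', 'cpu', or 'gpu'; ML Compute dispatches the graph accordingly.
mlcompute.set_mlc_device(device_name='gpu')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Interestingly, this mirrors the Core ML situation: you pick a device name, and ML Compute handles the actual dispatch under the hood.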


We live in a time of rapid, ever-accelerating change.  It's going to be a wild and crazy few years ahead of us as all of this shakes out.
