GPU Technology Conference October 2020 Keynote

This year's virtual GPU Technology Conference (GTC) happened this week: workshops, sessions, digital art galleries, a lot of exciting information to dig into. Nvidia CEO Jensen Huang's keynote presentations are always dynamic, and this one does not disappoint. It covers a wide range of exciting new application areas for AI and deep learning, all powered by Nvidia GPU chips of course.

The complete keynote is broken up into 9 easily digestible parts below. Each part focuses on a different application area of deep learning AI on GPUs. Choose the one you are most interested in, or watch them all. The point is to see what is being done with deep learning today at the very bleeding edge of AI R&D.

Part 1: The coming age of AI


Part 2: Exploring our world, Creating New Worlds

Part 3: AI to fight disease, AI for drug discovery

Part 4: Software that writes software, Accelerated AI Inference

Part 5: Data Center Infrastructure on a chip, BlueField and DOCA


Part 6: AI and AI Services for every company

Part 7: Trillions of intelligent things, Nvidia EGX Edge AI Platform powers autonomous machines

Part 8: Everything that moves will be autonomous, breakthroughs in autonomous machine development

Part 9: Computing for the age of AI, Bringing accelerated computing to ARM

Part 9 is particularly interesting in light of Nvidia's announced acquisition of ARM. As discussed in a previous post, Apple is transitioning the Mac architecture from Intel chips to ARM chips. Microsoft has announced a version of Windows for ARM, as well as an ARM-based Surface device.

Modern computer hardware uses multiple CPU cores along with some kind of GPU cores for accelerated graphics. The GPU cores are also often used for more general computation, like running neural net or other tensor-based processing. To be truly fast, you would ideally like the CPU and GPU cores to share, or at least have access to, the same memory; otherwise copying data between them becomes a speed bottleneck.
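
To make that bottleneck concrete, here is a minimal CUDA sketch (my own illustration, not anything shown in the keynote) contrasting the traditional explicit host-to-device copies with CUDA unified memory, where cudaMallocManaged gives the CPU and GPU a single shared allocation. The API calls are standard CUDA runtime functions; the toy kernel and sizes are just placeholders.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: scale a vector in place on the GPU.
__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Traditional discrete-GPU path: separate host and device memory,
    // with explicit copies over the bus (the potential bottleneck).
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // CPU -> GPU copy
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // GPU -> CPU copy
    cudaFree(d);
    free(h);

    // Unified memory path: one allocation visible to both CPU and GPU.
    // On tightly coupled CPU/GPU silicon the explicit copies go away.
    float *u;
    cudaMallocManaged(&u, bytes);
    for (int i = 0; i < n; ++i) u[i] = 1.0f;            // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(u, n, 2.0f);         // GPU touches the same memory
    cudaDeviceSynchronize();
    printf("u[0] = %f\n", u[0]);                         // CPU reads the result back
    cudaFree(u);
    return 0;
}
```

On a discrete GPU, unified memory still migrates pages behind the scenes, but on a system where the CPU and GPU cores share the same physical memory (Nvidia's Jetson boards are an existing example) the copy step can effectively disappear, which is exactly why tight CPU-GPU integration matters for speed.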

Apple's plans seem to involve developing custom Apple silicon substrates that include multiple ARM CPU cores along with some kind of proprietary Apple GPU and Apple neural net accelerator, to be used across their entire product line: iPhone, iPad, iMac, Apple AR glasses, etc.

Nvidia acquiring ARM means that they will likely put a lot of work into making sure that vendors can build silicon substrates that contain multiple ARM CPU cores along with Nvidia CUDA-based GPU cores. Some version of this might end up being a standard implementation for third-party manufacturers to use to make the next generation of Windows clone computers or tablets: Windows clones with an extremely tightly coupled Nvidia GPU architecture.

This tightly coupled standard ARM CPU plus Nvidia GPU architecture might also be a great standard platform for running Linux distributions like Ubuntu. Perhaps it could help move Ubuntu toward something more competitive with Apple and Windows, a true third major computer platform.

Tight CPU-GPU coupling on a single piece of silicon means relatively low-cost systems that will be great for running AI and deep learning algorithms, and an interesting competitor or alternative to Apple's more proprietary approach.

Both approaches use ARM CPU cores. Apple goes out of their way not to support Nvidia GPUs, but with Nvidia acquiring ARM, every other ARM system is potentially going to be able to couple very closely to Nvidia GPU cores.

