LIDAR - Rethinking how cameras work

One cool feature of Apple's new iPad Pro tablets is a new 3D sensor in the camera that utilizes LIDAR. The LIDAR sensor allows the camera to compute depth information for an image in addition to the normal 2D RGB pixel data you get from a digital camera.

So how does LIDAR work? LIDAR stands for Light Detection and Ranging. It uses a pulsed infrared laser to sweep a spot of light across the area the camera is looking at, and measures range (the distance to each point in the captured 2D image) by timing how long each pulse takes to reflect back to the sensor. Think of it this way. Your typical digital camera gives you a 2-dimensional pixel image of a 3-dimensional space. LIDAR provides the third dimension: the distance associated with each pixel. So the digital photo captures and records depth information along with the image. Your camera is now a 3D scanner as well as a 2D imaging system.
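The time-of-flight idea above boils down to one formula: distance is half the round-trip time of the light pulse multiplied by the speed of light. A minimal sketch (the function name and the 20 ns example are ours, for illustration):

```python
# Time-of-flight ranging: the principle behind pulsed-laser LIDAR.
# The sensor emits a light pulse, times how long the reflection takes
# to return, and converts that round-trip time into a distance.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target: half the round trip at the speed of light."""
    return C * round_trip_seconds / 2.0

# A target about 3 m away returns the pulse in roughly 20 nanoseconds:
print(round(range_from_round_trip(20e-9), 3))  # -> 2.998
```

Note how short the times involved are: a 1 mm change in distance shifts the round trip by only about 6.7 picoseconds, which is why the sensor's timing electronics matter so much.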

Of course it's more complicated than that, since the camera's LIDAR resolution is not the same as its pixel resolution. You can think of the LIDAR image as a series of small light dots swept across the scene you are photographing extremely quickly. How fast? Crazy fast. We're talking hundreds of picoseconds.
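Because the LIDAR dot grid is so much coarser than the camera's pixel grid, per-pixel depth has to be filled in from the sparse measurements. Here's a toy sketch of the simplest possible approach, nearest-neighbor upsampling of a coarse depth grid (the grid sizes and values are made up for illustration; a real pipeline would use smarter interpolation):

```python
import numpy as np

def upsample_depth(depth: np.ndarray, factor: int) -> np.ndarray:
    """Spread each coarse depth sample over a factor x factor pixel block."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)

# Toy 4x4 grid of LIDAR depth samples, in meters.
coarse = np.array([[1.0, 1.2, 1.4, 1.6],
                   [1.1, 1.3, 1.5, 1.7],
                   [1.2, 1.4, 1.6, 1.8],
                   [1.3, 1.5, 1.7, 1.9]])

# Upsample to a 16x16 "pixel" grid.
dense = upsample_depth(coarse, 4)
print(dense.shape)  # -> (16, 16)
```

In practice each camera pixel ends up with an estimated depth, but only the sparse dot locations were actually measured, which is why the raw depth data looks so blocky.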

iFixit has an interesting iPad Pro teardown where they dive into what the LIDAR sensor looks like, and what its resolution actually is. And it's a very coarse resolution. The infrared footage of the LIDAR scanner in action at 1:43 in the video shows you exactly how coarse the pulsed laser dot depth scanning actually is.

Anyone familiar with the Microsoft Kinect system, sold as an attachment for the Xbox, will realize that this all sounds very familiar. And digital artists loved the Kinect and what they could potentially do with it.

We're very interested in AR (Augmented Reality) applications here at HTC. Consumer tablets and cameras that let you work directly with depth information in digital camera photos are the new wild west. Very exciting.

We're looking forward to providing easy-to-use interfaces and information for working with this new technology in our HTC member toolkit.
