Understanding and Extending Neural Radiance Fields

Neural Radiance Fields (Mildenhall, Srinivasan, Tancik, et al., ECCV 2020) are a simple and effective technique for synthesizing photorealistic novel views of complex scenes: a (non-convolutional) neural network is optimized to represent a continuous volumetric radiance field, which is then volume-rendered from new viewpoints. This talk reviews NeRF, explains why it works, and then introduces several follow-up works that extend it.
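
To make the core idea concrete, here is a minimal NumPy sketch (not the authors' code) of the two ingredients the paper combines: a positional encoding that lifts 3D coordinates to higher frequencies before they enter the MLP, and the volume-rendering quadrature that composites per-sample densities and colors along a ray. The densities and colors in the usage example below are hypothetical stand-ins for the output of the real trained network.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Lift each coordinate to [sin(2^k pi x), cos(2^k pi x)], k = 0..num_freqs-1,
    so the MLP can represent high-frequency variation in position."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi        # (num_freqs,)
    angles = x[..., None] * freqs                        # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                # (..., dim * 2 * num_freqs)

def volume_render(sigmas, rgbs, deltas):
    """Composite N samples along one ray with the paper's quadrature rule:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, where T_i is the
    transmittance accumulated over all earlier samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)              # per-segment opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])  # T_i
    weights = trans * alphas                             # contribution of each sample
    return (weights[:, None] * rgbs).sum(axis=0)         # expected ray color

# Toy usage: 64 samples along a ray through a fake radiance field.
ts = np.linspace(2.0, 6.0, 64)
pts = np.array([0.0, 0.0, -1.0]) + ts[:, None] * np.array([0.0, 0.0, 1.0])
feats = positional_encoding(pts)                          # would be fed to the MLP
sigmas = np.exp(-np.linalg.norm(pts - 2.0, axis=-1))      # hypothetical densities
rgbs = np.tile([0.8, 0.3, 0.2], (64, 1))                  # hypothetical colors
deltas = np.full(64, ts[1] - ts[0])
print(volume_render(sigmas, rgbs, deltas))
```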

Here's a link to the 'NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis' paper.

Here's a link to the 'Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains' paper.
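
That paper explains why NeRF's positional encoding matters: passing low-dimensional inputs through a random sinusoidal mapping lets an ordinary MLP fit high-frequency functions. Here is a minimal sketch of that Fourier-feature mapping, gamma(v) = [cos(2 pi B v), sin(2 pi B v)] with B drawn from a Gaussian whose scale controls the bandwidth (the sizes and scale below are illustrative, not the paper's tuned values):

```python
import numpy as np

def fourier_features(v, B):
    """gamma(v) = [cos(2 pi B v), sin(2 pi B v)]: a random sinusoidal
    mapping applied to low-dimensional inputs before the MLP."""
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# B's standard deviation sets the bandwidth: larger -> higher-frequency features.
rng = np.random.default_rng(0)
B = rng.normal(0.0, 10.0, size=(128, 2))    # 128 features for 2-D inputs (e.g. pixel coords)
coords = rng.uniform(0.0, 1.0, size=(4, 2))
print(fourier_features(coords, B).shape)    # (4, 256)
```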

Here's a link to the 'NeRF++: Analyzing and Improving Neural Radiance Fields' paper.

Here's a link to the 'PlenOctrees for Real-time Rendering of Neural Radiance Fields' paper, which renders trained NeRFs in real time by baking them into PlenOctrees.
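
PlenOctrees sidestep the MLP at render time: the trained NeRF is baked into a sparse octree whose leaves store a density plus spherical-harmonic coefficients, so view-dependent color becomes a cheap dot product with the SH basis. A rough sketch of that leaf evaluation (degree-1 basis only; real implementations use higher degrees, and SH sign conventions vary between codebases):

```python
import numpy as np

def leaf_color(sh_coeffs, d):
    """Evaluate view-dependent RGB stored in an octree leaf.
    sh_coeffs: (3, 4) degree-1 spherical-harmonic coefficients per channel.
    d: unit-length view direction (x, y, z)."""
    x, y, z = d
    basis = np.array([0.282095,            # Y_0^0 (constant term)
                      0.488603 * y,        # Y_1^-1
                      0.488603 * z,        # Y_1^0
                      0.488603 * x])       # Y_1^1
    return 1.0 / (1.0 + np.exp(-sh_coeffs @ basis))  # sigmoid keeps RGB in [0, 1]

# Hypothetical leaf: only the constant SH term is set.
coeffs = np.zeros((3, 4)); coeffs[:, 0] = [1.0, -1.0, 0.5]
print(leaf_color(coeffs, np.array([0.0, 0.0, 1.0])))
```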


We previously covered Yannic Kilcher's analysis of the NeRF paper here.
