Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

This is a NeurIPS 2020 talk by Matthew Tancik titled 'Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains'.

We've been discussing this approach to improving the representation of high frequency information in neural nets in the recent NeRF related HTC posts. It also seems related to using the Fourier transform as an alternative to attention in Transformer architectures.


The 'Learned Initializations for Optimizing Coordinate-Based Neural Representations' paper is here. The project page is here.

The project page for the 'Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains' paper is here.


Ben Mildenhall's GitHub page is here.


Observations:

1:  Isn't the Fourier feature trick restructuring the problem in a certain kind of predefined wavelet basis, as opposed to the basis you get implicitly when you just use the raw fully connected model? (See the sketch below.)
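
For concreteness, the core trick in the paper maps a low dimensional coordinate v to gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)] before feeding it to the MLP, where B is a random Gaussian matrix whose scale controls the bandwidth of frequencies the network can easily fit. Here's a minimal NumPy sketch of that mapping; the specific choices (256 frequencies, sigma = 10, 2-D pixel-coordinate inputs) are illustrative assumptions on my part, not the paper's exact settings:

    import numpy as np

    def fourier_features(v, B):
        # gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)]
        proj = 2.0 * np.pi * v @ B.T
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

    rng = np.random.default_rng(0)
    sigma = 10.0                               # assumed scale; sets the frequency bandwidth
    B = sigma * rng.standard_normal((256, 2))  # 256 random frequencies for 2-D inputs

    coords = rng.random((1024, 2))             # a batch of (x, y) coordinates in [0, 1]^2
    feats = fourier_features(coords, B)
    print(feats.shape)                         # (1024, 512)

The part relevant to the observation above: the frequencies in B are drawn once and fixed before training, so the network is fitting coefficients over a predefined sinusoidal basis rather than having to discover one from raw coordinates.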
