HTC Seminar Series #34: How to represent part-whole hierarchies in a neural net

Geoffrey Hinton gave a really great talk at GTC this week, one among several from neural net luminaries. The free virtual conference is well worth checking out from the comfort of your pandemic refuge.

You need to register for GTC 2021 to access and view the GTC talk on their web site.

I did find another talk he presented in January 2021 on the same topic (the same slides as the GTC talk, plus a few more), so you can check it out below as this week's HTC seminar talk.


Some quick observations on the talk:

1:  Obviously influenced by cortical columns in the brain.

2:  Obviously influenced by the forward-backward flow of information in the brain's visual system.

3:  The part hierarchy is really an alternative way to build a scale-space prior into the architecture (in my opinion).

4:  The best explanation I have heard yet of what a transformer is really doing.

5:  Implicit function decoder.

6:  Is a part hierarchy just another sub-manifold? It's a constraint on the possible space of representations.

7:  Clustering embeddings rather than Gaussians? Interesting.

The GLOM paper is here.
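To make the part-whole idea concrete, here is a minimal numpy sketch of one GLOM-style update step, based on my reading of the paper (this is not Hinton's code, and the random matrices stand in for the learned bottom-up and top-down nets). Each image location keeps an embedding at every level of the hierarchy; a level's next state averages its previous state, a bottom-up signal from the level below, a top-down signal from the level above, and an attention-weighted average of the same level's embeddings at other locations, which is what pulls embeddings into "islands of agreement."

```python
# Hypothetical GLOM update sketch: a grid of N locations, L hierarchy
# levels, D-dimensional embeddings. All weights are fixed random
# stand-ins for the learned networks in the actual model.
import numpy as np

rng = np.random.default_rng(0)
N, L, D = 16, 5, 32  # locations, hierarchy levels, embedding dim

# Stand-ins for the learned bottom-up / top-down nets per level.
W_up = rng.standard_normal((L, D, D)) / np.sqrt(D)
W_down = rng.standard_normal((L, D, D)) / np.sqrt(D)

def attention_avg(x):
    """Similarity-weighted average of one level's embeddings across
    locations (softmax over dot-product similarities)."""
    scores = x @ x.T / np.sqrt(D)                      # (N, N)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

def glom_step(E):
    """E: (N, L, D) embeddings; returns the next state."""
    E_next = np.empty_like(E)
    for l in range(L):
        parts = [E[:, l], attention_avg(E[:, l])]      # self + neighbors
        if l > 0:                                      # bottom-up input
            parts.append(E[:, l - 1] @ W_up[l])
        if l < L - 1:                                  # top-down input
            parts.append(E[:, l + 1] @ W_down[l])
        E_next[:, l] = np.mean(parts, axis=0)
    return E_next

E = rng.standard_normal((N, L, D))
for _ in range(10):
    E = glom_step(E)
print(E.shape)
```

Iterating this settling step is what (in the paper's framing) lets nearby locations that belong to the same part converge to near-identical embeddings at the appropriate level, without any explicit clustering step.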

