The Thousand Brains Theory

Jeff Hawkins of Numenta has a new book out that tries to explain the results of Numenta's research into modeling how the cortex of the brain functions, and how intelligent behavior builds on that underlying structure.

The Thousand Brains Theory of Intelligence puts forward a novel account of how the neocortex works.  It proposes that each cortical micro-column builds its own model of an object, and that the columns then collectively vote their way to a consensus about what they are actually sensing.  Other people have discussed micro-columns as a replicated neural computation element before, so other than de-emphasizing hierarchy, I'm not sure there is really anything new in that part.

The real meat of the theory is the part that delves into grid cells in the brain, specifically grid cells in the neocortex, and the notion that all computation in the cortex is ultimately composed of a replicated grid-cell structure that is used for every modality of sensory input, and for the mechanics of thought itself.

Grid cells are used by the brain to form a map of the environment.  Specifically, they represent the location of the organism in the environment, and how to navigate from that position to another one.  The theory also argues that reference frames like these are a fundamental element of thinking and intelligence.
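To make the location-coding idea concrete, here is a toy sketch of my own (not Numenta's code, and a big simplification of real grid cells): several "modules" each represent a 1D position only modulo their own firing-field period, yet modules with coprime periods jointly pin down the position over a much larger range.

```python
# Toy grid-cell-style position code: each "module" knows the position
# only modulo its own period, like a grid cell's repeating firing field.
# Modules with coprime periods together disambiguate a large range
# (Chinese-remainder style), which is one proposed virtue of grid codes.

PERIODS = [3, 5, 7]  # firing-field spacings of three hypothetical modules

def encode(position):
    """Phase of each module: position modulo that module's period."""
    return [position % p for p in PERIODS]

def decode(phases):
    """Brute-force search for the position consistent with all phases."""
    limit = 1
    for p in PERIODS:
        limit *= p  # unambiguous range is the product of the periods
    for candidate in range(limit):
        if encode(candidate) == phases:
            return candidate
    return None

# A position far larger than any single period is still recovered:
print(decode(encode(52)))  # -> 52 (unambiguous up to 3*5*7 = 105)
```

The point of the sketch is only that a small set of periodic, replicated codes can represent a large space compactly, which is the flavor of the grid-cell argument in the book.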

I'm not really doing the theory justice in my brief overview above, so check out this more detailed blog post.  And I highly recommend reading the book.  It is written for a lay audience, so there is no heavy technical jargon or math.

The Google talk below, by Hawkins and Subutai Ahmad of Numenta, does a good job of describing the theory.


1.  Replication of a single structure is a common thread in evolution: in DNA replication and its copying errors, and in the genetics of development in multi-celled organisms.

2.  Relationship to Capsule Networks.

3.  Specific things discussed that could be applied to re-engineering deep learning systems:

    - Sparse connections
        - benefits of sparsity:
            - minimal interference between representations
            - lower power consumption
    - Neuron modeling
        - continuous learning
        - localized optimization (not global backprop)
        - dendrites build the consensus between cortical columns via local, massively distributed processing
    - Grid cells
        - reference frames
            - object oriented
