Computational Challenges (and limitations) for the Cortex

This lecture by Leslie Valiant, 'What are the Computational Challenges for the Cortex?', is a follow-up to yesterday's talk by Edvard Moser.  It focuses on the theoretical limitations of cortex models given known physical and biological constraints on neural assemblies.

Over a lifetime the brain performs hundreds of thousands of individual cognitive acts of a variety of kinds, including the formation of new associations and other kinds of learning. Each such act depends on past experience and, in turn, can have long-lasting effects on future behavior. It is difficult to reconcile such large-scale capabilities quantitatively with the known resource constraints on cortex, such as low connectivity. Here we shall describe an approach to this problem, in terms of concrete functions, representations, and algorithms, that seeks to explain these phenomena in terms that are faithful to the basic quantitative resources available. Until recently an algorithmic understanding of cognition has been regarded as an overambitious goal for experimental neuroscience. As we shall explain, with current experimental techniques this view is no longer justified, and we should expect algorithmic theories to be experimentally testable, and tested.
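
To get a feel for what "low connectivity" means quantitatively, here is a rough back-of-the-envelope sketch in Python. Neither the numbers nor the specific calculation comes from the talk; I'm just plugging in the usual ballpark figures (on the order of 10^10 cortical neurons with a few thousand synapses each) to show how sparse the wiring is, and why an association between two small neuron populations would have to be carried by shared intermediate neurons.

```python
import math

# Ballpark figures (order-of-magnitude only, not taken from the talk):
neurons = 1e10              # roughly 10^10 neurons in cortex
synapses_per_neuron = 1e4   # roughly 10^4 synapses per neuron

# If wiring were uniformly random, the chance that any particular ordered
# pair of neurons is directly connected is tiny:
p_connect = synapses_per_neuron / neurons
print(f"connection probability ~ {p_connect:.0e}")   # ~1e-06

# So two arbitrary neurons are essentially never directly connected, and an
# association between two small populations ("assemblies") has to be carried
# by intermediate neurons that happen to receive input from both.
r = 50_000   # assumed assembly size, purely for illustration
p_hit = 1 - math.exp(-r * p_connect)      # P(a neuron gets >= 1 input from one assembly)
expected_relays = neurons * p_hit ** 2    # expected neurons fed by both assemblies
print(f"expected shared relay neurons ~ {expected_relays:,.0f}")
```

The first number is the real point: with a connection probability around 10^-6, direct wiring between arbitrary pairs of neurons is essentially never there, which is exactly the kind of resource constraint the talk keeps coming back to.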

This lecture is part of the Heidelberg Laureate Forum, so all of the people involved are Nobel Prize winners, Turing Award winners, or the equivalent, and everything is really top notch.  We'll be following up later this week with another talk by Yoshua Bengio from the same forum.



Observations

1:  I have to be honest, at the end of this talk I was kind of disappointed, because he sets up this whole scenario where he shows the math leads to certain conclusions or constraints, and then you are basically left hanging with no conclusions at all.  I wanted another 10 minutes where he actually laid out a theoretical model we could explore and test.

However, something he pointed out in the talk got me thinking about a really interesting new approach to building neural net models.  It ties into some work I did on fast multi-dimensional scaling algorithms in the late 90s.  If you run through the section of the lecture that talks about sparse random connections you might start to grok it (there's a small toy sketch of that sparse-random-connection ingredient at the end of this post).

If my very preliminary thinking on this pans out at all, I'll do an HTC post where I describe the algorithm in more detail.

2:  Note that sparse connections are also a key component of what the Thousand Brains Theory is proposing.
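
As promised above, here is the toy sketch of the sparse-random-connection ingredient. To be clear, this is only my own numpy illustration of a generic property of sparse random wiring (the sizes, densities, and variable names are all mine, not anything from the lecture, and not my half-formed algorithm either): a sparse random connection matrix already roughly preserves the pairwise distances of whatever it projects, which is the kind of property a fast MDS-style embedding could lean on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "input layer": 20 points living in a 1000-dimensional space.
n_points, dim_in, dim_out = 20, 1000, 200
X = rng.normal(size=(n_points, dim_in))

# Sparse random connection matrix: each output unit receives input from only
# about 1% of the input units, with random +/-1 weights.
density = 0.01
mask = rng.random((dim_in, dim_out)) < density
weights = mask * rng.choice([-1.0, 1.0], size=(dim_in, dim_out))

# Project through the sparse random connections and rescale so that
# distances are preserved in expectation.
Y = X @ weights / np.sqrt(density * dim_out)

def pairwise_distances(A):
    diffs = A[:, None, :] - A[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

d_in = pairwise_distances(X)
d_out = pairwise_distances(Y)
ratio = d_out[d_in > 0] / d_in[d_in > 0]
print(f"distance ratio after projection: mean={ratio.mean():.2f}, std={ratio.std():.2f}")
```

Running this, the mean ratio should come out close to 1 with a small spread, i.e. the sparse random wiring hasn't scrambled the geometry much even though each output unit only sees a tiny fraction of the inputs.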
