### The Synapto-Dendrodendritic Web

Yesterday's post made a reference to the synapto-dendrodendritic web, which made me shake my head and go 'what's that now?'. To quote from yesterday's reading:

'Is there a cortical anatomic substrate where these wavelets are formed and interact? The answer is in the affirmative. The study of these receptive electric fields is known as holonomy, and the plane waves carry out their action in the phase space, the Hilbert space. We know that the distal ends of axons split into teledendrons and form a web of interconnected fibers. These dendrites communicate through electrical and chemical synapses. Electric recordings reveal oscillations of depolarizations and hyperpolarizations of electric potential differences without electric currents. These oscillations in different cortical slices intersect with one another, producing waves of interference. Different neuronal ensembles overlap in this assembly of interfering complex plane waves, in a holographic manner.'

That whole paragraph is fascinating for a number of different reasons.

1: How is this similar to or different from conventional visual models, and what does it imply about those conventional models?

2: Deep learning neural nets like Convolutional Neural Networks (CNNs) are pretty directly extrapolated from the scale-space family of conventional neural vision models. Watch Yann LeCun's 2020 NYU lecture on them if you aren't following that point. They were directly inspired by human visual modeling studies from around 1990.
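To make that connection concrete, here's a minimal sketch (my own illustrative parameters, not anyone's actual architecture) of a 2D Gabor kernel used as a fixed convolutional filter, the kind of oriented, band-pass filter the early layers of a CNN tend to end up learning:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real part of a 2D Gabor filter: a plane wave under a Gaussian window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)   # rotate to orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / wavelength)
    return envelope * carrier

# A vertical-edge test image: left half dark, right half bright.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# Convolving with a vertically oriented Gabor responds strongly at the edge
# and stays near zero in the flat regions.
response = convolve2d(img, gabor_kernel(theta=0.0), mode='same')
print(response.shape)  # (32, 32)
```

A CNN stacks many such filters (learned rather than hand-built) and composes them across layers, but the first-layer operation is exactly this kind of oriented filtering.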

The review article from yesterday's post points us at 2 different references on this topic.

The first is by Karl Pribram *(there's a familiar old name)* and Shelli Meade, 'Conscious Awareness: Processing in the Synaptodendritic Web', which you can find here. And a no-paywall downloadable pdf here.

Observations:

1: Ok, it references the result that ICA (independent component analysis) produces Gabor-like filters (the Bell and Sejnowski paper); that's familiar old ground.

2: Here's another interesting short Pribram reference related to this.
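The ICA point above can be sketched with scikit-learn's FastICA. One caveat: in the Bell and Sejnowski setup the inputs are thousands of patches cut from natural images, and it is the statistics of natural scenes that make the learned components come out Gabor-like. Random data stands in here just to keep the sketch runnable:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-in data: Bell & Sejnowski used patches cut from natural images;
# random data keeps this runnable but will NOT yield Gabor-like components.
patches = rng.standard_normal((2000, 64))   # 2000 patches of 8x8 pixels
patches -= patches.mean(axis=0)             # center the data

ica = FastICA(n_components=16, random_state=0, max_iter=500)
ica.fit(patches)

# Each row of components_ is one learned basis function; reshape to 8x8 to
# view it as a filter. On natural image patches these look like oriented,
# localized Gabor patches.
filters = ica.components_.reshape(16, 8, 8)
print(filters.shape)  # (16, 8, 8)
```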

The second is by the same author as the review article from yesterday's post, titled 'An Analytic Dissection of a Case of Cerebral Diplopia: Is the Human Brain a Holographic Device?', which you can find here.

Ok.

We are led through a very similar discussion about how visual cells extract visual information:

'How do visual cells extract visual information? The Gabor transform is a short-time Fourier transform that is used to determine the frequency and phase content of a short segment of a signal. The Fourier function to be transformed is multiplied by a Gaussian function, a window function, and the result is then transformed with a Fourier transform to derive the time-frequency analysis. The Gabor wavelet transform can extract both time (spatial) and frequency information from a signal, and the tunable kernel size allows it to perform a multi-resolution time-frequency analysis. A smaller kernel size, in the time domain, has a higher resolution in the time domain but a lower resolution in the frequency domain and is used for higher frequency analysis while bigger kernel size has a higher resolution in the frequency domain but a lower resolution in the time domain and is used for lower frequency analysis. This property of the Gabor transform is known as the Gabor uncertainty principle. The wavelet transform is suitable for image compression, edge detection, and object recognition. The cells of the visual cortex of mammalian brains are best modeled as a family of self-similar two-dimensional Gabor wavelets'.
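The kernel-size trade-off that quote describes, the Gabor uncertainty principle, can be checked numerically. This is a minimal sketch with my own illustrative choices of sample rate and window widths: shrink the Gaussian window in time and the atom's frequency spread grows in proportion.

```python
import numpy as np

def gabor_atom(t, t0, f0, sigma):
    """One Gabor atom: a complex exponential at frequency f0 under a
    Gaussian window of width sigma centered at t0."""
    return np.exp(-(t - t0)**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

t = np.linspace(0, 1, 1024, endpoint=False)

def spectral_width(sigma):
    """Frequency-domain spread (std dev) of a Gabor atom of time width sigma."""
    spectrum = np.abs(np.fft.fft(gabor_atom(t, 0.5, 64.0, sigma)))**2
    freqs = np.fft.fftfreq(1024, d=t[1] - t[0])
    mean_f = np.average(freqs, weights=spectrum)
    return np.sqrt(np.average((freqs - mean_f)**2, weights=spectrum))

# Gabor uncertainty: a narrower time window has a wider frequency spread.
print(spectral_width(0.01) > spectral_width(0.05))  # True
```

The theoretical spreads are 1/(2*pi*sigma), so halving the window width doubles the frequency uncertainty, which is exactly why the quote pairs small kernels with high-frequency analysis and big kernels with low-frequency analysis.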

The discussion continues (I'm going in reverse order here):

The paper also brings up another oldie but goodie, the work of De Valois in 1971, whose recordings showed that striate cortex cells act as spatial frequency filters. So we're back to spatial frequency channel models.
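The 'spatial frequency filter' idea is easy to see in one dimension (parameters here are illustrative, not from the paper): the Fourier transform of a Gabor filter is a band-pass bump centered on its tuned frequency, so a bank of such filters at different tunings partitions the image into spatial frequency channels.

```python
import numpy as np

# A 1D Gabor filter tuned to f0 cycles/unit: its Fourier transform is a
# Gaussian bump centered on f0, i.e. a band-pass spatial frequency channel.
n, f0, sigma = 256, 16.0, 0.05
x = np.linspace(0, 1, n, endpoint=False)
gabor = np.exp(-(x - 0.5)**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * x)

spectrum = np.abs(np.fft.rfft(gabor))
peak_freq = np.argmax(spectrum)   # rfft bin k is k cycles/unit on this grid
print(peak_freq)  # 16
```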

Observations:

1: Ok, vision is based on spatial frequency channel models, the channels are derived from sets of Gabor-like wavelets, and you can tie those back to how neurons work in the visual cortex.

2: I keep coming back to the following sentence:

'Information processing takes place in this interconnected web, in what is known as the phase space.'

along with the following:

'the cerebral cortex like the retina does not register an electric current with intracellular recordings. Instead, they register hyperpolarizations and depolarizations at the dendritic end-plates. '

This seems to tie into a point he was making in the review article from yesterday's post, which again pointed at processing in phase space, on the complex plane, rather than some real-valued, non-complex notion of activation.

3: So how is this the same as or different from what is going on in deep learning CNN architectures?

4: This notion of lateral information flow keeps coming up. Like diffusion models. Or models of gestalt properties observed in human vision. Fascinating.
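The interfering-plane-wave picture from the quoted paragraph is also easy to sketch (frequencies and grid size are my own illustrative choices): superpose two complex plane waves and the intensity carries fringes that encode their relative phase, which is the basic ingredient of a hologram.

```python
import numpy as np

# Two complex plane waves crossing at an angle; their superposition carries
# an interference pattern whose fringes encode the phase relation -- the
# basic ingredient of the holographic picture quoted above.
y, x = np.mgrid[0:64, 0:64] / 64.0

wave_a = np.exp(2j * np.pi * (8 * x))           # traveling along x
wave_b = np.exp(2j * np.pi * (8 * x + 4 * y))   # same x-frequency, tilted in y

# |A + B|^2 = 2 + 2*cos(phase difference) = 2 + 2*cos(2*pi*4*y):
# fringes running along x, 4 cycles in y, swinging between 0 and 4.
intensity = np.abs(wave_a + wave_b)**2
print(intensity.shape)  # (64, 64)
```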
