Graph Convolutional Neural Network Practicum

Continuing our graph convolutional network (GCN) theme from yesterday's post, Alfredo Canziani of the 2020 NYU Deep Learning course reinforces the idea presented yesterday: GCNs are very closely related to self-attention and transformer networks. That connection seemed surprising when first presented, but perhaps less so once you realize it's all about computing over sparse connectivity.
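To make that connection concrete, here is a minimal PyTorch sketch (my own, not from the course) of self-attention whose scores are masked by a graph adjacency matrix: with a dense, all-ones mask this is ordinary self-attention, while a sparse mask recovers the neighbourhood aggregation of a GCN. The function name and shapes are illustrative assumptions.

```python
import torch

def masked_self_attention(h, adj):
    """Self-attention restricted to graph edges (illustrative sketch).

    h:   (N, d) node features
    adj: (N, N) 0/1 adjacency mask; assumed to include self-loops so
         every row has at least one allowed edge (avoids NaNs in softmax).
    """
    scores = h @ h.T / h.shape[-1] ** 0.5                 # pairwise attention scores
    scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only graph edges
    return torch.softmax(scores, dim=-1) @ h              # aggregate neighbour features
```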

After covering the general notation, representation, and equations of GCNs, we delve into the theory and code of a specific variant known as the Residual Gated GCN. We then work through related PyTorch code in a Jupyter notebook.
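For a concrete feel for that layer, below is a hedged, dense-adjacency sketch of a Residual Gated GCN layer in PyTorch. This is not the course notebook's code; the class name, the (N, N, dim) dense edge-feature tensor, and the variable names are illustrative assumptions based on the Bresson and Laurent (2017) formulation.

```python
import torch
import torch.nn as nn

class ResidualGatedGCNLayer(nn.Module):
    # One Residual Gated GCN layer (dense-adjacency sketch, not the course code):
    #   h_i' = h_i + ReLU(BN(U h_i + sum_j eta_ij * V h_j))
    #   e_ij' = e_ij + ReLU(BN(A e_ij + B h_i + C h_j))
    # where eta_ij are sigmoid edge gates normalised over each neighbourhood.

    def __init__(self, dim):
        super().__init__()
        self.U = nn.Linear(dim, dim)  # transform of the node itself
        self.V = nn.Linear(dim, dim)  # transform of the neighbours
        self.A = nn.Linear(dim, dim)  # edge-feature update terms
        self.B = nn.Linear(dim, dim)
        self.C = nn.Linear(dim, dim)
        self.bn_h = nn.BatchNorm1d(dim)
        self.bn_e = nn.BatchNorm1d(dim)

    def forward(self, h, e, adj):
        # h: (N, dim) node features, e: (N, N, dim) edge features, adj: (N, N) 0/1 mask
        n, d = h.shape
        # Edge update with residual connection.
        e_hat = self.A(e) + self.B(h).unsqueeze(1) + self.C(h).unsqueeze(0)
        e_new = e + torch.relu(self.bn_e(e_hat.reshape(-1, d)).reshape(n, n, d))
        # Sigmoid edge gates, zeroed on non-edges and normalised per neighbourhood.
        gates = torch.sigmoid(e_new) * adj.unsqueeze(-1)
        gates = gates / (gates.sum(dim=1, keepdim=True) + 1e-6)
        # Gated aggregation of neighbour features, then node residual connection.
        agg = (gates * self.V(h).unsqueeze(0)).sum(dim=1)
        h_new = h + torch.relu(self.bn_h(self.U(h) + agg))
        return h_new, e_new
```

The dense (N, N, dim) edge tensor keeps the shapes explicit for a small toy graph; a practical implementation would use sparse message passing over an edge list, which is what graph libraries provide for large graphs.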


