HTC Updates - Deep Learning #2

There are some great recent update summaries from Henry AI Labs, so let's dive into them.  The first one, on Data-efficient Image Transformers, is particularly interesting.


Here's a link to a blog post on the Facebook AI work on Data-efficient Image Transformers.
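If you want to poke at the model itself, the DeiT authors expose pretrained weights through torch.hub. Here's a minimal sketch, assuming the facebookresearch/deit hub entry point (and the timm package it depends on) is available in your environment:

```python
# Load a pretrained Data-efficient Image Transformer and classify one image.
import torch

# Pulls the model definition and ImageNet-pretrained weights from the
# facebookresearch/deit repo (requires timm to be installed).
model = torch.hub.load('facebookresearch/deit:main',
                       'deit_base_patch16_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed 224x224 image
with torch.no_grad():
    logits = model(x)             # (1, 1000) ImageNet class scores
print(logits.argmax(dim=-1))
```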

I guess one question in my mind is whether they are really better than typical convolutional net architectures once you build the convolutional nets with 'attention' features.  And 'attention' is really another way to say 'learned, input-dependent sparse network': each output ends up depending strongly on only a handful of inputs.  The sketch below shows what a basic attention layer actually computes.
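Here's a minimal sketch of single-head scaled dot-product self-attention, the building block these image transformers stack. This is my own illustrative code (PyTorch, with made-up shapes), not anything taken from the Facebook AI post:

```python
# Minimal single-head scaled dot-product self-attention.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token embeddings (e.g., image patch embeddings)
    q = x @ w_q                               # queries
    k = x @ w_k                               # keys
    v = x @ w_v                               # values
    scores = q @ k.T / (k.shape[-1] ** 0.5)   # pairwise similarity scores
    weights = F.softmax(scores, dim=-1)       # each row sums to 1; most of the
                                              # mass lands on a few tokens,
                                              # which is the 'sparse network'
                                              # intuition above
    return weights @ v                        # weighted mix of value vectors

d = 64
x = torch.randn(16, d)                        # 16 patch embeddings
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)        # (16, 64)
```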

In any case, this area of research (transformer architectures for deep learning image tasks) is fascinating.  And there is a lot of recent activity in this area, so expect more surprises in the coming year.

We'll be covering transformer architectures in much more detail here soon.


Now let's jump to a previous Henry AI Labs update.


So the one that really jumps out for me is the first one, on Abstraction and Reasoning in Modern AI Systems by Francois Chollet.  We'll be presenting his entire NeurIPS talk in our next HTC Seminar post tomorrow.

If you read HTC posts, you have often heard me going on about 'manifold learning'.  Both the summary above and the full presentation do a great job of explaining what 'manifold learning' is all about.
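For the classic algorithmic sense of the term (recovering the low-dimensional surface that high-dimensional data lives on), scikit-learn has ready-made tools. A small illustrative sketch, unrolling a swiss roll with Isomap; Chollet uses 'manifold' in the broader sense of deep nets interpolating on a learned manifold, but the geometric intuition is the same:

```python
# Classic manifold learning: recover the 2D parameterization of a
# swiss roll embedded in 3D.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)  # (2000, 3)
embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(embedding.shape)  # (2000, 2) -- the 'unrolled' coordinates
```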


Now let's jump back one more, to the Henry AI Labs update before that one.


There are a lot of interesting things to check out from this summary.  One that stands out for me is the model-based approach to identifying the brain's learning algorithms here.
