Self-Supervised Learning is Taking Off
Self-supervised learning algorithms are at the cutting edge of deep learning research. Self-supervised means a neural net that learns from the structure of the data itself, not from supervised labels or other annotations tagged by humans.
And in some sense it's really all about data augmentation. We move from thinking about data augmentation as just a way to endlessly expand our training data, to thinking about it as a way to introduce perceptual clustering into the data, so that the training process can shape the model's energy surface to correspond to the natural perceptual classes in the data, without hand-labeling. The only human intervention left is designing the right perceptual augmentations.
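To make that concrete, here is what such a perceptual augmentation pipeline typically looks like in PyTorch. This is a minimal sketch in the SimCLR/SwAV style, applied to a PIL image `img`; the specific transforms and hyperparameter values are illustrative, not the exact settings from either paper:

```python
from torchvision import transforms

# A SimCLR/SwAV-style augmentation pipeline (values are illustrative).
# Each image is transformed twice to produce two "views" that the model
# should map to nearby points in its embedding space.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

# Two independently augmented views of the same underlying image.
view1, view2 = augment(img), augment(img)
```

The augmentations are the "human intervention": they encode our prior that crops, flips, and color distortions of an image all belong to the same perceptual class.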
SwAV is the latest paper from FAIR to achieve state-of-the-art results in self-supervised learning, with exciting potential as an alternative approach to transfer learning (an alternative to initializing from a label-pretrained ResNet model, or equivalent).
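For readers who want the mechanics, below is a minimal sketch of SwAV's core swapped-prediction loss; this is my own condensed rendering, omitting the paper's multi-crop augmentation and feature queue. Two augmented views are scored against learnable prototype vectors, the Sinkhorn-Knopp algorithm converts those scores into soft cluster assignments ("codes"), and each view is trained to predict the other view's code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    # Sinkhorn-Knopp: turn prototype scores (B, K) into soft cluster
    # assignments ("codes") with roughly equal cluster sizes.
    Q = torch.exp(scores / eps).T          # (K prototypes, B samples)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K   # normalize rows
        Q /= Q.sum(dim=0, keepdim=True); Q /= B   # normalize columns
    return (Q * B).T                       # (B, K), rows sum to 1

def swav_loss(z1, z2, prototypes, temperature=0.1):
    # z1, z2: L2-normalized embeddings of two views, shape (B, D).
    # prototypes: learnable, L2-normalized cluster centers, shape (K, D).
    scores1 = z1 @ prototypes.T            # similarity to each prototype
    scores2 = z2 @ prototypes.T
    q1 = sinkhorn(scores1)                 # codes for view 1 (no grad)
    q2 = sinkhorn(scores2)                 # codes for view 2 (no grad)
    p1 = F.log_softmax(scores1 / temperature, dim=1)
    p2 = F.log_softmax(scores2 / temperature, dim=1)
    # "Swapped" prediction: view 1 predicts view 2's code and vice versa.
    return -0.5 * ((q2 * p1).sum(dim=1) + (q1 * p2).sum(dim=1)).mean()
```

In the paper the prototypes are trained jointly with the encoder and kept L2-normalized; because the codes come from comparing views across the whole batch, SwAV avoids the large numbers of explicit pairwise negative comparisons that contrastive methods like SimCLR rely on.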
"SwaV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" here.
"Unsupervised Visual Representation Learning with SwAV" here.
"What's in a Loss Function for Image Classification?" here.
"A Simple Framework for Contrastive Learning of Visual Representations" here.
"Big Self-Supervised Models are Strong Semi-Supervised Learners" here.