HTC Seminar #23 - DeepMind's New AlphaFold 2 Breakthrough in Protein Folding
Today's HTC seminar covers a very recent development from DeepMind on the protein folding problem. They announced it just last week, so this is hot news on how deep learning continues to dominate new and very difficult problem domains.
This lecture is given by Yannic Kilcher. As he points out, DeepMind has not yet released a paper on AlphaFold 2, just the material accompanying the recent press release. He tries to read between the lines, combining an analysis of the AlphaFold 1 paper with what the AlphaFold 2 press materials say, to work out what is going on under the hood.
You know it's big news when Nature talks about it.
A fascinating quote from this is the following: "instead of predicting relationships between amino acids, the network predicts the final structure of a target protein sequence". They then claim that this kind of prediction is the more complex task.
But is that really true? Or does the improved performance come from approaching it like a language model, letting the neural network learn the transformation from sequence features to the 3D structure manifold? That way, all the weird quantum effects get absorbed by the 'learn from data' model, which may not happen if you only predict pairwise relationships and miss some of that physics.
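To make the distinction concrete, here is a toy sketch (in plain NumPy, with made-up weights and sizes; it is not the actual AlphaFold architecture) contrasting the two output heads: an AlphaFold 1 style head that predicts pairwise relationships, versus an end-to-end head that regresses 3D coordinates directly.

```python
import numpy as np

rng = np.random.default_rng(0)

L, D = 8, 16                           # toy protein length and feature size
embeddings = rng.normal(size=(L, D))   # stand-in for learned residue features

# AlphaFold 1 style head: predict pairwise relationships (an L x L
# distance map). A separate optimization step must then fold this
# map into actual 3D coordinates.
W_pair = rng.normal(size=(D, D))       # hypothetical learned weights
pair_logits = embeddings @ W_pair @ embeddings.T
distance_map = np.abs(pair_logits + pair_logits.T) / 2   # symmetric L x L

# End-to-end style head: regress a 3D position for every residue
# directly, so the network itself learns the map from sequence
# features to structure.
W_xyz = rng.normal(size=(D, 3))        # hypothetical learned weights
coords = embeddings @ W_xyz            # L x 3 coordinates

print(distance_map.shape)              # (8, 8)
print(coords.shape)                    # (8, 3)
```

The point of the sketch is only the shape of the outputs: the pairwise head never emits coordinates, so something downstream must reconcile the L x L constraints into a structure, while the end-to-end head puts the full sequence-to-structure burden on the learned model.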
If you are in a hurry, or just want an overview (and a DeepMind press release / mini-docu-drama), here's a 7-minute overview from DeepMind.
I was involved in a computational chemistry software research effort in the 1980s at Tektronix Research Labs, so it's personally amazing to see this particular problem (long considered intractable) essentially solved. Onwards.