2020 Joint Conference on AI Music Creativity

 Due to the magic of a global pandemic, we can all experience fascinating academic conferences virtually, from the comfort of our homes.  We all hope the pandemic ends sooner rather than later, but I think it would be great if the 'hold your conference virtually' trend continues indefinitely.

From the standpoint of trying to deal with climate change, it makes total sense.  

From the standpoint of making better use of people's time, it makes total sense.  

From the standpoint of reaching a bigger audience, it makes total sense.  

From the standpoint of archiving knowledge for easy access whenever anyone needs to understand something, it makes total sense.

One example of what I'm talking about is this really great conference on AI Music Creativity, happening in Stockholm, Sweden this week.  We can all check out this event virtually, and watch any of the presentations that pique our interest.  And there are a lot of great presentations to choose from.

Below are some of my personal highlights from the conference.  You may have different interests, so take a look at the schedule and dive into any presentations that sound interesting to you.  And do check out the ones below.

Keep in mind that anything discussed in this conference in the context of audio and music could be applied to other domains of interest (images, 3D models, text, 'insert your favorite topic here', etc.).  These are new ways to think about editing and generating signals using deep learning systems (and other AI approaches as well; not everything in this conference is deep learning based).

JukeBox

We previously discussed the JukeBox project here.  Christine Payne from OpenAI talks about JukeBox and other OpenAI projects in music creativity.  She discusses MuseNet (which works with MIDI) and then JukeBox (which works with raw audio); both are built on a Transformer architecture, which she breaks down and explains for us.
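
To make that a little more concrete, here is a tiny NumPy sketch of scaled dot-product attention, the core operation inside a Transformer.  The names and toy shapes are mine for illustration; this is not code from MuseNet or JukeBox.

# Minimal sketch of scaled dot-product attention, the heart of a Transformer.
# Toy shapes and names are for illustration only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_model) matrices of queries, keys, values.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 per query
    return weights @ V                  # weighted mix of the value vectors

# Toy usage: 8 tokens, 16-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
Wq, Wk, Wv = [rng.normal(size=(16, 16)) for _ in range(3)]
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (8, 16)

In MuseNet the tokens are MIDI events and in JukeBox they are compressed audio codes, but the attention operation itself is the same basic idea.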



Augmented Musical Interactions

Anna Huang discusses the Magenta project at Google.  We have discussed Magenta in previous posts here.  She covers improving generative systems, enabling new musical interactions, and supporting musicians' workflows.

A lot of very interesting ideas are discussed in this presentation, including DDSP (Differentiable Digital Signal Processing), attaching buttons to a latent representation so you can interactively play it, representing MIDI sequences as images in a neural net, and more.
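
To give a flavor of the 'MIDI sequences as images' idea, here is a toy sketch that rasterizes a few notes into a piano-roll matrix (time steps by pitches), the kind of 2D array an image-style network can consume.  The note format and grid resolution are my own assumptions for the example, not Magenta's actual data pipeline.

# Toy piano-roll encoding: turn (pitch, start, end) note tuples into a
# 2D time x pitch grid that an image-style network could consume.
import numpy as np

def notes_to_piano_roll(notes, steps_per_second=16, n_pitches=128):
    total = max(end for _, _, end in notes)
    n_steps = int(np.ceil(total * steps_per_second))
    roll = np.zeros((n_steps, n_pitches), dtype=np.float32)
    for pitch, start, end in notes:
        a = int(start * steps_per_second)
        b = int(end * steps_per_second)
        roll[a:b, pitch] = 1.0  # the "pixel" is on while the note sounds
    return roll

# A C major arpeggio: MIDI pitches 60, 64, 67, each half a second long.
notes = [(60, 0.0, 0.5), (64, 0.5, 1.0), (67, 1.0, 1.5)]
roll = notes_to_piano_roll(notes)
print(roll.shape)  # (24, 128) -- a small image-like array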

Oh how I wish DDSP had been invented when I got my DSP degree.
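
For a taste of what makes DDSP so appealing, here is a rough NumPy sketch of the additive harmonic synthesizer idea at its core: audio as a sum of sinusoids at multiples of a fundamental frequency, with controls a neural network could output and backpropagate through.  This is my simplified illustration of the signal model, not the actual ddsp library.

# Rough sketch of the additive (harmonic) synthesizer idea behind DDSP:
# audio is a sum of sinusoids at integer multiples of a fundamental
# frequency, with per-harmonic amplitudes.  In DDSP those controls come
# from a neural network and the synthesizer is differentiable; here it
# is plain NumPy, purely to show the signal model.
import numpy as np

def harmonic_synth(f0_hz, amplitudes, duration=1.0, sr=16000):
    t = np.arange(int(duration * sr)) / sr
    audio = np.zeros_like(t)
    for k, amp in enumerate(amplitudes, start=1):
        audio += amp * np.sin(2 * np.pi * k * f0_hz * t)
    return audio

# A 220 Hz tone with 8 harmonics rolling off as 1/k.
amps = np.array([1.0 / k for k in range(1, 9)])
amps /= amps.sum()
audio = harmonic_synth(220.0, amps)
print(audio.shape)  # (16000,)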



I'll be updating this post over the next few days as I work my way through more presentations.
