Making Machine Learning Art in Colab - Part 1

We're going to be presenting a series of videos on the topic of making machine learning art with Colab. They are from a course put together by Derrick Schultz and Lia Coleman.

We have talked about Colab quite a bit here on HTC, so you've probably run into it before if you have read other HTC posts. It's a way to run Jupyter notebooks on a Google server that lets you use GPU resources to run the notebook for free. Colab notebooks were used heavily in both of our HTC Education Series courses on deep learning.
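If you want to confirm that your Colab session actually has a GPU attached before running anything heavy, a quick sanity check like the following works. This is a minimal sketch using only the Python standard library (the menu path mentioned in the fallback message reflects Colab's current layout and may change):

```python
# Quick sanity check that the Colab runtime has a GPU attached.
# Uses only the standard library, so it works before installing any frameworks.
import shutil
import subprocess

def gpu_available():
    """Return True if the NVIDIA driver tools are present (i.e. a GPU runtime)."""
    return shutil.which("nvidia-smi") is not None

if gpu_available():
    # Print the driver's summary of the attached GPU(s).
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
else:
    print("No GPU found - in Colab, choose Runtime > Change runtime type > GPU.")
```

Frameworks like PyTorch have their own checks (`torch.cuda.is_available()`), but the above runs in any fresh runtime.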

The focus of this course and the accompanying series of videos is to get digital artists comfortable enough to work with pre-built Colab implementations of different machine learning models. Machine learning is a pretty broad term; what we are really talking about is deep learning neural networks, primarily ones that implement different generative models.

Most if not all of the models have already been covered in some form here at HTC.  And we'll be diving into more of them in increasing technical depth as time permits.

My focus tends to be from the standpoint of someone building new models, so understanding the interior guts of how they work, and how they are implemented in code (hopefully PyTorch). But most interesting research papers in deep learning these days have accompanying open source implementations you can run in Colab.

So from the standpoint of a digital artist experimenting with this stuff, you are using a research implementation of some experimental generative deep learning model to generate custom processed images for you.

The first video in the series covers the basics of working with Google Colab. It's very much a 'getting started with Colab' lesson; there's not really any neural net discussion or analysis in it (other than a quick demonstration of how he used a prebuilt MUNIT net in Colab to process some photos).


Here's a link to the course notes.

To be honest, I think people might be better served watching the second video below first, because it shows off actually using this stuff to do interesting visual things rather than just covering the basics of Colab like the first one does.

This second video covers a very basic, loose intro to deep learning, and then dives into DeepDream, which was developed as a visualization technique for understanding the representations encoded inside different layers of deep neural networks, but which can also be used to generate a wide variety of trippy visual images.
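The core move in DeepDream is simple to sketch: run gradient ascent on the input image itself so that a chosen layer's activations get stronger, then look at what the image becomes. Here's a toy illustration of that idea using only NumPy. Note the "layer" here is a single fixed linear filter bank with a hand-computed gradient, standing in for a real pretrained network with autograd; the filter shapes, step size, and iteration count are all illustrative:

```python
# Toy sketch of DeepDream's core move: gradient ascent on the image so a
# chosen "layer" fires more strongly. Real DeepDream uses a deep pretrained
# net (e.g. Inception) plus autograd; here the layer is one linear filter
# bank and the gradient is computed analytically.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))   # stand-in "layer": 16 filters over a 64-pixel image
img = rng.standard_normal(64)       # the image we will "dream" on

def activation(x):
    # How strongly the layer fires on image x.
    return 0.5 * np.sum((W @ x) ** 2)

initial = activation(img)
for _ in range(50):
    grad = W.T @ (W @ img)                               # d(activation)/d(img)
    img += 0.01 * grad / (np.abs(grad).mean() + 1e-8)    # normalized ascent step

print(activation(img) > initial)    # the "dreamed" image now excites the layer more
```

With a real network, the same loop amplifies whatever textures and shapes that layer has learned to detect, which is where the trippy imagery comes from.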

Derrick then shows off a bunch of different art examples from a few artists using this stuff in their work.


We've covered feature visualization techniques quite a bit here at HTC. And I would highly encourage you to check out the various Distill papers if you are into that kind of stuff.


Observations:

1.  I really loved seeing the artwork from the individual artists.  It's very inspiring, and can help you think about how to approach working with these kinds of systems.

However, at least 50% of the output seems like it could have been generated by working with movie or image folder brushes using the Studio Artist paint synthesizer. That approach, I think, is way easier, way faster, much higher resolution, and can be done on your desktop rather than dealing with Colab and GPU training in the cloud.

I'm obviously biased, but I'm also extremely aware of what Studio Artist can achieve because of my close relationship with it.

And certainly my deep dive journey into the world of deep learning has influenced the recent development of Studio Artist V5.5 (especially the load style features that allow for visual attribute modulation from both a source and a style image simultaneously).

Working with collections of images (as opposed to a single image) also has a fairly long history that pre-dates the recent influence of deep learning systems. We've been doing stack filtering for over 10 years.

