Creative Adversarial Networks (CAN) - Generating Art by Learning About Styles and then Deviating from Style Norms

What is art?  What is creativity?  What is 'aesthetically appealing'?  These are fascinating questions that perhaps tell us more about what it means to be human.

Which poses the following question: is it possible to remove the human completely from the art-making process and still generate what humans would consider art?


Here's a link to a blog article on this topic.  It points out something interesting: conventional GAN systems might be great at creating new images of a particular type (faces, landscapes, furniture), but in some sense they aren't really being creative, at least not in a certain artistic sense, because they aren't breaking away from the representations they were trained to generate.

The developers of these GAN systems would probably reply, of course not, because that would defeat the whole point of a trained GAN system, which is to generate things that look like the type of images it was trained on.

But in some sense, the whole point of being creative is to break away from convention.  To create something new and different.  To explore the creative space of possibilities.



Here's a link to a technical paper on this topic called 'CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms'.  It introduces a new kind of GAN, called a Creative Adversarial Network (CAN).

Here's an example of some CAN system generated artwork from the paper linked above.

A CAN generator starts with random noise like a GAN, but receives two different signals from the CAN system's discriminator.  The first is a classification of 'art or not art'.  The second is a 'style ambiguity' signal that measures how confused the discriminator is in trying to identify the output of the generator as art of known styles.  This allows a CAN system to train its generator to output images the discriminator recognizes as 'art', but that have a level of style ambiguity.
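To make that second signal concrete, here's a minimal NumPy sketch of a style-ambiguity term (the function names, the number of styles, and the exact loss form are my own illustrative choices, not lifted from the paper): the generator does well when the discriminator's style classification is close to uniform, i.e. maximally confused about which known style the image belongs to.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the style logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def style_ambiguity_loss(style_logits):
    """Cross-entropy between the discriminator's style prediction and a
    uniform distribution over the K known styles.  A CAN-style generator
    would be trained to minimize this, pushing its output toward images
    the discriminator cannot confidently assign to any one style."""
    p = softmax(style_logits)
    k = len(p)
    uniform = np.full(k, 1.0 / k)
    return -np.sum(uniform * np.log(p + 1e-12))

# A confident style prediction incurs a high loss...
confident = np.array([8.0, 0.0, 0.0, 0.0])
# ...while a maximally ambiguous one hits the minimum, log(K).
ambiguous = np.zeros(4)
print(style_ambiguity_loss(confident) > style_ambiguity_loss(ambiguous))  # True
print(np.isclose(style_ambiguity_loss(ambiguous), np.log(4)))             # True
```

In the full system this term would be combined with the usual adversarial 'art or not art' loss, so the generator is pulled toward images that read as art overall while resisting classification into any single known style.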

Here's a block diagram of the CAN architecture from the paper linked above.


The notion of having a discriminator that computes multiple outputs, and then passing those back to a GAN generator, is interesting.  This work was done at the Art and Artificial Intelligence Laboratory at Rutgers University.
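As a rough illustration of that multi-output idea (the layer sizes and names below are mine, not the paper's), a discriminator can share one feature extractor and branch into two heads, one for the 'art or not art' score and one for style classification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared feature extractor followed by two output heads.  The weights
# are random here purely to show the shapes and data flow, not to
# reproduce the paper's trained network.
W_shared = rng.normal(size=(64, 16))   # image features -> shared hidden layer
W_real   = rng.normal(size=(16, 1))    # hidden -> 'art or not art' logit
W_style  = rng.normal(size=(16, 5))    # hidden -> logits over 5 known styles

def discriminate(x):
    h = np.maximum(0.0, x @ W_shared)    # ReLU hidden layer
    real_logit = (h @ W_real).item()     # signal 1: art / not art
    style_logits = h @ W_style           # signal 2: style classification
    return real_logit, style_logits

real_logit, style_logits = discriminate(rng.normal(size=64))
print(style_logits.shape)  # (5,)
```

Both heads see the same features, so a single backward pass can send both signals to the generator: one pushing it toward 'art', the other toward style ambiguity.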


Comments

  1. So I looked at another related article linked above that describes an art installation in NYC based on this algorithm. And it showed off a bunch of different images generated by the algorithm as part of the art gallery installation.

    I have to be honest, the imagery in the art gallery installation looked a lot like MSG (modular synthesized graphics) procedural imagery that one could have generated in Studio Artist 10 years ago or more. Note that the imagery could be completely generated by the MSG system in Studio Artist without any human intervention other than grabbing ones they liked (and I'm sure they did exactly the same for the AI art exhibit I'm referring to, ie: hand-picked the 'best' output images for their gallery exhibition).

    So I think we need to be a little bit cautious about rave reviews of 'neural net generated art' or AI art, where reviewers or the media get caught up in the hype of AI art.


