Sketch to Art Style Transfer Techniques - how to bring the artist into the process

How can artists become more directly involved in working with and controlling deep learning based style transfer techniques?  This is an important consideration, since many deep learning systems and algorithms are developed by computer scientists or engineers, who often have very different sensibilities from people with a traditional art background.

Several different approaches were presented and discussed at Siggraph 2020.  Let's take a look at two of them, both presented in the Real-Time Live demo session.  Both are sections of a much longer video covering a wide variety of live demos; while the rest of that video does not directly relate to today's post topic, feel free to watch it if you are interested.

The first live demo is called 'Interactive Video Stylization Using Few-Shot Patch-Based Training' by Ondrej Texler.


The second live demo is called 'Sketch-To-Art: Synthesizing Stylized Art Images From Hand-Drawn Sketches With No Semantic Labeling' by Ahmed Elgammal.  To get to it, move the above video's start time to 46:23, and then watch from that point until that presentation ends (at time 53:44).


Both of these approaches are very interesting because the artist is directly incorporated into the overall process of making the stylized art.


In the first demo, an artist can sketch or paint by hand, live, over an onion-skin view of the source image they are working with.  As the artist draws, the system learns the stylization interactively and in real time.

Here's a link to their paper titled 'Interactive Video Stylization Using Few-Shot Patch-Based Training'.

In a Siggraph interview, Ondrej had the following to say.  

" Imagine you would like to stylize a video for which it would be challenging to collect a sufficiently variable training dataset, such as the interior of a medieval castle. You do not want to paint every frame by hand, so you provide one — or a few — stylized keyframes. Then, you would like to see your style transferred to the rest of the sequence in a semantically meaningful way. For example, bricks depicted using the same red brushstrokes as in the exemplar. You also would like to see an arbitrary frame in the sequence stylized quickly. You do not need to wait a long time to get the entire video stylized. All of those practical requirements were highly challenging for previous methods, and thus we tried to address them in our framework.

In our training strategy, we take into account the way artists create their artworks. Painting is usually a slow and incremental process. Thanks to this fact, the network also can follow the artist and gradually improve. Our method’s great advantage is that it allows us to parallelize the painting creation with training, and to amortize the time that would otherwise be required to train the network for the already existing style exemplar."
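
To make this training strategy a little more concrete, here's a minimal PyTorch sketch of the core idea, assuming a single (video frame, hand-painted keyframe) pair.  All of the names here (PatchStylizer, sample_patches) are hypothetical, and the actual framework from the paper uses a more elaborate network and additional losses; this only illustrates training on small aligned patches and then running the fully convolutional net over whole frames.

```python
# A minimal, hypothetical sketch of few-shot patch-based training.
# Core idea: train an image-to-image network on random patches cropped
# from a handful of (video frame, painted keyframe) pairs, then run it
# fully convolutionally on every frame of the sequence.
import torch
import torch.nn as nn

class PatchStylizer(nn.Module):
    """Small fully convolutional net: trained on patches,
    applied to whole frames at inference time."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def sample_patches(frame, keyframe, n=16, size=64):
    """Crop n spatially aligned random patches from a source frame and
    the artist's painted version of it (both 3xHxW tensors in [0, 1])."""
    _, h, w = frame.shape
    xs, ys = [], []
    for _ in range(n):
        top = torch.randint(0, h - size + 1, (1,)).item()
        left = torch.randint(0, w - size + 1, (1,)).item()
        xs.append(frame[:, top:top+size, left:left+size])
        ys.append(keyframe[:, top:top+size, left:left+size])
    return torch.stack(xs), torch.stack(ys)

# Toy example with random tensors standing in for a real keyframe pair.
frame = torch.rand(3, 256, 256)      # original video frame
keyframe = torch.rand(3, 256, 256)   # artist's painted version of it

model = PatchStylizer()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
loss_fn = nn.L1Loss()

for step in range(200):  # can keep iterating while the artist paints
    x, y = sample_patches(frame, keyframe)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Because the net is fully convolutional, it can now stylize any
# full-resolution frame from the same sequence in a single pass.
with torch.no_grad():
    stylized = model(torch.rand(3, 256, 256).unsqueeze(0))
```

Because training operates on small patches sampled on the fly, a loop like this can keep running while the artist is still painting, which is what makes it possible to parallelize the painting with the training as described in the quote above.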


In the second demo, the artist is again directly involved in the process in a live, interactive way.

Here's a link to a related paper called 'Sketch-To-Art: Synthesizing Stylized Art Images From Sketches'.  This part of the demo is based on a GAN framework implementation. Unlike GauGAN and related techniques, the user does not need to apply semantic labels to different parts of the sketch.

Here's a link to a blog post article about what is going on in the demo, called 'Sketch-To-Art: Synthesizing Stylized Art Images From Hand-Drawn Sketches'.  Here's a quote from this article.

"Most research works have focused on synthesizing in specific style and genre, for example photo-realistic images of specific categories, such as dog and cats in SketchyGAN, or photo-realistic landscapes in Nvidia’s GauGAN, or cartoonish images from sketches in Auto-Painter.

The process we are proposing is different. Users can define a style by either choosing a reference image, or selecting an artist, or an art movement. For example, choosing a particular William Turner landscape as a style to dictate dramatic foggy sunset effect, or selecting Cézanne style in general to achieve flat patch post-impressionist effect, or expressionism as a movement to enforce dramatic strong color effect. The approach we developed is able to capture style characteristics from the variety of user inputs and rendered the user sketch based on these characteristics."
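
To give a rough sense of how a style can be specified by a reference image rather than by semantic labels, here is a hedged PyTorch sketch that conditions generation on a style code using adaptive instance normalization (AdaIN).  All names here are hypothetical and the actual Sketch-To-Art architecture differs in its details; in the real system a generator like this would also be trained adversarially against a discriminator, which is omitted.

```python
# A hypothetical sketch of style conditioning via adaptive instance
# normalization (AdaIN): the sketch supplies content, a reference
# painting supplies a style code, and no semantic labels are required
# anywhere in the pipeline.
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Re-normalize content features to a scale/shift predicted
    from the style code."""
    def __init__(self, ch, style_dim):
        super().__init__()
        self.to_scale = nn.Linear(style_dim, ch)
        self.to_shift = nn.Linear(style_dim, ch)
    def forward(self, feat, style):
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True) + 1e-5
        normed = (feat - mean) / std
        scale = self.to_scale(style).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(style).unsqueeze(-1).unsqueeze(-1)
        return normed * (1 + scale) + shift

class SketchToArtGenerator(nn.Module):
    def __init__(self, ch=64, style_dim=128):
        super().__init__()
        # Content encoder: consumes the raw sketch, no labels needed.
        self.encode = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # Style encoder: pools a reference painting into a style vector.
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, style_dim),
        )
        self.adain = AdaIN(ch, style_dim)
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, sketch, style_image):
        content = self.encode(sketch)          # what to draw
        style = self.style_enc(style_image)    # how to paint it
        return torch.tanh(self.decode(self.adain(content, style)))

# Usage: one grayscale sketch, one RGB reference painting.
g = SketchToArtGenerator()
out = g(torch.rand(1, 1, 256, 256), torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The key design point this illustrates is that the style vector could come from anywhere: a single reference painting, or an embedding averaged over an artist's or an art movement's works, which matches the range of style choices described in the quote above.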

Ahmed is the Director of the Art and Artificial Intelligence Laboratory at Rutgers University.  Their homepage is here.
