Siggraph 2020 Roundup

Like everything else this year, Siggraph, the yearly grand poobah of computer graphics conferences, was held virtually, complete with a virtual exhibition hall.  As someone who exhibited at Siggraph for many years, my immediate thought was whether exhibition management would charge you for the chair you were sitting in at your computer while the virtual exhibition was running.

The technical paper sessions are the real meat of this particular conference.  If you look at the history of computer graphics research, you're looking at a trail of Siggraph technical papers stretching from the present all the way back to the very first conference, held in Boulder, Colorado in 1974.

Of course, some historians feel that the first real Siggraph was the one held in San Jose in 1977, since it was the first to include commercial exhibits alongside the usual academic conference fare.  And Siggraph has always successfully married academic research with commercial applications and hardware built on that body of research.

Siggraph technical papers present state-of-the-art research into the latest techniques and algorithms for generating computer graphics.  Because of this, any given year of technical papers provides an overview of that era's zeitgeist of thinking about problems in computer graphics.  And looking back, you can date the papers by matching their keyword terminology to the hot new technology or algorithm of those past eras.

So in the early days it was vectors and splines, then raster graphics, and later things like OpenGL, GPU algorithms, ray tracing, blue noise, non-photorealistic rendering, etc.  Hot topics in their time.  Moving to the present, one would expect a big emphasis on deep learning and neural net enhancements to graphics algorithms, since everything tastes better with a little deep learning sprinkled onto it.

Since it's the weekend, we'll start a new HTC Siggraph tradition: the 'HTC Siggraph Deep Learning Drinking Challenge'.  Maybe we'll print up tee shirts to commemorate the event and promote team building.  Like all drinking games, it involves taking a swig of something every time you hear a key phrase.  In our game, we'll knock back a shot whenever 'deep', 'deep learning', 'neural', or 'neural net' is used in a technical paper presentation.

Here's a quick run through some of the different technical paper presentation sessions.  This is not complete coverage of everything presented, but covers some specific sessions I was interested in.

Modeling and Synthesis

Immediately 3 of the 4 papers call for a shot just from their titles. Read the abstracts, and all 4 qualify.

Neural Subdivision     -abstract,pdf

Like regular geometry subdivision, but better, because it uses a neural network to predict the new vertex positions from the local geometry of the input mesh.
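To make the idea concrete, here's a toy sketch of learned subdivision (my own simplification, not the paper's actual architecture): split a triangle at its edge midpoints, then let a small learned map nudge each new vertex based on local edge geometry, instead of using a fixed averaging rule.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(6, 3))  # stand-in for trained weights

def subdivide_once(tri):
    """tri: (3, 3) triangle vertices. Returns the (6, 3) vertices of a
    midpoint-subdivided patch, with learned offsets on the midpoints."""
    mids = []
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        mid = 0.5 * (tri[a] + tri[b])
        # local feature: the two endpoints expressed relative to the midpoint
        feat = np.concatenate([tri[a] - mid, tri[b] - mid])
        mids.append(mid + feat @ W)  # the "network" predicts a displacement
    return np.vstack([tri, mids])

tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
print(subdivide_once(tri).shape)  # (6, 3)
```

In the real paper the predictor is a proper trained network applied recursively over several subdivision levels; the point here is just that the midpoint placement is learned rather than hard-coded.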

Deep Geometric Texture Synthesis     -abstract,pdf

Proposes a GAN-like (Generative Adversarial Network) system for generating 3D geometry.  So you show their system a 3D texture sample, and it can then apply that 3D texture to an existing 3D model by displacing vertices on the target shape.

Graph2Plan: Learning Floorplan Generation From Layout Graphs     -abstract,pdf

Introduces a learning framework for automated floorplan generation which combines generative modeling using deep neural networks and user-in-the-loop designs to enable human users to provide sparse design constraints represented by a layout graph.

Deep Generative Modeling for Scene Synthesis via Hybrid Representations     -abstract,pdf

Uses a neural network to parameterize the space of 3D indoor scenes from a low-dimensional parametric vector, with the goal of creating a generative model that procedurally generates indoor scenes.

Geometric Deep Learning

Dynamic Graph CNN for Learning on Point Clouds     -abstract,pdf

Introduces a CNN (convolutional neural network) operation called EdgeConv that can perform tasks like classification and segmentation using 3D point cloud data as its input.
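The core EdgeConv idea can be sketched in a few lines of numpy (names and sizes here are illustrative, not the paper's implementation): for each point, build edge features from its k nearest neighbors via a shared transform of (point, neighbor - point), then max-pool over the neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_conv(points, k, weight):
    """points: (n, d) cloud; weight: (2d, out) shared 'MLP' (a single
    linear layer + ReLU here for brevity). Returns (n, out) features."""
    n, d = points.shape
    # pairwise distances to find the k nearest neighbors of each point
    diff = points[:, None, :] - points[None, :, :]      # (n, n, d)
    dist = np.linalg.norm(diff, axis=-1)
    knn = np.argsort(dist, axis=1)[:, 1:k + 1]          # skip self
    out = []
    for i in range(n):
        xi = np.broadcast_to(points[i], (k, d))
        rel = points[knn[i]] - points[i]                # x_j - x_i
        e = np.concatenate([xi, rel], axis=1) @ weight  # (k, out)
        out.append(np.maximum(e, 0).max(axis=0))        # ReLU + max-pool
    return np.stack(out)

pts = rng.normal(size=(32, 3))
w = rng.normal(size=(6, 16))
feats = edge_conv(pts, k=8, weight=w)
print(feats.shape)  # (32, 16)
```

The "dynamic" part of Dynamic Graph CNN is that the k-nearest-neighbor graph is recomputed in feature space at each layer, so the graph itself evolves as features are learned.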

Point2Mesh: A Self-prior for Deformable Meshes     -abstract,pdf

Introduces a CNN called Point2Mesh that reconstructs a surface mesh from 3D point cloud data input.

MGCN: Descriptor Learning Using Multiscale GCNs     -abstract,pdf

Introduces an MGCN (multiscale graph convolutional network) and a WEDS (wavelet energy decomposition signature) for computing point descriptors on a 3D surface.

CNNs on Surfaces Using Rotation-equivariant Features     -abstract,pdf

Proposes a CNN architecture for surfaces built from vector-valued, rotation-equivariant features, which allows local features computed in arbitrary coordinate systems to be aligned.

Capturing and Editing Faces

DeepFaceDrawing: Deep Generation of Face Images From Sketches     -abstract,pdf

Trains a neural net to convert user sketches of faces into photo-realistic raster facial images.

MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing     -abstract,pdf

Enhances a GAN (Generative Adversarial Network) algorithm for generating portrait images, allowing improved portrait hair generation as well as interactive user adjustments.

Single-shot High-quality Facial Geometry and Skin Appearance Capture     -abstract,pdf

