Material Editing Using a Physically Based Rendering Network


Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang and Jyh-Ming Lien


Overview

The ability to edit the materials of objects in images is desirable to many content creators. However, this is an extremely challenging task, as it requires disentangling the intrinsic physical properties of an image. We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task. Specifically, given a single image, the network first predicts the intrinsic properties, i.e., shape, illumination, and material, which are then provided to a rendering layer. This layer performs in-network image synthesis, thereby enabling the network to understand the physics behind the image formation process. The proposed rendering layer is fully differentiable, supports both diffuse and specular materials, and is thus applicable to a variety of problem settings. We demonstrate a rich set of visually plausible material editing examples and provide an extensive comparative study.
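To make the idea concrete, below is a minimal sketch of what such a differentiable rendering layer could look like. This is not the authors' released code (see the code section below); it assumes a simple Phong-style shading model with a small set of directional lights, and the class name, tensor layouts, and parameters are all hypothetical. Because every operation is differentiable, gradients from an image-reconstruction loss can flow back into the shape, illumination, and material predictors.

```python
# Hypothetical sketch of a differentiable rendering layer (PyTorch).
# Assumes Phong shading with directional lights; not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhongRenderingLayer(nn.Module):
    """Renders an image from per-pixel normals, material parameters,
    and directional lights, using only differentiable tensor ops."""

    def forward(self, normals, albedo, specular, shininess,
                light_dirs, light_colors):
        # normals:      (B, 3, H, W), surface normals
        # albedo:       (B, 3, H, W), diffuse reflectance
        # specular:     (B, 1, H, W), specular reflectance
        # shininess:    (B, 1, 1, 1), Phong exponent
        # light_dirs:   (B, L, 3), directions toward each light
        # light_colors: (B, L, 3), RGB intensity of each light
        n = F.normalize(normals, dim=1)
        view = torch.tensor([0.0, 0.0, 1.0], device=n.device)  # camera along +z
        image = torch.zeros_like(albedo)
        for i in range(light_dirs.shape[1]):
            l = F.normalize(light_dirs[:, i], dim=1).view(-1, 3, 1, 1)
            c = light_colors[:, i].view(-1, 3, 1, 1)
            # Diffuse term: albedo * max(n . l, 0)
            n_dot_l = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)
            image = image + c * albedo * n_dot_l
            # Specular term: max(r . v, 0)^shininess, with r = reflect(-l, n)
            r = 2.0 * n_dot_l * n - l
            r_dot_v = (r * view.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)
            image = image + c * specular * r_dot_v.clamp(min=0.0).pow(shininess)
        return image.clamp(0.0, 1.0)
```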

Paper

Material Editing Using a Physically Based Rendering Network. Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, and Jyh-Ming Lien. International Conference on Computer Vision (ICCV), spotlight, 2017.
Website / Paper / BibTeX

Rendering Layer Code & Detailed Normal Generation Code

Coming soon.

Result

Material Transfer Example 1



Material Transfer Example 2: Cross-Material Transfer Between Images (High-Resolution PDF)

Given a set of input images (shown on the diagonal, in red boxes), our approach synthesizes each new image using the shape and illumination of the image in its row and the material of the image in its column; a procedural sketch follows below.
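For illustration, the grid in the figure could be assembled as follows, reusing the rendering-layer interface sketched above. Here `predict_intrinsics` is a hypothetical stand-in for the network's decomposition step; it is assumed to return a dict of tensors matching the layer's inputs.

```python
# Hypothetical sketch of cross-material transfer between images:
# shape and light come from the row image, material from the column image.
def make_transfer_grid(images, predict_intrinsics, render_layer):
    props = [predict_intrinsics(img) for img in images]
    grid = []
    for row in props:  # shape and illumination source
        grid.append([
            render_layer(row["normals"], col["albedo"], col["specular"],
                         col["shininess"], row["light_dirs"], row["light_colors"])
            for col in props  # material source
        ])
    return grid  # grid[r][c]: shape/light of image r, material of image c
```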


Computer Science @ George Mason University