Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that modifies the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
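The abstract's two-step design (a coarse, attribute-driven edit followed by a detail-restoring pass) can be illustrated with a minimal sketch. This is not the authors' released code: the module names, layer sizes, and the scheme of injecting the attribute offset as an extra input channel are all illustrative assumptions about how such a pipeline could be wired up.

```python
import torch
import torch.nn as nn


class AttributeEditor(nn.Module):
    """Step 1 (assumed form): coarse appearance change driven by a scalar
    perceptual-attribute offset, injected as an extra constant channel."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, image: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
        # Broadcast the attribute offset to a constant map and concatenate
        # it with the RGB input before the convolutional stack.
        b, _, h, w = image.shape
        delta_map = delta.view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([image, delta_map], dim=1))


class DetailRefiner(nn.Module):
    """Step 2 (assumed form): residual network that re-introduces
    high-frequency detail lost in the coarse edit."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, coarse: torch.Tensor) -> torch.Tensor:
        return coarse + self.net(coarse)  # residual refinement


# Usage: increase a perceptual attribute (e.g. "Glossy") of a single image.
editor, refiner = AttributeEditor(), DetailRefiner()
image = torch.rand(1, 3, 256, 256)  # single input image
delta = torch.tensor([0.5])         # positive offset = increase attribute
edited = refiner(editor(image, delta))
print(edited.shape)  # torch.Size([1, 3, 256, 256])
```

Splitting the edit this way lets the first network focus on the global appearance shift while the second, residual pass only has to recover fine texture, which matches the division of labour the abstract describes.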