Publication
Generative View Synthesis: From Single-view Semantics to Novel-view Images
Tewodros Amberbir Habtegebrial; Varun Jampani; Orazio Gallo; Didier Stricker
In: Advances in Neural Information Processing Systems 33 (NeurIPS 2020), December 6–12, Vancouver, Canada, pp. 4745–4755, Curran Associates, Inc., 12/2020.
Abstract
Content creation, central to applications such as virtual reality, can be tedious and
time-consuming. Recent image synthesis methods simplify this task by offering
tools to generate new views from as little as a single input image, or by converting
a semantic map into a photorealistic image. We propose to push the envelope further,
and introduce Generative View Synthesis (GVS) that can synthesize multiple
photorealistic views of a scene given a single semantic map. We show that the
sequential application of existing techniques, e.g., semantics-to-image translation
followed by monocular view synthesis, fails to capture the scene’s structure. In
contrast, we solve the semantics-to-image translation in concert with the estimation
of the 3D layout of the scene, thus producing geometrically consistent novel views
that preserve semantic structures. We first lift the input 2D semantic map onto a 3D
layered representation of the scene in feature space, thereby preserving the semantic
labels of 3D geometric structures. We then project the layered features onto the
target views to generate the final novel-view images. We verify the strengths of our
method and compare it with several advanced baselines on three different datasets.
Our approach also allows for style manipulation and image editing operations, such
as the addition or removal of objects, with simple manipulations of the input style
images and semantic maps respectively. For code and additional results, visit the
project page at https://gvsnet.github.io.
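
The following is a minimal sketch of the lift-and-project idea described in the abstract, assuming a PyTorch setting and a multi-plane-image-style layered scene representation. The class name GVSSketch, all layer sizes, the number of depth planes, and the precomputed per-plane sampling grids passed in as flows are hypothetical stand-ins, not the authors' implementation; in particular, deriving those grids from the target camera pose and per-plane depths is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GVSSketch(nn.Module):
    """Hypothetical sketch: semantic map -> layered 3D feature planes
    -> warp planes to the target view -> composite -> RGB image."""

    def __init__(self, num_classes=19, num_planes=8, feat_dim=16):
        super().__init__()
        self.num_planes = num_planes
        self.feat_dim = feat_dim
        # Lift the one-hot semantic map to per-plane features plus an alpha.
        self.lift = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_planes * (feat_dim + 1), 3, padding=1),
        )
        # Decode the composited target-view features into an RGB image.
        self.decode = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, semantics, flows):
        # semantics: (B, num_classes, H, W) one-hot semantic map
        # flows: (B, num_planes, H, W, 2) normalized sampling grids that
        #        reproject each depth plane into the target view
        #        (their computation from camera geometry is omitted)
        B, _, H, W = semantics.shape
        x = self.lift(semantics).view(B, self.num_planes,
                                      self.feat_dim + 1, H, W)
        feats, alphas = x[:, :, :-1], torch.sigmoid(x[:, :, -1:])
        # Warp every feature plane and its alpha into the target view.
        warped_f = torch.stack(
            [F.grid_sample(feats[:, p], flows[:, p], align_corners=False)
             for p in range(self.num_planes)], dim=1)
        warped_a = torch.stack(
            [F.grid_sample(alphas[:, p], flows[:, p], align_corners=False)
             for p in range(self.num_planes)], dim=1)
        # Back-to-front alpha compositing (plane 0 assumed farthest).
        out = torch.zeros(B, self.feat_dim, H, W, device=semantics.device)
        for p in range(self.num_planes):
            a = warped_a[:, p]
            out = out * (1 - a) + warped_f[:, p] * a
        return self.decode(out)
```

As a smoke test, feeding an identity grid for every plane reduces the pipeline to plain single-view semantics-to-image translation:

```python
model = GVSSketch()
sem = F.one_hot(torch.randint(0, 19, (1, 64, 64)), 19)
sem = sem.permute(0, 3, 1, 2).float()          # (1, 19, 64, 64)
# Identity grid as a stand-in for real per-plane reprojection flows.
grid = F.affine_grid(torch.eye(2, 3).unsqueeze(0), (1, 1, 64, 64),
                     align_corners=False)       # (1, 64, 64, 2)
flows = grid.unsqueeze(1).expand(1, 8, 64, 64, 2)
img = model(sem, flows)                         # (1, 3, 64, 64)
```

Keeping the layered representation in feature space, rather than warping finished RGB pixels, is what lets a single decoder produce views that stay geometrically and semantically consistent across target cameras.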