纳金网

Title: Rendering Synthetic Objects into Legacy Photographs

Author: 晃晃    Time: 2011-12-28 10:16
Title: Rendering Synthetic Objects into Legacy Photographs
Rendering Synthetic Objects into Legacy Photographs

Kevin Karsch     Varsha Hedau    David Forsyth     Derek Hoiem

University of Illinois at Urbana-Champaign

{karsch1,vhedau2,daf,dhoiem}@uiuc.edu





Abstract

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.

CR Categories: I.2.10 [Computing Methodologies]: Artificial Intelligence—Vision and Scene Understanding; I.3.6 [Computing Methodologies]: Computer Graphics—Methodology and Techniques

Keywords: image-based rendering, computational photography, light estimation, photo editing

1 Introduction

Many applications require a user to insert 3D meshed characters, props, or other synthetic objects into images and videos. Currently, to insert objects into the scene, some scene geometry must be manually created, and lighting models may be produced by photographing mirrored light probes placed in the scene, taking multiple photographs of the scene, or even modeling the sources manually. Either way, the process is painstaking and requires expertise.

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene, special equipment, multiple photographs, time lapses, or any other aids. Our approach, outlined in Figure 2, is to take advantage of small amounts of annotation to recover a simplistic model of geometry and the position, shape, and intensity of light sources. First, we automatically estimate a rough geometric model of the scene, and ask the user to specify (through image space annotations) any additional geometry that synthetic objects should interact with. Next, the user annotates light sources and light shafts (strongly directed light) in the image. Our system automatically generates a physical model of the scene using these annotations. The models created by our method are suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene.
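To make the division of labor in this workflow concrete, below is a minimal structural sketch in Python. All names, data structures, and function signatures are hypothetical illustrations of the steps just described, not the authors' code or API; the heavy steps are left as stubs.

from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class SceneAnnotations:
    # User-supplied, image-space annotations (hypothetical containers).
    extra_surfaces: List[Any] = field(default_factory=list)  # geometry the object should interact with
    light_sources: List[Any] = field(default_factory=list)   # annotated light sources
    light_shafts: List[Any] = field(default_factory=list)    # strongly directed light regions

def estimate_rough_geometry(photo):
    # Automatic step: rough scene layout and camera from the photo alone.
    raise NotImplementedError("layout/camera estimation goes here")

def build_lighting_model(photo, geometry, light_sources, light_shafts):
    # Turn light annotations into a physical model (position, shape, intensity).
    raise NotImplementedError("physical light estimation goes here")

def render_and_composite(photo, geometry, lighting, mesh):
    # Physically based render of the object in the scene model, then composite.
    raise NotImplementedError("rendering and compositing go here")

def insert_object(photo, annotations: SceneAnnotations, synthetic_mesh):
    # 1. Recover a rough geometric model of the scene automatically.
    geometry = estimate_rough_geometry(photo)
    # 2. Add any user-specified surfaces the object should touch or shadow.
    geometry = list(geometry) + list(annotations.extra_surfaces)
    # 3. Build a physical lighting model from the light and light-shaft annotations.
    lighting = build_lighting_model(photo, geometry,
                                    annotations.light_sources,
                                    annotations.light_shafts)
    # 4. Render so that object/scene lighting interactions (shadows,
    #    interreflections) are captured, then composite into the photo.
    return render_and_composite(photo, geometry, lighting, synthetic_mesh)

The point of the sketch is only the structure described in the paragraph above: one automatic geometry step, two lightweight annotation-driven steps, and a final physically based render and composite.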

In addition to our overall system, our primary technical contribution is a semiautomatic algorithm for estimating a physical lighting model from a single image. Our method can generate a full lighting model that is demonstrated to be physically meaningful through a ground truth evaluation. We also introduce a novel image decomposition algorithm that uses geometry to improve lightness estimates, and we show in another evaluation that it is state-of-the-art for single image reflectance estimation. We demonstrate with a user study that the results of our method are confusable with real scenes, even for people who believe they are good at telling the difference. Our study also shows that our method is competitive with other insertion methods while requiring less scene information.

This method has become possible thanks to advances in the recent literature. In the past few years, we have learned a great deal about extracting high level information from indoor scenes [Hedau et al. 2009; Lee et al. 2009; Lee et al. 2010], and that detecting shadows in images is relatively straightforward [Guo et al. 2011]. Grosse et al. [2009] have also shown that simple lightness assumptions lead to powerful surface estimation algorithms; Retinex remains among the best methods.
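As background for the decomposition discussion above, the sketch below shows a minimal grayscale Retinex baseline of the kind benchmarked by Grosse et al. [2009]: log-image gradients above a threshold are attributed to reflectance, the rest to shading, and the reflectance gradients are re-integrated with a Poisson solve. The threshold value and the FFT-based Poisson solver are illustrative choices, and the sketch does not include the geometry-aware improvements that constitute the paper's contribution.

import numpy as np

def solve_poisson_fft(div):
    # Solve lap(f) = div under periodic boundary conditions via the FFT.
    h, w = div.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(w)
    wy = 2.0 * np.pi * np.fft.fftfreq(h)
    denom = 2.0 * np.cos(wx)[None, :] + 2.0 * np.cos(wy)[:, None] - 4.0
    denom[0, 0] = 1.0                     # avoid division by zero at DC
    f_hat = np.fft.fft2(div) / denom
    f_hat[0, 0] = 0.0                     # solution is defined up to a constant
    return np.real(np.fft.ifft2(f_hat))

def divergence(gx, gy):
    # Backward-difference divergence matching forward-difference gradients.
    div = gx.copy()
    div[:, 1:] -= gx[:, :-1]
    div += gy
    div[1:, :] -= gy[:-1, :]
    return div

def retinex_decompose(gray, threshold=0.1, eps=1e-4):
    # Classic single-image Retinex: large log-intensity gradients are
    # treated as reflectance (albedo) edges, small ones as smooth shading.
    log_i = np.log(gray + eps)

    gx = np.zeros_like(log_i)
    gy = np.zeros_like(log_i)
    gx[:, :-1] = np.diff(log_i, axis=1)
    gy[:-1, :] = np.diff(log_i, axis=0)

    # Keep only strong gradients as reflectance gradients.
    rx = np.where(np.abs(gx) > threshold, gx, 0.0)
    ry = np.where(np.abs(gy) > threshold, gy, 0.0)

    # Re-integrate the reflectance gradient field; the result is
    # recovered only up to a global scale factor.
    log_r = solve_poisson_fft(divergence(rx, ry))
    reflectance = np.exp(log_r)
    shading = gray / np.maximum(reflectance, eps)
    return reflectance, shading

# Toy check: a two-tone albedo image lit by a smooth horizontal gradient.
h, w = 128, 128
shading_gt = np.tile(np.linspace(0.2, 1.0, w), (h, 1))
albedo_gt = np.tile(np.where(np.arange(w) < w // 2, 0.3, 0.8), (h, 1))
reflectance, shading = retinex_decompose(albedo_gt * shading_gt)

On such a toy input, the recovered reflectance should be approximately piecewise constant (up to a global scale) while the shading absorbs the smooth gradient, which is exactly the behavior the Retinex assumption encodes.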









Please download the attachment for the full text:
Author: 奇    Time: 2012-10-2 23:20
Taking another look, and bumping the thread for the OP.

Author: 菜刀吻电线    Time: 2012-10-24 23:30
Nice, nice. Bookmarked.

Author: 奇    Time: 2012-11-29 23:22
Very classic and practical. Learned a lot!

Author: 奇    Time: 2013-1-28 23:18
Heh, very nice and handy.

Author: 菜刀吻电线    Time: 2013-2-10 23:21
Heh, very nice and handy.




