Image-Based Bidirectional Scene Reprojection
Lei Yang1 Yu-Chiu Tse1 Pedro V. Sander1 Jason Lawrence2 Diego Nehab3,4 Hugues Hoppe3 Clara L. Wilkins5
1Hong Kong UST 2University of Virginia 3Microsoft Research 4IMPA 5Wesleyan University
Abstract
We introduce a method for increasing the framerate of real-time
rendering applications. Whereas many existing temporal upsam-
pling strategies only reuse information from previous frames, our
bidirectional technique reconstructs intermediate frames from a pair
of consecutive rendered frames. This significantly improves the
accuracy and efficiency of data reuse since very few pixels are si-
multaneously occluded in both frames. We present two versions of
this basic algorithm. The first is appropriate for fill-bound scenes as
it limits the number of expensive shading calculations, but involves
rasterization of scene geometry at each intermediate frame. The sec-
ond version, our more significant contribution, reduces both shading
and geometry computations by performing reprojection using only
image-based buffers. It warps and combines the adjacent rendered
frames using an efficient iterative search on their stored scene depth
and flow. Bidirectional reprojection introduces a small amount of
lag. We perform a user study to investigate this lag, and find that its
effect is minor. We demonstrate substantial performance improve-
ments (3–4×) for a variety of applications, including vertex-bound
and fill-bound scenes, multi-pass effects, and motion blur.
Keywords: real-time rendering, temporal upsampling
1 Introduction
Reprojection is a general approach for improving real-time rendering
by reusing expensive pixel shading from nearby frames [Scherzer
et al. 2011]. It has proven beneficial in popular games. For instance,
reverse reprojection [e.g. Nehab et al. 2007; Scherzer et al. 2007] is
used in Gears of War II to accelerate low-frequency lighting effects,
and in Crysis 2 to antialias distant geometry.
Such data reuse techniques can be broadly categorized based on
three separate algorithmic choices:
Temporal direction: Whether shading data is propagated forward,
backward, or in both directions in animation time;
Data access: Whether pixels are “pushed” (scattered) onto the
current frame, or “pulled” (gathered) from other frames;
Correspondence domain: Whether the motion data (e.g., velocity
vectors) used to reproject the samples is defined over the source
image domain or over the rendered target.
As reviewed in Section 2, different techniques follow different strate-
gies for each of the choices above (see also Table 1).
In this paper, we present reprojection techniques for temporally up-
sampling rendered content by inserting interpolated frames between
pairs of rendered frames. Borrowing terminology from video com-
pression, we refer to rendered frames as intra- or simply I-frames,
and to interpolated frames as bidirectionally predicted- or B-frames.
Our approach offers two major contributions: (1) bidirectional repro-
jection, which combines samples both forward and backward in time,
and (2) image-based reprojection, which establishes reprojection
correspondences based on velocity fields stored in the I-frames.
Temporal direction A fundamental limitation of existing reverse
reprojection techniques [e.g. Nehab et al. 2007] is that they incur a
drop in performance whenever there are disoccluded regions in the
scene—elements visible in the current frame that were not visible in
the preceding frame. This is because such regions must be reshaded
from scratch. Since the number of disoccluded pixels varies over
time, framerates may fluctuate undesirably. In addition, the entire
scene geometry must be processed in order to reshade, incurring
significant overhead in complex scenes.
Our bidirectional reprojection temporally upsamples rendered con-
tent by reusing data from both backward and forward temporal
directions. This provides two clear benefits:
Smooth shading interpolation: The vast majority of pixels in a B-
frame are also visible in both I-frames. This lets us fetch shading
information from both directions and create an interpolated sig-
nal that greatly attenuates the popping artifacts associated with
one-sided reconstruction (i.e., sample-and-hold extrapolation).
This is particularly important for fast changing shading signals
(e.g., dynamic shadows and glossy lighting).
Higher, more stable framerate: Disoccluded regions are ex-
tremely rare, since a pixel lacks reusable data only if it is
occluded in both I-frames.
Thus with bidirectional reprojection we can avoid reshading
and achieve higher and steadier framerates.
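The bidirectional blending behind the first benefit can be sketched as follows. This is a simplified illustration rather than the paper's actual algorithm: the function name `interpolate_bframe` and its parameters are hypothetical, the per-pixel flow is assumed to be defined directly on the B-frame grid with linear motion, gathers use nearest-neighbor sampling, and the occlusion handling and iterative image-space search of the full method are omitted.

```python
import numpy as np

def interpolate_bframe(I0, I1, flow01, t):
    """Blend two I-frames into a B-frame at normalized time t in (0, 1).

    I0, I1:  (H, W, 3) rendered I-frames.
    flow01:  (H, W, 2) per-pixel motion from I0 to I1, in pixels,
             assumed here to be defined on the B-frame grid.
    """
    H, W, _ = I0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Under linear motion, the surface point seen at (x, y) in the B-frame
    # lies t*flow behind it in I0 and (1-t)*flow ahead of it in I1.
    x0 = np.clip(np.rint(xs - t * flow01[..., 0]).astype(int), 0, W - 1)
    y0 = np.clip(np.rint(ys - t * flow01[..., 1]).astype(int), 0, H - 1)
    x1 = np.clip(np.rint(xs + (1 - t) * flow01[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.rint(ys + (1 - t) * flow01[..., 1]).astype(int), 0, H - 1)
    # Gather shading from both I-frames and blend by temporal distance,
    # so the result interpolates smoothly instead of extrapolating one side.
    return (1 - t) * I0[y0, x0] + t * I1[y1, x1]
```

For a static pixel (zero flow), this reduces to a plain linear interpolation of the two I-frame colors, which is why shading changes such as moving shadows appear smoothly blended rather than popping as in one-sided sample-and-hold reuse.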
One downside of bidirectional reprojection is that it introduces a lag
in the resulting image sequence. This lag is not present in forward-
only reprojection schemes. We present a careful analysis of this lag,
showing that it is small (less than one I-frame). Moreover, results of
a user study we conducted allow us to conclude that it is beneficial
to use bidirectional reprojection in a real-time gaming scenario.
For the full text, please download the attachment.