1 Introduction
Video composition is an indispensable technique in the production of many types of videos; nevertheless, it remains a challenging problem when the source video is captured without a blue or green screen, especially when the object to be pasted from one video into another does not have a clear boundary (e.g., water, gas, or fire). We call such a region a secondary foreground, and we propose a simple composition method in which foreground, background, and secondary-foreground weights are determined using a geodesic distance transform [Criminisi et al. 2010]. The method runs in real time, yet produces convincing results even with difficult secondary foregrounds.
2 Method
Figure 1(b) shows one of the user-defined trimaps for the source video, which is the usual starting point for image and video matting. Let the binary masks M_f and M_b be one inside the foreground and background regions, respectively. We can compute the geodesic distance transform [Criminisi et al. 2010] for each region:

D_f(x) = D(x; M_f, ∇I),   D_b(x) = D(x; M_b, ∇I).

Figure 1(c) shows the resulting geodesic distances D_f and D_b in blue and red, respectively. Intuitively, D_f(x) is the distance from a point x to the boundary of the foreground, and D_b(x) is the distance from x to the boundary of the background.
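As a concrete illustration of this step, the sketch below computes a gradient-weighted geodesic distance with a simple Dijkstra expansion over a 4-connected pixel grid. The weight gamma, the grid connectivity, and the use of per-step intensity differences in place of the exact gradient term are illustrative assumptions, not the formulation of [Criminisi et al. 2010].

```python
# Minimal sketch of a gradient-weighted geodesic distance transform.
# gamma and the 4-connected grid are illustrative choices.
import heapq
import numpy as np

def geodesic_distance(mask, image, gamma=100.0):
    """Distance from every pixel to the region where mask is True,
    penalizing paths that cross strong intensity edges."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    dist[mask] = 0.0
    heap = [(0.0, y, x) for y, x in zip(*np.nonzero(mask))]
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Unit step length combined with the intensity jump.
                step = np.sqrt(1.0 + (gamma * (image[ny, nx] - image[y, x])) ** 2)
                nd = d + step
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist

# D_f and D_b from the trimap masks and a grayscale frame I in [0, 1]:
# D_f = geodesic_distance(M_f, I); D_b = geodesic_distance(M_b, I)
```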
The key feature of our approach is the additional secondary-foreground region. Let P_f(x), P_b(x), and P_s(x) respectively be the probabilities that a pixel x belongs to the foreground, the background, or the secondary foreground. We can express these probabilities as follows:
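A minimal sketch of one plausible weighting is given below, assuming inverse-distance normalization and a third mask M_s for the secondary foreground from which D_s is computed like D_f and D_b; both the normalization and M_s are assumptions here, not the paper's definition.

```python
# Hypothetical weighting, not the paper's equation: normalize inverse
# geodesic distances so that a pixel near a region's seed gets a high
# probability for that region. eps avoids division by zero at the seeds.
import numpy as np

def region_probabilities(D_f, D_b, D_s, eps=1e-6):
    w = np.stack([1.0 / (D_f + eps), 1.0 / (D_b + eps), 1.0 / (D_s + eps)])
    P = w / w.sum(axis=0, keepdims=True)
    return P[0], P[1], P[2]  # P_f, P_b, P_s
```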