Introduction
3D full head synthesis (3DFHS) is of interest in many applications, including gaming and social networks. One well-known related work is [Blanz and Vetter 1999], but it is restricted to synthesizing the face area (not the full head). FaceGen [FaceGen 2011] uses a similar statistical approach and provides a solution by coloring the head area with a single matching color. However, neither approach is automatic: user interaction is needed to manually assign the locations of the feature points. In addition, the computation time for 3D face synthesis is on the order of several minutes on relatively fast PCs.
We present a new approach for automatic, fast, photo-realistic 3D full head synthesis that can be operated from mobile devices. More specifically, we developed an iPhone app that first captures the user's face image using the iPhone's built-in camera and then uploads it to our 3D Face Synthesis (3DFM) web server (see Figure 2 for the system diagram). The server generates a 3D face model of the user and returns it to the iPhone for display, with the entire process completed in a matter of seconds. The user can then choose any desired image as the background, and rotate and scale the displayed 3D face on the iPhone (see sample results in Figure 1 (b)-(e)).
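The capture-upload-synthesize-display pipeline described above can be sketched in outline as follows. This is a minimal illustrative sketch only: the function names, the mesh/texture representation, and the stand-in bodies are all hypothetical, since the paper does not specify the server API or data formats.

```python
# Hypothetical sketch of the three-stage pipeline: capture on the device,
# synthesize on the server, display the returned model on the device.
# All names and data formats here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class HeadModel:
    """Stand-in for the 3D full head model returned by the server."""
    vertices: list   # 3D mesh vertices
    texture: bytes   # photo-realistically synthesized head texture


def capture_face_image() -> bytes:
    """Stand-in for grabbing a frame from the phone's built-in camera."""
    return b"jpeg-bytes"


def synthesize_full_head(image: bytes) -> HeadModel:
    """Stand-in for the server-side 3D full head synthesis step."""
    return HeadModel(vertices=[(0.0, 0.0, 0.0)], texture=b"tex")


def run_pipeline() -> HeadModel:
    image = capture_face_image()         # 1. capture on the device
    model = synthesize_full_head(image)  # 2. upload; server synthesizes model
    return model                         # 3. model returned for display


model = run_pipeline()
```

In the actual system, step 2 would be an HTTP upload to the web server and step 3 a download of the generated model; the sketch keeps both local so the flow is self-contained.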
Compared to [FaceGen 2011], our system does not require manual assignment of the feature points. The result is achieved in near real time (a few seconds) on our test machine, versus about 2 minutes using FaceGen running on the same machine. We have also applied photo-realistic texture synthesis and mapping to the head area to improve visual realism. Experiments show that good results can be obtained with no user intervention.