Interactive Facial-Geometric-Feature Animation for Generating Expressions of Novel Faces
-
- YANG Yang
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University; Department of Electrical and Electronic Engineering, The University of Tokushima
-
- YUAN Zejian
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
-
- ZHENG Nanning
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
-
- LIU Yuehu
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
-
- YANG Lei
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University
-
- NISHIO Yoshifumi
- Department of Electrical and Electronic Engineering, The University of Tokushima
Abstract
This paper introduces an interactive expression editing system that allows users to design facial expressions easily. Popular example-based methods construct face models from examples of the target face; their shortcoming is that they cannot create expressions for novel faces, i.e., target faces not previously recorded in the database. We propose a solution to overcome this limitation: an interactive facial-geometric-feature animation system for generating expressions of novel faces. Our system is easy to use: by click-dragging control points of the target face displayed on the screen, users generate unique expressions automatically. To guarantee natural animation results, our animation model employs prior knowledge derived from the expressions of various individuals. One prior is learned from motion vector fields to guarantee effective facial motions; another is learned from the facial shape space to ensure that the result has a realistic facial shape. The interactive animation problem is formulated in a maximum a posteriori (MAP) framework that searches for optimal results by combining the priors with user-defined constraints. We also give an extension of the Motion Propagation (MP) algorithm to infer facial motions for novel target faces from a subset of the control points. Experimental results on different facial animations demonstrate the effectiveness of the proposed method. Moreover, one application of our system is demonstrated, in which users create expressions for facial sketches interactively.
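Under Gaussian assumptions, a MAP formulation of this kind reduces to regularized least squares. The sketch below is a minimal, hypothetical illustration (the function name, covariance matrices, and constraint weighting are assumptions for illustration, not the authors' implementation): a motion prior and a shape prior are combined with soft constraints from the user's dragged control points.

```python
import numpy as np

def map_expression(mu_m, S_m_inv, mu_s, S_s_inv, C, u, w=100.0):
    """Hypothetical MAP estimate of a facial motion field x, minimizing
       (x - mu_m)^T S_m_inv (x - mu_m)     # motion prior (motion vector fields)
     + (x - mu_s)^T S_s_inv (x - mu_s)     # shape prior (facial shape space)
     + w * ||C x - u||^2                   # soft user constraints (dragged points)
    Setting the gradient to zero gives a single linear system."""
    A = S_m_inv + S_s_inv + w * C.T @ C
    b = S_m_inv @ mu_m + S_s_inv @ mu_s + w * C.T @ u
    return np.linalg.solve(A, b)

# Toy usage: zero-mean priors with identity precision, one control point
# (coordinate 0) dragged to 1.0; the solution is pulled toward the constraint.
mu = np.zeros(4)
P = np.eye(4)
C = np.zeros((1, 4)); C[0, 0] = 1.0
x = map_expression(mu, P, mu, P, C, np.array([1.0]), w=100.0)
```

With identity precisions the system is diagonal, so only the constrained coordinate moves: x[0] = 100/102, and the remaining coordinates stay at the prior mean.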
Journal
-
- IEICE Transactions on Information and Systems
-
IEICE Transactions on Information and Systems E94-D (5), 1099-1108, 2011
The Institute of Electronics, Information and Communication Engineers (IEICE)
Details
-
- CRID
- 1390001204379911296
-
- NII Article ID
- 10029507238
-
- NII Bibliographic ID
- AA10826272
-
- ISSN
- 17451361
- 09168532
-
- Text Language Code
- en
-
- Data Source Type
-
- JaLC
- Crossref
- CiNii Articles
-
- Abstract License Flag
- Not available