Increasing pose comprehension through augmented reality reenactment
Description
Standard video does not capture the 3D aspect of human motion, which is important for comprehending motion that may otherwise be ambiguous. In this paper, we apply augmented reality (AR) techniques to give viewers insight into 3D motion by allowing them to manipulate the viewpoint of a motion sequence of a human actor using a handheld mobile device. The motion sequence is captured using a single RGB-D sensor, which is easier for a general user but presents the unique challenge of synthesizing novel views from images captured at a single viewpoint. To address this challenge, our proposed system reconstructs a 3D model of the actor, then uses a combination of the actor's pose and viewpoint similarity to find appropriate images with which to texture it. The system then renders the 3D model on the mobile device, using visual SLAM to build a map of the original capturing environment and estimate the device's camera pose relative to it. We call this novel view of a moving human actor a reenactment, and evaluate its usefulness and quality with an experiment and a survey.
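The texture-selection step described above can be sketched as a frame-scoring problem. The following is a minimal illustrative sketch, not the paper's actual method: the function names, the joint-distance and view-angle metrics, and the weights `w_pose` and `w_view` are all assumptions for illustration.

```python
import math

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding 3D joint positions."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

def view_angle(dir_a, dir_b):
    """Angle in radians between two unit viewing directions."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(dir_a, dir_b))))
    return math.acos(dot)

def best_texture_frame(frames, target_pose, target_view, w_pose=1.0, w_view=0.5):
    """Pick the captured frame whose actor pose and camera viewpoint best
    match the requested novel view, by minimizing a weighted cost
    (hypothetical weighting, not taken from the paper)."""
    costs = [
        w_pose * pose_distance(f["pose"], target_pose)
        + w_view * view_angle(f["view"], target_view)
        for f in frames
    ]
    return min(range(len(frames)), key=costs.__getitem__)
```

A frame that matches both the requested pose and viewing direction scores lowest and is chosen as the texture source; the weights trade off pose fidelity against viewpoint fidelity.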
Journal
- Multimedia Tools and Applications
  76 (1), 1-22, 2015-12-07
  Springer
Details
- CRID
  1050577309353384960
- NII Article ID
  120005867201
- ISSN
  1573-7721
  1380-7501
- HANDLE
  10061/11043
- Text language code
  en
- Material type
  journal article
- Data source type
  IRDB
  Crossref
  CiNii Articles
  KAKEN
  OpenAIRE