Increasing pose comprehension through augmented reality reenactment

Open access · Cited by: 1 · References: 31

Description

Standard video does not capture the 3D aspect of human motion, which is important for comprehending motion that may otherwise be ambiguous. In this paper, we apply augmented reality (AR) techniques to give viewers insight into 3D motion by letting them manipulate the viewpoint of a human actor's motion sequence using a handheld mobile device. The motion sequence is captured with a single RGB-D sensor, which is easy for a general user to operate but poses the unique challenge of synthesizing novel views from images captured at a single viewpoint. To address this challenge, our proposed system reconstructs a 3D model of the actor and then uses a combination of the actor's pose and viewpoint similarity to find appropriate images with which to texture it. The system renders the 3D model on the mobile device, using visual SLAM to build a map of the environment and estimate the device's camera pose relative to the original capturing environment. We call this novel view of a moving human actor a reenactment, and we evaluate its usefulness and quality with an experiment and a survey.
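The frame-selection step described above — scoring captured frames by a combination of the actor's pose and viewpoint similarity — could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the weights, the joint-distance metric, and the frame dictionary layout are all assumptions made for the example.

```python
import math

def pose_distance(pose_a, pose_b):
    # Mean Euclidean distance between corresponding 3D joint positions
    # (an assumed, simple pose-similarity metric).
    return sum(math.dist(ja, jb) for ja, jb in zip(pose_a, pose_b)) / len(pose_a)

def view_angle(dir_a, dir_b):
    # Angle in radians between two unit viewing-direction vectors.
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    return math.acos(max(-1.0, min(1.0, dot)))

def select_texture_frame(frames, target_pose, target_view, w_pose=1.0, w_view=1.0):
    # Pick the captured frame whose actor pose and camera viewpoint best
    # match the requested rendering; lower weighted score is better.
    def score(frame):
        return (w_pose * pose_distance(frame["pose"], target_pose)
                + w_view * view_angle(frame["view_dir"], target_view))
    return min(frames, key=score)

# Hypothetical captured frames: two joint positions and a unit view direction each.
frames = [
    {"id": 0, "pose": [(0, 0, 0), (1, 0, 0)], "view_dir": (0, 0, 1)},
    {"id": 1, "pose": [(0, 0, 0), (0, 1, 0)], "view_dir": (1, 0, 0)},
]
best = select_texture_frame(frames, [(0, 0, 0), (1, 0, 0)], (0, 0, 1))
print(best["id"])  # → 0
```

In practice the system would search over many captured RGB-D frames, and the pose term would likely use the skeleton estimated from the depth data, but the weighted-sum structure conveys the idea of trading off pose match against viewpoint match.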
