Spatial and temporal segmented dense trajectories for gesture recognition

Description

Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition and have demonstrated state-of-the-art results on a variety of datasets. However, when these trajectories are applied to gesture recognition, they struggle to distinguish similar, fine-grained motions. In this paper, we propose a new method in which dense trajectories are computed only within segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]; temporal segmentation divides the video into clips of a fixed number of frames. The proposed method removes background video noise and can recognize similar, fine-grained motions. Because only a few video datasets are available for gesture classification, we have constructed a new gesture dataset and evaluated the proposed method on it. The experimental results show that the proposed method outperforms the original dense trajectories.
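The abstract describes two filtering steps on top of standard dense trajectories: keep only trajectories that lie inside detected body-part regions (spatial segmentation), then group the survivors into fixed-length clips (temporal segmentation). The following is a minimal sketch of that idea, not the authors' implementation; the helper inside_any, the Trajectory container, and the clip_len parameter are illustrative assumptions, and the body-part boxes are assumed to come from a detector such as [2] while the trajectories come from a dense-trajectory extractor such as [1].

    from dataclasses import dataclass

    @dataclass
    class Trajectory:
        start_frame: int   # frame index where the trajectory begins
        points: list       # (x, y) positions, one per tracked frame

    def inside_any(point, boxes):
        """True if (x, y) lies in any body-part box (x0, y0, x1, y1)."""
        x, y = point
        return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes)

    def segment_trajectories(trajectories, part_boxes_per_frame, clip_len=15):
        """Spatially filter trajectories to body-part regions, then bucket
        them into fixed-length temporal clips of clip_len frames."""
        clips = {}
        for traj in trajectories:
            # Spatial segmentation: require every tracked point to stay
            # inside a detected body-part region, which suppresses
            # background motion (a stand-in for the paper's spatial step).
            ok = all(
                inside_any(p, part_boxes_per_frame[traj.start_frame + i])
                for i, p in enumerate(traj.points)
            )
            if not ok:
                continue
            # Temporal segmentation: assign the trajectory to the clip
            # containing its starting frame.
            clips.setdefault(traj.start_frame // clip_len, []).append(traj)
        return clips  # {clip_index: [trajectories]}

Presumably, each clip's surviving trajectories would then be encoded (for example, with a bag-of-features representation, as in the original dense-trajectory pipeline) and passed to a classifier; the paper does not specify those details in the abstract.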


Details

  • CRID
    1873679867633568256
  • DOI
    10.1117/12.2266859
  • ISSN
    0277-786X
  • Data source
    • OpenAIRE
