Audio-Visual Tracking System for Multi-Modal Interface
- ZOTKIN Dmitry (University of Maryland, ATR)
- TAKAHASHI Kazuhiko (Yamaguchi University, ATR)
- YOTSUKURA Tatsuo (Seikei University, ATR)
- MORISHIMA Shigeo (Seikei University, ATR)
- TETSUTANI Nobuji (ATR)
Abstract
In this paper, a front-end system that uses audio and video information to track people and other sound sources in an ordinary room is developed. A microphone array determines the spatial location of a sound; an active video camera acquires an image of the area where the sound is detected, detects people in the image using skin color, and can zoom in on and track a speaker. Several add-ons to the system include visualization tools such as on-screen displays of waveforms, correlation plots, spectrum plots, spatial acoustic energy distributions, and running time-frequency acoustic energy plots, as well as real-time beamforming with real-time output to headphones. The system can serve as a front end for non-encumbering human-computer interaction by audio and video means.
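The record does not reproduce the paper's implementation, but microphone-array sound localization of the kind described is typically built on time-delay-of-arrival estimation between microphone pairs. A minimal sketch using the generalized cross-correlation with phase transform (GCC-PHAT), a common weighting for reverberant rooms, might look like the following (NumPy-based; the function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref`
    via generalized cross-correlation with phase transform."""
    n = len(sig) + len(ref)                  # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                   # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:                  # optionally limit to physical delays
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic check: delay one white-noise channel by 5 samples at 16 kHz.
fs = 16000
x = np.random.default_rng(0).standard_normal(4096)
y = np.concatenate((np.zeros(5), x[:-5]))    # second mic hears the source later
tau = gcc_phat(y, x, fs)                     # tau * fs is approximately 5
```

Given a known microphone spacing d and sound speed c, such a delay maps to a bearing angle via arcsin(c * tau / d), which is how an array estimate can steer an active camera toward the detected source.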
Journal

The Journal of the Institute of Image Electronics Engineers of Japan, 30 (4), 452-463, 2001
Published by the Institute of Image Electronics Engineers of Japan
Details
- CRID: 1390282679587557504
- NII Article ID: 10010070462
- NII Book ID: AN00041650
- ISSN: 13480316, 02859831
- NDL BIB ID: 5877832
- Text Lang: en
- Data Source: JaLC, NDL, CiNii Articles
- Abstract License Flag: Disallowed