Real-Time Human Tracking by Audio-Visual Integration for Humanoids: Integration of Active Audition and Face Recognition

  • Nakadai Kazuhiro
    Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corp.
  • Hidai Ken-ichi
    Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corp.
  • Mizoguchi Hiroshi
    Faculty of Science and Technology, Science University of Tokyo
  • Okuno Hiroshi G.
    Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corp. Graduate School of Informatics, Kyoto University
  • Kitano Hiroaki
    Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corp.

Bibliographic Information

Other Title
  • ヒューマノイドを対象にした視聴覚統合による実時間人物追跡―アクティブオーディションと顔認識の統合―

Abstract

This paper describes a real-time human tracking system based on audio-visual integration for the humanoid SIG. The essential idea for real-time, robust tracking is hierarchical integration of multi-modal information. The system creates three kinds of streams: auditory, visual, and associated streams. An auditory stream, carrying sound-source direction, is formed as a temporal series of events from the audition module, which localizes multiple sound sources and cancels motor noise using a pair of microphones. A visual stream, carrying a face ID and its 3D position, is formed as a temporal series of events from the vision module, which combines face detection, face identification, and face localization by stereo vision. Auditory and visual streams are associated into an associated stream, a higher-level representation, according to their proximity. Because the associated stream disambiguates partially missing information in the auditory or visual streams, the “focus-of-attention” control of SIG works well enough for robust human tracking. These processes are executed in real time with a delay of 200 msec on off-the-shelf PCs connected via TCP/IP. As a result, robust human tracking is attained even when the person is visually occluded or simultaneous speeches occur.
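The abstract describes pairing auditory events (sound-source direction) with visual events (face ID and 3D position) into an associated stream when the two agree. The sketch below illustrates that association step under stated assumptions: the class names, the angular proximity threshold, and the azimuth-from-position conversion are illustrative choices, not the authors' actual implementation.

```python
# Minimal sketch of audio-visual stream association by directional proximity.
# All names and the 10-degree threshold are assumptions for illustration only.
import math
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class AuditoryEvent:
    azimuth_deg: float            # sound-source direction from the audition module


@dataclass
class VisualEvent:
    face_id: str                  # identity from face identification
    position_xyz: Tuple[float, float, float]  # 3D position from stereo vision


@dataclass
class AssociatedStream:
    face_id: str
    azimuth_deg: float            # fused direction used for focus-of-attention control


def azimuth_of(position_xyz: Tuple[float, float, float]) -> float:
    """Horizontal direction (degrees) of a 3D position in the robot frame."""
    x, y, _ = position_xyz
    return math.degrees(math.atan2(y, x))


def associate(aud: AuditoryEvent,
              vis: VisualEvent,
              threshold_deg: float = 10.0) -> Optional[AssociatedStream]:
    """Fuse an auditory and a visual event when their directions are close enough."""
    vis_azimuth = azimuth_of(vis.position_xyz)
    if abs(aud.azimuth_deg - vis_azimuth) <= threshold_deg:
        # Average the two direction estimates as a simple fusion rule.
        return AssociatedStream(vis.face_id, (aud.azimuth_deg + vis_azimuth) / 2.0)
    return None  # streams stay separate; missing information may be filled in later


# Example: a person seen slightly to the left and heard at a similar azimuth.
aud = AuditoryEvent(azimuth_deg=12.0)
vis = VisualEvent(face_id="person_A", position_xyz=(1.8, 0.4, 1.6))
print(associate(aud, vis))
```

When the two estimates disagree beyond the threshold, the auditory and visual streams remain separate, which is consistent with the hierarchical design: the associated stream is only a higher-level representation layered on top of the unimodal streams.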

Journal

