AudioHaptics : Audio and Haptic Rendering Based on Physical Model
- YANO Hiroaki, University of Tsukuba
- IWATA Hiroo, University of Tsukuba
Bibliographic Information
- Other Title
- AudioHaptics: Fusion of Auditory and Haptic Senses Based on Physical Laws (物理法則に基づく聴覚と力覚の融合)
Abstract
Most current virtual reality systems with auditory feedback can give users a strong sense of presence. In a VR (Virtual Reality) environment, interaction with virtual objects is a common event. Some systems produce sounds for interactions between virtual objects, but they generate them by modulating prerecorded sounds, and that method has inherent limitations. A sound generation method based on physical models should therefore be considered. In this paper, we propose a method of synthesizing haptic and auditory sensations based on a physical model. We have developed an auditory environment with haptic sensation, in which a speaker is mounted on the end effector of the HapticMaster. The FEM (Finite Element Method) is used to calculate the vibration of a virtual object, and the sound pressure at the speaker position is computed from that vibration in real time. The effectiveness of the method is evaluated through user experiments.
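To make the abstract's pipeline concrete, the following is a minimal sketch of physically based sound synthesis: a virtual object is discretized into point masses joined by springs (a lumped 1-D finite-element model), a virtual impact excites it, and the displacement of one node is read off at audio rate as a stand-in for the sound pressure at the speaker position. This is our illustration only, not the authors' implementation; all parameter values (node count, stiffness, damping) are assumptions chosen for a stable, audible-rate simulation.

```python
# Lumped 1-D finite-element model: N point masses connected by springs,
# fixed at both ends, explicitly time-stepped at the audio sample rate.

N = 32            # number of nodes in the discretized object (assumed)
K = 4.0e4         # spring stiffness between neighbouring nodes (assumed)
M = 1.0e-3        # lumped mass per node (assumed)
DAMP = 2.0        # simple viscous damping coefficient (assumed)
DT = 1.0 / 44100  # time step = one audio sample at 44.1 kHz

def synthesize_impact(samples=2048, hit_node=5, listen_node=20):
    """Return `samples` displacement values of `listen_node` after an
    impulsive hit at `hit_node` (a proxy for the radiated sound signal)."""
    u = [0.0] * N          # node displacements
    v = [0.0] * N          # node velocities
    v[hit_node] = 1.0      # impulsive excitation from the virtual contact
    out = []
    for _ in range(samples):
        for i in range(N):
            # discrete Laplacian with fixed (zero-displacement) boundaries
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < N - 1 else 0.0
            f = K * (left - 2.0 * u[i] + right) - DAMP * v[i]
            v[i] += (f / M) * DT
        for i in range(N):
            u[i] += v[i] * DT
        out.append(u[listen_node])
    return out

signal = synthesize_impact()
```

A real-time system would stream such samples to the speaker on the haptic device's end effector while the same model supplies contact forces for haptic rendering, so sound and force share one physical simulation.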
Journal
- ITE Technical Report, 24.34 (0), 19-24, 2000
- The Institute of Image Information and Television Engineers
Details
- CRID: 1390001204524062208
- NII Article ID: 110003687811
- NII Book ID: AN1059086X
- ISSN: 24241970, 13426893
- NDL BIB ID: 5431293
- Text Lang: ja
- Data Source: JaLC, NDL, CiNii Articles
- Abstract License Flag: Disallowed