Speech Synthesis by Mimicking Human Speech Production (Feature Articles: Newer Research Methods in Phonetic Sciences)

  • HONDA Masaaki
    Information Science Research Laboratory, NTT Basic Research Laboratories

Bibliographic Information

Other Title
  • 調音モデルに基づく音声合成(特集: 音声研究の新しい手法) [Speech synthesis based on an articulatory model (Feature Articles: Newer Research Methods in Phonetic Sciences)]
  • 調音モデルに基づく音声合成 [Speech synthesis based on an articulatory model]

Description

Speech is produced by articulating speech organs such as the jaw, tongue, and lips. We have developed an articulatory-based speech synthesis model that converts a phoneme string into a continuous acoustic signal by mimicking the human speech production process. This paper describes a computational model of speech production that involves a motor process, which generates articulatory movements from a motor-task sequence, and an articulatory-to-acoustic mapping, which determines the vocal-tract acoustic characteristics. A method for recovering articulatory parameters from speech acoustics is also described within the framework of articulatory-based speech analysis and synthesis.
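
To make the pipeline in the abstract concrete, the sketch below walks a hypothetical motor-task sequence through the three stages it names: a motor process that turns stepwise targets into continuous articulatory trajectories, an articulatory-to-acoustic mapping (here only a placeholder linear map to the first two formant frequencies), and a simple source-filter stage that renders a waveform. The function names, the linear map, and all parameter values are illustrative assumptions and are not the model reported in the paper.

import numpy as np

FS = 16000  # sampling rate (Hz); illustrative choice, not from the paper

def motor_to_articulation(targets, frames_per_target=50, alpha=0.1):
    """Smooth a stepwise motor-task sequence into continuous articulatory
    trajectories with a first-order dynamical model (a crude stand-in for
    the motor process described in the abstract)."""
    targets = np.asarray(targets, dtype=float)
    commands = np.repeat(targets, frames_per_target, axis=0)  # stepwise command signal
    traj = np.zeros_like(commands)
    state = commands[0].copy()
    for t, cmd in enumerate(commands):
        state = state + alpha * (cmd - state)  # exponential approach to each target
        traj[t] = state
    return traj

def articulation_to_formants(traj):
    """Placeholder articulatory-to-acoustic mapping: a fixed linear map from two
    abstract articulatory parameters to the first two formant frequencies (Hz)."""
    W = np.array([[-500.0,  100.0],
                  [ 200.0, -900.0]])
    b = np.array([700.0, 1700.0])
    return traj @ W.T + b

def resonator_coeffs(freq, bw=100.0):
    """Feedback coefficients of a digital second-order resonator."""
    r = np.exp(-np.pi * bw / FS)
    theta = 2.0 * np.pi * freq / FS
    return 2.0 * r * np.cos(theta), -r * r

def synthesize(formants, f0=120.0, frame_len=160):
    """Excite a cascade of two time-varying resonators with an impulse train."""
    n_samples = len(formants) * frame_len
    source = np.zeros(n_samples)
    source[::int(FS / f0)] = 1.0           # impulse-train glottal source
    out = np.zeros(n_samples)
    y1 = np.zeros(2)                       # (y[n-1], y[n-2]) of resonator 1
    y2 = np.zeros(2)                       # (y[n-1], y[n-2]) of resonator 2
    for n in range(n_samples):
        f1, f2 = formants[n // frame_len]
        a1, b1 = resonator_coeffs(f1)
        a2, b2 = resonator_coeffs(f2)
        s1 = source[n] + a1 * y1[0] + b1 * y1[1]
        y1[:] = s1, y1[0]
        s2 = s1 + a2 * y2[0] + b2 * y2[1]
        y2[:] = s2, y2[0]
        out[n] = s2
    return out / (np.max(np.abs(out)) + 1e-9)

# Hypothetical motor-task sequence for three vowel-like targets.
tasks = [[0.2, 0.8], [0.8, 0.2], [0.5, 0.5]]
waveform = synthesize(articulation_to_formants(motor_to_articulation(tasks)))

Keeping each stage as a separate function mirrors the decomposition in the abstract, so the placeholder linear map could be swapped for a measured or statistically learned articulatory-to-acoustic mapping without touching the other stages.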
