Text-To-Speech Synthesis Based on Latent Variable Conversion Using Diffusion Probabilistic Model and Variational Autoencoder
Description
Text-to-speech synthesis (TTS) is the task of converting text into speech. Two of the factors that have driven progress in TTS are advances in probabilistic modeling and in latent representation learning. We propose a TTS method based on latent variable conversion using a diffusion probabilistic model and a variational autoencoder (VAE). Our TTS method consists of a VAE-based waveform model, a diffusion model that predicts the distribution of the waveform model's latent variables from text, and an alignment model that learns alignments between the text and speech latent sequences. Our method integrates diffusion with the VAE by modeling both the mean and variance parameters with diffusion, where the target distribution is obtained by approximation from the VAE. This latent variable conversion framework potentially allows various latent feature extractors to be incorporated flexibly. Our experiments show that our method is robust to linguistic labels with poor orthography and to alignment errors.
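The abstract only sketches the architecture at a high level. The following minimal PyTorch-style sketch illustrates the latent variable conversion idea as described: a VAE posterior over waveform latents supplies the target mean and variance, and a text-conditioned diffusion model regresses those parameters. This is not the authors' implementation; all module names, dimensions, the alignment handling (here assumed to be pre-aligned text features), and the simplified noise schedule are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of latent variable
# conversion: a VAE encodes acoustic frames into posterior mean/log-variance,
# and a text-conditioned diffusion model learns to predict those parameters.

import torch
import torch.nn as nn

LATENT_DIM = 64   # assumed latent size
TEXT_DIM = 128    # assumed text feature size
N_STEPS = 100     # assumed number of diffusion steps


class WaveformVAEEncoder(nn.Module):
    """Toy VAE encoder: maps acoustic frames to posterior mean/log-variance."""

    def __init__(self, frame_dim: int = 80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(frame_dim, 256), nn.ReLU())
        self.to_mean = nn.Linear(256, LATENT_DIM)
        self.to_logvar = nn.Linear(256, LATENT_DIM)

    def forward(self, frames: torch.Tensor):
        h = self.net(frames)
        return self.to_mean(h), self.to_logvar(h)


class LatentDiffusion(nn.Module):
    """Toy denoiser: given a noisy latent, a step index, and text features,
    predicts the clean latent mean and log-variance."""

    def __init__(self):
        super().__init__()
        self.step_embed = nn.Embedding(N_STEPS, 64)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + TEXT_DIM + 64, 256),
            nn.ReLU(),
            nn.Linear(256, 2 * LATENT_DIM),  # -> predicted mean and log-variance
        )

    def forward(self, noisy_latent, step, text_feat):
        h = torch.cat([noisy_latent, text_feat, self.step_embed(step)], dim=-1)
        mean, logvar = self.net(h).chunk(2, dim=-1)
        return mean, logvar


def training_step(encoder, denoiser, frames, text_feat):
    """One illustrative step: the VAE posterior gives the target distribution,
    and the diffusion model regresses its parameters from a noised latent
    sample plus text conditioning."""
    target_mean, target_logvar = encoder(frames)
    z = target_mean + torch.randn_like(target_mean) * (0.5 * target_logvar).exp()

    step = torch.randint(0, N_STEPS, (frames.size(0),))
    noise_level = (step.float() + 1) / N_STEPS            # simplified schedule
    noisy_z = z + noise_level.unsqueeze(-1) * torch.randn_like(z)

    pred_mean, pred_logvar = denoiser(noisy_z, step, text_feat)
    loss = nn.functional.mse_loss(pred_mean, target_mean.detach()) \
         + nn.functional.mse_loss(pred_logvar, target_logvar.detach())
    return loss


if __name__ == "__main__":
    enc, den = WaveformVAEEncoder(), LatentDiffusion()
    frames = torch.randn(4, 80)        # dummy acoustic frames
    text = torch.randn(4, TEXT_DIM)    # dummy aligned text features
    print(training_step(enc, den, frames, text).item())
```

In this reading, predicting both mean and variance (rather than only a point estimate) is what lets the diffusion model match the full latent distribution approximated by the VAE; at synthesis time the waveform decoder would be driven by latents sampled from the predicted distribution.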
Submitted to ICASSP 2023
Published in
- ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5, 2023-06-04, IEEE
Keywords
- FOS: Computer and information sciences
- FOS: Electrical engineering, electronic engineering, information engineering
- Computation and Language (cs.CL)
- Machine Learning (stat.ML)
- Audio and Speech Processing (eess.AS)
Details
- CRID: 1870020692876020992
- Data source type: OpenAIRE