Neural Source-Filter Waveform Models for Statistical Parametric Speech Synthesis
Description
Neural waveform models such as WaveNet have demonstrated better performance than conventional vocoders for statistical parametric speech synthesis. As an autoregressive (AR) model, WaveNet is limited by a slow sequential waveform generation process. Some newer models based on the inverse-autoregressive flow (IAF) can generate a whole waveform in a one-shot manner, but these IAF-based models require sequential transformation during training, which severely slows down training. Other models, such as Parallel WaveNet and ClariNet, combine the benefits of AR and IAF-based models: they train an IAF student by transferring knowledge from a pre-trained AR teacher, without any sequential transformation. However, both models require additional training criteria, and their implementation is prohibitively complicated. We propose a framework for neural source-filter (NSF) waveform modeling that uses neither AR nor IAF-based approaches. This framework requires only three components for waveform generation: a source module that generates a sine-based excitation signal, a non-AR dilated-convolution-based filter module that transforms the excitation into a waveform, and a condition module that pre-processes the acoustic features for the source and filter modules. The framework minimizes spectral-amplitude distances for model training, which can be implemented efficiently using short-time Fourier transform (STFT) routines. Under this framework, we designed three NSF models and compared them with WaveNet. The NSF models generated waveforms at least 100 times faster than WaveNet, and the quality of the synthetic speech from the best NSF model was comparable to or better than that from WaveNet.
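The spectral-amplitude training objective described above can be sketched in NumPy: take framed STFT amplitudes of the natural and generated waveforms and compare them in the log domain. This is an illustrative approximation, not the authors' implementation; the frame length, hop size, FFT size, and the use of a single resolution are assumptions for the sake of a minimal example.

```python
import numpy as np

def stft_amplitude(x, frame_len=320, hop=80, n_fft=512):
    """Amplitude spectrogram of a 1-D waveform via framed, windowed FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft with n=n_fft zero-pads each frame to the FFT size
    return np.abs(np.fft.rfft(frames, n=n_fft, axis=1))

def spectral_amplitude_distance(x_gen, x_nat, eps=1e-5):
    """Mean squared log-spectral-amplitude distance between two waveforms."""
    a_gen = stft_amplitude(x_gen)
    a_nat = stft_amplitude(x_nat)
    return np.mean((np.log(a_gen + eps) - np.log(a_nat + eps)) ** 2)

# Identical waveforms yield zero distance; differing ones a positive value.
t = np.arange(16000) / 16000.0
x = np.sin(2 * np.pi * 220.0 * t)
print(spectral_amplitude_distance(x, x))  # 0.0
```

In practice the paper computes such distances with differentiable STFT routines inside the training graph, typically at multiple frame/FFT configurations, so the loss gradients flow back through the filter and source modules.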
Accepted to IEEE/ACM TASLP. Note: this paper presents follow-up work to our ICASSP paper. Based on the h-NSF introduced in this work, we proposed an h-sinc-NSF model and published a third paper at SSW 10 (https://www.isca-speech.org/archive/SSW_2019/pdfs/SSW10_O_1-1.pdf).
Journal
- IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 402-415, 2020
- Institute of Electrical and Electronics Engineers (IEEE)
Keywords
- FOS: Computer and information sciences
- Sound (cs.SD)
- neural network
- Machine Learning (stat.ML)
- Computer Science - Sound
- Speech synthesis
- Statistics - Machine Learning
- Audio and Speech Processing (eess.AS)
- short-time Fourier transform
- waveform model
- FOS: Electrical engineering, electronic engineering, information engineering
- Electrical Engineering and Systems Science - Audio and Speech Processing
Details
- CRID: 1360286995362879104
- ISSN: 2329-9304, 2329-9290
- Article Type: journal article
- Data Source: Crossref, KAKEN, OpenAIRE