Feature-Based Learning Hidden Unit Contributions for Domain Adaptation of RNN-LMs
Description
In recent years, many approaches have been proposed for domain adaptation of neural network language models. These methods fall into two categories. The first is model-based adaptation, which creates a domain-specific language model by re-training the network weights on in-domain data; this requires domain annotation in both the training and test data. The second is feature-based adaptation, which uses topic features, mainly to adapt the biases of the network's input or output layers in an unsupervised manner. Recently, a scheme called learning hidden unit contributions (LHUC) was proposed for acoustic model adaptation. We propose applying this scheme to feature-based domain adaptation of recurrent neural network language models (RNN-LMs). In addition, we investigate combining this approach with bias-based domain adaptation. In experiments on a corpus based on TED talks and on the CSJ lecture corpus, we report perplexity and speech recognition results. Our proposed method consistently outperforms a non-adapted baseline, and the combined approach improves on pure bias adaptation.
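The paper itself does not include an implementation. As a rough sketch of the idea it describes, the PyTorch snippet below shows how per-unit LHUC scaling factors could be predicted from a topic feature and combined with output-layer bias adaptation in an LSTM language model. All names (`FeatureLHUCRNNLM`, `topic_dim`, etc.), layer sizes, and the choice of LSTM are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class FeatureLHUCRNNLM(nn.Module):
    """Sketch: an LSTM-LM whose hidden activations are rescaled by LHUC
    gates predicted from a per-utterance topic feature (e.g. an LDA
    posterior), with optional output-layer bias adaptation."""

    def __init__(self, vocab_size=10000, embed_dim=256,
                 hidden_dim=512, topic_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Feature-based LHUC: map topic feature -> per-unit scaling factors.
        self.lhuc = nn.Linear(topic_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # Bias-based adaptation of the output layer from the same feature.
        self.out_bias = nn.Linear(topic_dim, vocab_size, bias=False)

    def forward(self, tokens, topic):
        # tokens: (batch, seq_len) word ids; topic: (batch, topic_dim)
        h, _ = self.lstm(self.embed(tokens))
        # LHUC re-parameterisation: amplitudes lie in (0, 2) and equal 1.0
        # when the linear output is zero, so the unadapted model is a
        # special case of the adapted one.
        scale = 2.0 * torch.sigmoid(self.lhuc(topic))
        h = h * scale.unsqueeze(1)  # broadcast the gates over time steps
        # Combined approach: LHUC scaling plus output-bias adaptation.
        return self.out(h) + self.out_bias(topic).unsqueeze(1)


# Usage with random inputs, purely to show the expected tensor shapes.
model = FeatureLHUCRNNLM()
logits = model(torch.randint(0, 10000, (4, 20)), torch.rand(4, 50))
print(logits.shape)  # torch.Size([4, 20, 10000])
```

Because the gates are a function of the topic feature rather than free per-domain parameters, this variant needs no domain labels at test time, which is the unsupervised property the abstract attributes to feature-based adaptation.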
Published in
- 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 1692-1696, IEEE, 2018-11-01