An End-to-End Approach to Joint Social Signal Detection and Automatic Speech Recognition
Description
Social signals such as laughter and fillers are frequently observed in natural conversation, and they play various roles in human-to-human communication. Detecting these events allows transcription systems to produce rich transcriptions and dialogue systems to behave as humans do, for example by laughing in synchrony or listening attentively. We have previously studied an end-to-end approach that directly detects social signals from speech using connectionist temporal classification (CTC), one of the end-to-end sequence labelling models. In this work, we propose a unified framework that integrates social signal detection (SSD) and automatic speech recognition (ASR). We also investigate several methods for creating reference labels for social signals. Experimental evaluations demonstrate that our end-to-end framework significantly outperforms a conventional DNN-HMM system in both SSD performance and character error rate (CER).
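The abstract does not spell out how social signals enter the CTC output; a minimal sketch of one plausible reference-labelling scheme is shown below, in which social signals are added to the CTC vocabulary as dedicated tokens alongside ordinary characters, so a single network emits both the transcription and the detected events. The vocabulary, token names, and helper functions here are illustrative assumptions, not the authors' released code.

```python
# Sketch (assumed, not the paper's implementation): joint SSD+ASR targets for CTC,
# where <laugh> and <filler> are ordinary output tokens next to characters.
import torch
import torch.nn as nn

# Hypothetical vocabulary: CTC blank, a few characters, and social-signal tokens.
vocab = ["<blank>", "a", "b", "c", " ", "<laugh>", "<filler>"]
tok2id = {t: i for i, t in enumerate(vocab)}

def encode(reference_tokens):
    """Map a tokenized reference (characters plus event tags) to CTC target ids."""
    return torch.tensor([tok2id[t] for t in reference_tokens], dtype=torch.long)

# Example reference: a filler before speech and a laugh within it.
targets = encode(["<filler>", "a", "b", " ", "<laugh>", "c"])

# Dummy acoustic-model output: T frames, batch of 1, |vocab| classes.
T, N, C = 50, 1, len(vocab)
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)

ctc = nn.CTCLoss(blank=tok2id["<blank>"])
loss = ctc(log_probs,
           targets.unsqueeze(0),                     # targets shaped (N, S)
           input_lengths=torch.tensor([T]),
           target_lengths=torch.tensor([targets.numel()]))
print(loss.item())
```

At decoding time, such event tokens would be read off the CTC output sequence together with the characters, which is one way a single model could serve both SSD and ASR.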
Published in
- 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6214-6218, 2018-04-01, IEEE