Compression Algorithm of Trigram Language Models Based on Maximum Likelihood Estimation
Description
In this paper we propose an algorithm for reducing the size of back-off N-gram models with less impact on performance than the traditional cutoff method. The algorithm is based on Maximum Likelihood (ML) estimation and produces an N-gram language model with a given number of N-gram probability parameters that minimizes the training-set perplexity. To confirm the effectiveness of the algorithm, we apply it to trigram and bigram models and carry out experiments in terms of perplexity and word error rate in a dictation system.
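The abstract describes choosing a fixed budget of N-gram probability parameters so that training-set perplexity is minimized, rather than pruning by raw count cutoffs. As a rough, illustrative sketch only (not the paper's actual algorithm), the code below ranks explicit trigram parameters by their training-set log-likelihood gain over the back-off estimate and keeps the top-k; the names `trigram_counts`, `bigram_counts`, `backoff_prob`, and `k` are hypothetical inputs assumed for this example.

```python
import math


def select_trigram_params(trigram_counts, bigram_counts, backoff_prob, k):
    """Illustrative sketch: keep the k explicit trigram parameters whose
    presence most improves training-set log-likelihood versus backing off.

    trigram_counts: dict mapping (w1, w2, w3) -> count in the training set
    bigram_counts:  dict mapping (w1, w2) -> count in the training set
    backoff_prob:   callable (w2, w3) -> probability used when backing off
    k:              number of explicit trigram parameters to retain
    """
    gains = []
    for (w1, w2, w3), count in trigram_counts.items():
        p_ml = count / bigram_counts[(w1, w2)]   # ML trigram probability
        p_bo = backoff_prob(w2, w3)              # probability if this trigram backs off
        # Training-set log-likelihood gain from keeping the explicit parameter
        gain = count * (math.log(p_ml) - math.log(p_bo))
        gains.append(((w1, w2, w3), gain))

    gains.sort(key=lambda item: item[1], reverse=True)
    return {trigram for trigram, _ in gains[:k]}
```

Trigrams outside the returned set would be scored through the back-off distribution, so the retained parameter budget is spent where it reduces training-set perplexity the most under these assumptions.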
Publication
- 5th International Conference on Spoken Language Processing (ICSLP 1998), pp. 1683-1686, November 1998. ISCA.
Details
- CRID: 1050858784329811840
- NII Article ID: 10020648410, 120006659340
- HANDLE: 10061/8101
- Text language code: en
- Material type: conference paper
- Data source: IRDB, Crossref, CiNii Articles