Compression Algorithm of Trigram Language Models Based on Maximum Likelihood Estimation

Description

In this paper we propose an algorithm for reducing the size of back-off N-gram models that degrades performance less than the traditional cutoff method. The algorithm is based on Maximum Likelihood (ML) estimation and produces an N-gram language model with a given number of N-gram probability parameters chosen to minimize the training-set perplexity. To confirm the effectiveness of our algorithm, we apply it to trigram and bigram models and carry out experiments measuring perplexity and word error rate in a dictation system.
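The selection idea described above can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: it prunes a bigram model to a fixed parameter budget by keeping the bigrams whose explicit ML probabilities contribute the most training-set log-likelihood relative to backing off to the unigram estimate. The corpus and all names are hypothetical.

```python
# Hypothetical sketch: likelihood-gain-based selection of explicit
# bigram parameters under a fixed budget (not the paper's algorithm).
from collections import Counter
from math import log

corpus = "a b a b a c a b b a".split()

# ML unigram and bigram estimates from raw counts.
uni = Counter(corpus)
big = Counter(zip(corpus, corpus[1:]))
total = len(corpus)

def p_uni(w):
    return uni[w] / total

def p_big(w1, w2):
    return big[(w1, w2)] / uni[w1]

# Score each bigram by the training-set log-likelihood gain from
# storing its explicit probability instead of backing off to the
# unigram estimate; count * log-probability-ratio.
gain = {
    (w1, w2): c * (log(p_big(w1, w2)) - log(p_uni(w2)))
    for (w1, w2), c in big.items()
}

budget = 2  # number of explicit bigram parameters to keep
kept = sorted(gain, key=gain.get, reverse=True)[:budget]
print(kept)
```

Keeping the highest-gain parameters first is what lets such a model meet a size budget while losing as little training-set likelihood (and hence raising perplexity as little) as possible; the cutoff method, by contrast, drops all N-grams below a count threshold regardless of their likelihood contribution.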
