Data Augmentation Using Pretrained Models in Japanese Grammatical Error Correction

Bibliographic Information

Alternative Title
  • 日本語文法誤り訂正における事前学習済みモデルを用いたデータ増強 (original Japanese title)

Abstract

Grammatical error correction (GEC) is commonly framed as a machine translation task that converts an ungrammatical sentence into a grammatical one. The task requires a large amount of parallel data consisting of pairs of ungrammatical and grammatical sentences; however, for Japanese GEC, only limited large-scale parallel data are available. Data augmentation (DA), which generates pseudo-parallel data, is therefore an active research topic. Most previous studies have focused on generating ungrammatical sentences rather than grammatical ones. To address this gap, this study proposes BERT-DA, a DA algorithm that generates grammatical sentences with a pre-trained BERT model. Our experiments focus on two factors: the source data and the amount of generated data; accounting for both made BERT-DA more effective. In evaluations across multiple domains, the BERT-DA model outperformed the existing system in terms of MaxMatch (M²) and GLEU+.
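The abstract does not spell out how BERT-DA produces grammatical sentences. A common masked-LM augmentation recipe is to mask part of a sentence and let a pretrained BERT fill the gaps, pairing input and output as pseudo-parallel data; the sketch below illustrates that recipe only. The model checkpoint, mask rate, and greedy filling are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of masked-LM data augmentation for GEC, assuming the
# common recipe: mask tokens, let a pretrained BERT fill them, and pair
# the input with the filled output as a pseudo-parallel example.
import random

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # assumed Japanese BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def generate_pseudo_target(sentence: str, mask_rate: float = 0.15) -> str:
    """Mask a fraction of tokens and replace each with BERT's top prediction."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    # Skip [CLS] (first position) and [SEP] (last position) when sampling masks.
    masked = [i for i in range(1, len(ids) - 1) if random.random() < mask_rate]
    for i in masked:
        ids[i] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=ids.unsqueeze(0)).logits[0]
    for i in masked:
        ids[i] = int(logits[i].argmax())  # greedy fill; sampling is also common
    return tokenizer.decode(ids[1:-1], skip_special_tokens=True)


# Pairing a sentence with its BERT-filled counterpart yields one
# pseudo-parallel training example (the example sentence is hypothetical).
source = "私は日本語を勉強するがすきです。"
print((source, generate_pseudo_target(source)))
```

In such a setup, the two factors studied in the paper map directly onto design choices: which corpus the input sentences are drawn from (the source data) and how many pseudo pairs to generate (the amount of generated data).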
