Selective training with generated samples from generative language models for generalized zero-shot text classification

Bibliographic Information

Other Title
  • 言語モデルから生成されたサンプルを選択的に利用する一般化ゼロショットテキスト分類 (Generalized zero-shot text classification that selectively uses samples generated from language models)

Description

Generalized zero-shot text classification (GZSTC) is the task of classifying text into a set of classes that includes unseen classes, for which no labeled training data are available. GZSTC is widely applied to tasks such as news and product classification. One existing approach to zero-shot text classification uses a language model to generate pseudo samples for the unseen classes and treats them as training data. However, because these approaches rely on language models pre-trained on data from a wide range of domains, they cannot restrict generation to samples that match the target domain, and the resulting out-of-domain samples adversely affect the training of the classifier. In this paper, we propose a GZSTC method that improves classification performance by removing the out-of-domain samples generated by the language model and building a dataset consisting only of samples that correspond to the target domain. Experiments on real-world data show that the proposed method improves classification performance over the baseline.
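
The abstract outlines a generate-then-filter pipeline: generate pseudo samples for unseen classes with a language model, discard out-of-domain samples, and train a classifier on the rest. The following is a minimal sketch of that idea, not the paper's implementation: it assumes GPT-2 via Hugging Face transformers as the generator, TF-IDF cosine similarity to the seen-class texts as a hypothetical domain score, a hand-picked threshold, and a logistic-regression classifier; the paper's actual models and filtering criterion may differ.

```python
# Sketch of generate-then-filter GZSTC (assumed components, not the paper's).
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Labeled in-domain data for the seen classes (toy example).
seen_texts = ["the team won the championship game",
              "stocks fell after the earnings report"]
seen_labels = ["sports", "finance"]
unseen_classes = ["weather"]

# 1. Generate pseudo samples for each unseen class with a generative LM.
generator = pipeline("text-generation", model="gpt2")
pseudo = []
for cls in unseen_classes:
    outputs = generator(f"A short news article about {cls}:",
                        max_new_tokens=40, num_return_sequences=5,
                        do_sample=True)
    pseudo += [(o["generated_text"], cls) for o in outputs]

# 2. Filter: keep only pseudo samples close enough to the target domain,
#    scored here by max TF-IDF cosine similarity to the seen-class corpus.
vec = TfidfVectorizer().fit(seen_texts + [t for t, _ in pseudo])
seen_mat = vec.transform(seen_texts)
THRESHOLD = 0.05  # hypothetical value; would be tuned on validation data
kept = [(t, c) for t, c in pseudo
        if cosine_similarity(vec.transform([t]), seen_mat).max() >= THRESHOLD]

# 3. Train the final classifier on seen data plus the filtered pseudo samples.
texts = seen_texts + [t for t, _ in kept]
labels = seen_labels + [c for _, c in kept]
clf = LogisticRegression(max_iter=1000).fit(vec.transform(texts), labels)
print(clf.predict(vec.transform(["heavy rain is expected tomorrow"])))
```

In this sketch the filtering step (2) is the part the abstract emphasizes: without it, all generated texts, including off-domain ones, would enter the training set in step 3.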
