-
- Ma Congda
- Tokyo Institute of Technology
-
- Zhao Tianyu
- rinna Co. Ltd.
-
- Shing Makoto
- Stability AI Ltd.
-
- Sawada Kei
- rinna Co. Ltd.
-
- Okumura Manabu
- Tokyo Institute of Technology
Description
In a controllable text generation dataset, unannotated attributes may provide irrelevant learning signals to models that use them for training, thereby degrading their performance. We propose focused prefix tuning (FPT) to mitigate this problem and enable control to focus on the desired attribute. Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks. In multi-attribute control tasks, FPT achieves control accuracy comparable to that of the state-of-the-art approach while maintaining the flexibility to control new attributes without retraining existing models.
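The abstract builds on prefix tuning, in which small trainable "prefix" vectors are prepended to a frozen model's attention keys and values to steer generation toward an attribute. The following is a minimal, generic pure-Python sketch of that underlying idea only; it is not the paper's FPT method, and all vectors and values in it are hypothetical.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    # scaled dot-product attention for a single query vector
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        for i, vi in enumerate(v):
            out[i] += w * vi
    return out

# Frozen-model keys/values for a 2-token input (hypothetical numbers).
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0]]
query = [1.0, 0.0]

# A trainable prefix key/value pair; in real prefix tuning these are the
# only parameters optimized, while the model stays frozen. Fixed here.
prefix_key = [5.0, 0.0]
prefix_value = [0.0, 9.0]

plain = attention(query, keys, values)
prefixed = attention(query, [prefix_key] + keys, [prefix_value] + values)
print(plain)     # attention output over the original tokens only
print(prefixed)  # the prefix attracts attention and shifts the output
```

Prepending the prefix changes what the query attends to, and hence the output, without modifying any frozen weight; the paper's contribution concerns training such prefixes so control focuses on the desired attribute rather than unannotated ones.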
Published in
-
- 自然言語処理 (Journal of Natural Language Processing), 31 (1), 250-265, 2024
- The Association for Natural Language Processing (一般社団法人 言語処理学会)
Details
-
- CRID
- 1390862422951794560
-
- ISSN
- 21858314
- 13407619
-
- Text language code
- en
-
- Data source type
-
- JaLC
- Crossref
- OpenAIRE
-
- Abstract license flag
- Not available