Focused Prefix Tuning for Controllable Text Generation

Description

In a controllable text generation dataset, unannotated attributes may provide irrelevant learning signals to models trained on them, degrading their performance. We propose focused prefix tuning (FPT) to mitigate this problem and to focus control on the desired attribute. Experimental results show that FPT achieves better control accuracy and text fluency than baseline models in single-attribute control tasks. In multi-attribute control tasks, FPT achieves control accuracy comparable to that of the state-of-the-art approach while retaining the flexibility to control new attributes without retraining existing models.
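
FPT builds on prefix tuning, in which a short sequence of trainable continuous vectors is prepended to the attention keys and values of a frozen language model at every layer, and only those vectors are updated during training. As a point of reference, below is a minimal sketch of plain prefix tuning, not the focused variant proposed in the paper; the GPT-2 backbone, prefix length, initialization, and the forward_with_prefix helper are illustrative assumptions rather than the paper's setup.

    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Frozen backbone: only the prefix parameters below receive gradients.
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False

    cfg = model.config
    prefix_len = 10  # assumed prefix length, not the paper's setting
    head_dim = cfg.n_embd // cfg.n_head

    # One trainable (key, value) pair per layer, fed to the model as past_key_values.
    prefix = nn.Parameter(
        0.02 * torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim)
    )

    def forward_with_prefix(input_ids, labels=None):
        # Expand the shared prefix across the batch and prepend it via the KV cache.
        batch = input_ids.size(0)
        past = tuple(
            (layer[0].unsqueeze(0).expand(batch, -1, -1, -1),
             layer[1].unsqueeze(0).expand(batch, -1, -1, -1))
            for layer in prefix
        )
        # The attention mask must also cover the prefix positions.
        attn_mask = torch.ones(batch, prefix_len + input_ids.size(1), dtype=torch.long)
        return model(input_ids, past_key_values=past,
                     attention_mask=attn_mask, labels=labels)

    # One illustrative training step: only the prefix is optimized.
    optimizer = torch.optim.Adam([prefix], lr=1e-4)
    inputs = tokenizer("The movie was", return_tensors="pt")
    out = forward_with_prefix(inputs["input_ids"], labels=inputs["input_ids"])
    out.loss.backward()
    optimizer.step()

The focused variant changes how such prefixes are trained so that unannotated attributes do not contribute irrelevant learning signals; the paper describes the exact mechanism.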

Published in

  • 自然言語処理 (Journal of Natural Language Processing)

    自然言語処理 31 (1), 250-265, 2024

    The Association for Natural Language Processing
