Uncertainty and Explanation-based Human Debugging of Text Classification Model

Bibliographic Information

Other Title
  • Proposal of an Uncertainty-Based Human Debugging Method for Text Classification Models (文章分類モデルの不確実性に基づく人間によるデバッグ手法の提案)

Abstract

AI democratization is advancing quickly thanks to the availability of pre-trained NLP models. As a consequence, data scientists as well as subject matter experts (SMEs) are turning to data-driven AI products to solve their problems. Using these products requires understanding the NLP model and continuously improving its accuracy, skills that typically only data scientists have; yet data scientists are not always available. Establishing a workflow that lets SMEs improve model accuracy on their own is therefore essential. We focus on debugging NLP models through human feedback, an approach studied in Explainable AI: humans give feedback to the system based on the model's explanations. The feedback can take various forms, such as grouping similar samples or correcting invalid explanations. In our case, we aim to improve accuracy through domain-knowledge-aware data augmentation. In this study, we propose an efficient way to reduce the cost of manual data augmentation by exploiting model uncertainty. We experimented on text classification tasks and verified that human feedback effectively improves model accuracy, and that incorporating uncertainty both speeds up augmentation and improves data quality.
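
The core idea described above, ranking samples by predictive uncertainty so that SMEs spend their manual, domain-knowledge-aware augmentation effort only on the most informative cases, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the entropy-based uncertainty measure, the `top_k` selection, and the example data are all assumed for the sketch.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each row of class probabilities (higher = more uncertain)."""
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(probs * np.log(probs)).sum(axis=1)

def select_uncertain_samples(texts, probs, top_k=50):
    """Return the top_k texts the classifier is least sure about,
    as candidates for SME-driven data augmentation (assumed selection rule)."""
    entropy = predictive_entropy(np.asarray(probs, dtype=float))
    order = np.argsort(-entropy)[:top_k]  # most uncertain first
    return [(texts[i], float(entropy[i])) for i in order]

# probs would come from any text classifier's probability output (e.g. softmax scores).
texts = ["refund my order", "love this product", "is this covered by warranty?"]
probs = [[0.55, 0.45], [0.98, 0.02], [0.50, 0.50]]
for text, h in select_uncertain_samples(texts, probs, top_k=2):
    print(f"{h:.3f}  {text}")  # SMEs would write paraphrases or counterexamples for these
```

In this sketch, only the highest-entropy samples are surfaced to the human, which is one way to reduce the number of samples an SME must inspect and augment by hand.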
