Learning Concept-based Explainable Model that Guarantees the False Discovery Rate

  • XU Kaiwen
    University of Tsukuba / RIKEN Center for Advanced Intelligence Project
  • FUKUCHI Kazuto
    University of Tsukuba / RIKEN Center for Advanced Intelligence Project
  • AKIMOTO Youhei
    University of Tsukuba / RIKEN Center for Advanced Intelligence Project
  • SAKUMA Jun
    University of Tsukuba / RIKEN Center for Advanced Intelligence Project

Bibliographic Information

Other Title
  • 偽発見率を保証したコンセプトによる説明可能モデルの学習 (Learning a concept-based explainable model that guarantees the false discovery rate)

Abstract

Explaining a deep learning model through concepts is a common approach to interpretability. However, there is no guarantee that every concept is actually important for the prediction. In this study, we propose a method that selects the concepts important for prediction while controlling the false discovery rate (FDR) at a given level. Our method represents concepts by the latent variables learned by a variational autoencoder (VAE) and applies a variable selection tool called Knockoffs to identify the statistically significant concepts. In experiments on multiple datasets, we show that the concepts selected by the proposed method are interpretable, and that high accuracy can be achieved even when predictions are made from the selected concepts alone.
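The abstract does not spell out the selection machinery, so the following is a minimal sketch of how latent concept dimensions from a trained VAE encoder could be screened with knockoffs at a target FDR level q. The Gaussian equicorrelated model-X knockoff construction, the L1-penalized logistic feature statistic, the assumption of binary labels, and the function names (gaussian_knockoffs, knockoff_select) are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: FDR-controlled selection of VAE concept dimensions via model-X knockoffs.
# Assumes Gaussian equicorrelated knockoffs and a lasso coefficient-difference
# statistic; the paper may use a different construction or statistic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaussian_knockoffs(Z, rng):
    """Sample equicorrelated Gaussian model-X knockoffs for latent codes Z (n x p)."""
    n, p = Z.shape
    mu = Z.mean(axis=0)
    Sigma = np.cov(Z, rowvar=False) + 1e-6 * np.eye(p)
    lam_min = np.linalg.eigvalsh(Sigma).min()
    # Equicorrelated choice s_j = min(2 * lambda_min, 1) for a correlation matrix.
    S = np.diag(np.full(p, min(2.0 * lam_min, 1.0)))
    Sigma_inv = np.linalg.inv(Sigma)
    # Conditional Gaussian of the knockoff given Z (standard knockoff formulas).
    mean = Z - (Z - mu) @ Sigma_inv @ S
    cov = 2.0 * S - S @ Sigma_inv @ S
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(p))
    return mean + rng.standard_normal((n, p)) @ L.T

def knockoff_select(Z, y, q=0.1, seed=0):
    """Return indices of concept dimensions selected at target FDR level q."""
    rng = np.random.default_rng(seed)
    # Standardize so the covariance of Z is (approximately) a correlation matrix.
    Z = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)
    Zk = gaussian_knockoffs(Z, rng)
    # Feature statistic W_j = |beta_j| - |beta_j_knockoff| from an L1-penalized
    # logistic model fit on the augmented design [Z, Z_knockoff] (binary y).
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(np.hstack([Z, Zk]), y)
    b = np.abs(clf.coef_.ravel())
    p = Z.shape[1]
    W = b[:p] - b[p:]
    # Knockoff+ threshold: smallest t whose estimated FDP is at most q.
    for t in np.sort(np.abs(W[W != 0])):
        fdp = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp <= q:
            return np.where(W >= t)[0]
    return np.array([], dtype=int)  # no threshold achieves the target FDR
```

Here Z would be the matrix of VAE latent codes for the training samples and y the binary labels; the returned indices are the concept dimensions declared significant, with the knockoff+ threshold providing the FDR guarantee the title refers to.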
