A Study on Interactive, Contrastive Explanation in Explainable AI
- HAMAMOTO Koshi (Hitotsubashi University / RIKEN)
- KUZUYA Jun (RIKEN)
- ARAI Hiromi (RIKEN / Japan Science and Technology Agency)
Bibliographic Information
- Other Title
- 説明可能AIにおける対話型の対比的説明についての一検討 (A Study on Interactive, Contrastive Explanation in Explainable AI)
- From the Philosophical Perspectives of Explanation and Causation
Description
While the development of artificial intelligence (AI) has been remarkable, the black-box nature of the underlying machine learning, especially deep learning, has been an obstacle to its implementation in society with respect to trust and responsibility. To address these black-box problems, not only have technical efforts to implement transparency and accountability been made rapidly in the explainable AI community, but in recent years some research has also begun to address philosophical questions about the nature of explanation. One such study is Mittelstadt et al. (2019), which calls for explainable AI to be developed so as to provide interactive, contrastive explanations, based on Miller's (2019) analysis of the concept of explanation. In this paper, we first illustrate the need for explainable AI with the case of a pneumonia risk prediction system, then review Mittelstadt et al. (2019), and finally discuss the utility of the interactive, contrastive explanation it proposes.
Journal
- Proceedings of the Annual Conference of JSAI
- JSAI2021 (0), 2C4OS9b02-2C4OS9b02, 2021
- The Japanese Society for Artificial Intelligence
Keywords
Details
- CRID: 1390006895527371136
- NII Article ID: 130008051625
- ISSN: 2758-7347
- Text Lang: ja
- Data Source: JaLC, CiNii Articles
- Abstract License Flag: Disallowed