Algorithm Aversion or Automation Bias in AI Hiring:

Other Title
  • Algorithm Aversion Arguments Unsuited to AI:
  • Officer K or Lt. Col. Stanislav Petrov
  • Mechanical Screening in Hiring and Automation Bias

Abstract

Citing the Cold War era incident involving Lt. Col. Stanislav Petrov as a case exemplifying the importance of “meaningful human oversight,” the author emphasizes its importance for recent AI use in “high-risk” or “sensitive” domains in light of “automation bias,” by which human beings are inclined to over-trust AI’s forecasts, decisions, or recommendations despite the adoption of a “human-in-the-loop” system. He then introduces examples of countermeasures against automation bias in AI use, including those suggested in the Detailed Explanation on Key Points concerning AI Utilization Principles of the MIC (Ministry of Internal Affairs and Communications) in Japan, the EU AI Act in Europe, and the White House Blueprint for an AI Bill of Rights in the U.S. Thereafter, the author introduces human beings’ under-trust of AI, called “algorithm aversion,” along with the related allegations against the bias of preferring human decisions to AI forecasts. He then criticizes all of them based upon:

(i) logical reasons, such as the limits of AI’s ability, because of which AI should not be applied to decisions in complicated and context-dependent tasks, as opposed to simple, repetitive, and mechanical tasks; and

(ii) ethical reasons, such as human dignity, respect for human beings, and the necessity of taking individual personhood into consideration.

Details

  • CRID
    1390581070839149824
  • DOI
    10.24798/jicp.7.2_1
  • ISSN
    2432-9177
    2433-6254
  • Text Lang
    ja
  • Data Source
    • JaLC
  • Abstract License Flag
    Disallowed
