Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations

Lujain Ibrahim, Mohammad M. Ghassemi, Tuka Alhanai

Research output: Contribution to journal › Conference article › peer-review

Abstract

The increased use of AI algorithmic aids in high-stakes decision-making has prompted interest in explainable AI (xAI) and in the role of counterfactual explanations in increasing trust in human-algorithm collaborations and mitigating unfair outcomes. However, research on how explainable AI improves human decision-making remains limited. We conduct an online experiment with 559 participants, utilizing an "algorithm-in-the-loop" framework and real-world pretrial data, to investigate how explanations of algorithmic pretrial risk assessments generated by state-of-the-art machine learning explanation methods (counterfactual explanations via DiCE and factual explanations via SHAP) influence the quality of decision-makers' assessments of recidivism. Our results show that counterfactual and factual explanations achieve different desirable goals (they separately improve human assessment of model accuracy, fairness, and calibration), yet still fall short of improving the combined accuracy, fairness, and reliability of human predictions, reaffirming the need for sociotechnical, empirical evaluations in xAI. We conclude with user feedback on DiCE counterfactual explanations, as well as a discussion of the broader implications of our results for AI-assisted decision-making and xAI.
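To make the two explanation pipelines named in the abstract concrete, the following is a minimal sketch of how factual explanations via SHAP and counterfactual explanations via DiCE are typically generated with the shap and dice-ml libraries. It is not the authors' code: the synthetic features (age, priors_count, charge_degree), the label construction, and the random-forest classifier are illustrative assumptions standing in for the study's real pretrial data and model.

import numpy as np
import pandas as pd
import shap
import dice_ml
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative stand-in for tabular pretrial risk-assessment features.
df = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "priors_count": rng.integers(0, 15, 500),
    "charge_degree": rng.integers(0, 2, 500),  # 0 = misdemeanor, 1 = felony
})
# Noisy synthetic label loosely tied to prior offenses (assumption, not real data).
df["recidivism"] = ((df["priors_count"] + rng.integers(0, 6, 500)) > 7).astype(int)

X, y = df.drop(columns="recidivism"), df["recidivism"]
clf = RandomForestClassifier(random_state=0).fit(X, y)

query = X.iloc[[0]]  # the single defendant profile being explained

# Factual explanation: SHAP attributes the model's prediction for this
# profile to individual input features.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(query)  # per-class feature contributions
print("SHAP values:", shap_values)

# Counterfactual explanation: DiCE searches for small feature changes that
# would flip the model's prediction to the opposite class.
data = dice_ml.Data(
    dataframe=df,
    continuous_features=["age", "priors_count", "charge_degree"],
    outcome_name="recidivism",
)
model = dice_ml.Model(model=clf, backend="sklearn")
dice = dice_ml.Dice(data, model, method="random")
cfs = dice.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)

The show_only_changes=True rendering mirrors the contrastive framing counterfactual explanations provide: the viewer sees only which features would need to change for the risk prediction to flip, rather than a full attribution over all features as with SHAP.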

Original language: English (US)
Pages (from-to): 326-334
Number of pages: 9
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 2023-May
State: Published - 2023
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023 - London, United Kingdom
Duration: May 29, 2023 - Jun 2, 2023

Keywords

  • counterfactuals
  • explanations
  • fairness
  • risk assessments
  • sociotechnical systems
  • trust
  • user studies

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
