TY - GEN
T1 - Bayesian sensitivity analysis for offline policy evaluation
AU - Jung, Jongbin
AU - Shroff, Ravi
AU - Feller, Avi
AU - Goel, Sharad
N1 - Publisher Copyright:
© 2020 Copyright held by the owner/author(s).
PY - 2020/2/7
Y1 - 2020/2/7
AB - On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, as one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. A common strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. In particular, and in contrast to past work, our modeling framework easily enables confounders to vary with the observed covariates. We demonstrate the efficacy of our method on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.
KW - Offline policy evaluation
KW - Pretrial risk assessment
KW - Sensitivity to unmeasured confounding
UR - http://www.scopus.com/inward/record.url?scp=85082167136&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082167136&partnerID=8YFLogxK
DO - 10.1145/3375627.3375822
M3 - Conference contribution
T3 - AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
SP - 64
EP - 70
BT - AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
T2 - 3rd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2020, co-located with AAAI 2020
Y2 - 7 February 2020 through 8 February 2020
ER -
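
Below the record, a minimal illustrative sketch of the idea the abstract describes: an offline policy estimate made under ignorability can be stress-tested by positing unmeasured confounding and recomputing the predicted policy value. This is not the authors' implementation (their method is Bayesian, with a modeling framework that lets confounding vary with observed covariates); the sketch uses a simpler deterministic sweep over a hypothetical sensitivity parameter gamma, and all variable names are invented for illustration.

# Hypothetical sketch: sensitivity of a naive offline policy estimate
# to unmeasured confounding (not the paper's Bayesian method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Observed covariate x; unmeasured confounder u drives both the
# historical release decision and the outcome (appearing in court).
x = rng.normal(size=n)
u = rng.normal(size=n)
released = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x + u))))
appeared = rng.binomial(1, 1 / (1 + np.exp(-(x + u))))

# Ignorability-style outcome model: fit on x alone, using only the
# defendants who were actually released (the observed actions).
model = LogisticRegression().fit(x[released == 1].reshape(-1, 1),
                                 appeared[released == 1])
p_hat = model.predict_proba(x.reshape(-1, 1))[:, 1]

# Candidate policy: release everyone. The naive value extrapolates the
# released-only model to the whole population; confounding through u
# makes this estimate optimistic.
print(f"naive policy value: {p_hat.mean():.3f}")

# Sensitivity sweep: suppose the true appearance log-odds for those the
# judges detained differ from the model by up to +/- gamma. Each gamma
# yields a range of plausible policy values.
detained = released == 0
logit = np.log(p_hat / (1 - p_hat))
for gamma in (0.0, 0.5, 1.0):
    lo, hi = logit.copy(), logit.copy()
    lo[detained] -= gamma
    hi[detained] += gamma
    v_lo = (1 / (1 + np.exp(-lo))).mean()
    v_hi = (1 / (1 + np.exp(-hi))).mean()
    print(f"gamma={gamma:.1f}: value in [{v_lo:.3f}, {v_hi:.3f}]")

A gamma of 0 recovers the ignorability estimate; widening gamma shows how quickly the predicted and actual effects of a policy can diverge, which is the failure mode the abstract warns about.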