TY - JOUR
T1 - Identifying and mitigating algorithmic bias in the safety net
AU - Mackin, Shaina
AU - Major, Vincent J.
AU - Chunara, Rumi
AU - Newton-Dame, Remle
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
Y1 - 2025/12
AB - Algorithmic bias occurs when predictive model performance varies meaningfully across sociodemographic classes, exacerbating systemic healthcare disparities. NYC Health + Hospitals, an urban safety net system, assessed bias in two binary classification models in our electronic medical record: one predicting acute visits for asthma and one predicting unplanned readmissions. We evaluated differences in subgroup performance across race/ethnicity, sex, language, and insurance using equal opportunity difference (EOD), a metric comparing false negative rates. The most biased classes (race/ethnicity for asthma, insurance for readmission) were targeted for mitigation using threshold adjustment, which adjusts subgroup thresholds to minimize EOD, and reject option classification, which re-classifies scores near the threshold by subgroup. Successful mitigation was defined as 1) absolute subgroup EODs <5 percentage points, 2) accuracy reduction <10%, and 3) alert rate change <20%. Threshold adjustment met these criteria; reject option classification did not. We introduce a Supplementary Playbook outlining our approach for low-resource bias mitigation.
UR - http://www.scopus.com/inward/record.url?scp=105007471786&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105007471786&partnerID=8YFLogxK
DO - 10.1038/s41746-025-01732-w
M3 - Article
AN - SCOPUS:105007471786
SN - 2398-6352
VL - 8
JO - npj Digital Medicine
JF - npj Digital Medicine
IS - 1
M1 - 335
ER -
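For readers auditing their own models, the sketch below illustrates the two ideas the abstract describes: equal opportunity difference (EOD), the gap in false negative rates between each subgroup and a reference group, and a simplified form of threshold adjustment that searches for a per-subgroup threshold to close that gap. This is an illustrative approximation on synthetic data, not the authors' code or the paper's exact procedure (which also checks accuracy and alert-rate criteria); the function names and the data-generating step are hypothetical.

```python
# Illustrative sketch only: EOD per subgroup and a simple per-subgroup
# threshold search, loosely following the approach described in the abstract.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP), computed over true positives only."""
    positives = y_true == 1
    if positives.sum() == 0:
        return np.nan
    return np.mean(np.asarray(y_pred)[positives] == 0)

def equal_opportunity_difference(y_true, y_pred, groups, reference_group):
    """EOD per subgroup: FNR(subgroup) - FNR(reference), in percentage points."""
    ref_mask = groups == reference_group
    ref_fnr = false_negative_rate(y_true[ref_mask], y_pred[ref_mask])
    return {g: 100 * (false_negative_rate(y_true[groups == g], y_pred[groups == g]) - ref_fnr)
            for g in np.unique(groups)}

def adjust_subgroup_thresholds(y_true, scores, groups, reference_group,
                               base_threshold=0.5, grid=np.linspace(0.05, 0.95, 91)):
    """Simplified threshold adjustment: keep the reference group's threshold fixed
    and pick, for each other subgroup, the threshold whose FNR is closest to the
    reference group's FNR (i.e., that minimizes the absolute EOD)."""
    ref_mask = groups == reference_group
    ref_fnr = false_negative_rate(y_true[ref_mask], scores[ref_mask] >= base_threshold)
    thresholds = {reference_group: base_threshold}
    for g in np.unique(groups):
        if g == reference_group:
            continue
        mask = groups == g
        gaps = [abs(false_negative_rate(y_true[mask], scores[mask] >= t) - ref_fnr)
                for t in grid]
        thresholds[g] = float(grid[int(np.argmin(gaps))])
    return thresholds

# Tiny synthetic example: group "B" positives receive systematically lower scores,
# so the shared 0.5 threshold misses more of them (higher FNR, positive EOD).
rng = np.random.default_rng(0)
n = 2000
groups = rng.choice(["A", "B"], size=n)
y_true = rng.binomial(1, 0.2, size=n)
scores = np.clip(0.3 * y_true + rng.normal(0.35, 0.15, size=n)
                 - 0.1 * (groups == "B") * y_true, 0, 1)

baseline_pred = (scores >= 0.5).astype(int)
print("Baseline EOD vs group A:",
      equal_opportunity_difference(y_true, baseline_pred, groups, "A"))

thresholds = adjust_subgroup_thresholds(y_true, scores, groups, "A")
adjusted_pred = np.array([scores[i] >= thresholds[groups[i]] for i in range(n)]).astype(int)
print("Subgroup thresholds:", thresholds)
print("Post-adjustment EOD vs group A:",
      equal_opportunity_difference(y_true, adjusted_pred, groups, "A"))
```

In practice, a deployment like the one summarized above would also verify that the adjusted thresholds keep overall accuracy loss and alert-rate change within acceptable bounds (the abstract's <10% and <20% criteria) before accepting the mitigation.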