TY - JOUR
T1 - Survival After Radical Cystectomy for Bladder Cancer
T2 - Development of a Fair Machine Learning Model
AU - Carbunaru, Samuel
AU - Neshatvar, Yassamin
AU - Do, Hyungrok
AU - Murray, Katie
AU - Ranganath, Rajesh
AU - Nayan, Madhur
N1 - Publisher Copyright:
© Samuel Carbunaru, Yassamin Neshatvar, Hyungrok Do, Katie Murray, Rajesh Ranganath, Madhur Nayan.
PY - 2024
Y1 - 2024
N2 - Background: Prediction models based on machine learning (ML) methods are being increasingly developed and adopted in health care. However, these models may be prone to bias and considered unfair if they demonstrate variable performance in population subgroups. An unfair model is of particular concern in bladder cancer, where disparities have been identified in sex and racial subgroups. Objective: This study aims (1) to develop an ML model to predict survival after radical cystectomy for bladder cancer and evaluate it for potential model bias in sex and racial subgroups; and (2) to compare algorithmic unfairness mitigation techniques to improve model fairness. Methods: We trained and compared various ML classification algorithms to predict 5-year survival after radical cystectomy using the National Cancer Database. The primary model performance metric was the F1-score. The primary metric for model fairness was the equalized odds ratio (eOR). We compared 3 unfairness mitigation techniques to improve the eOR. Results: We identified 16,481 patients, of whom 23.1% (n=3800) were female; 91.5% (n=15,080) were “White,” 5% (n=832) were “Black,” 2.3% (n=373) were “Hispanic,” and 1.2% (n=196) were “Asian.” The 5-year mortality rate was 75% (n=12,290). The best naive model was extreme gradient boosting (XGBoost), which had an F1-score of 0.860 and an eOR of 0.619. All unfairness mitigation techniques increased the eOR, with the correlation remover showing the greatest improvement and resulting in a final eOR of 0.750. This mitigated model had F1-scores of 0.860, 0.904, and 0.824 in the full, Black male, and Asian female test sets, respectively. Conclusions: The ML model predicting survival after radical cystectomy exhibited bias across sex and racial subgroups. By applying unfairness mitigation techniques, we improved algorithmic fairness as measured by the eOR. Our study highlights the importance of not only evaluating for model bias but also actively mitigating such disparities to ensure equitable health care delivery. We also deployed the first web-based fair ML model for predicting survival after radical cystectomy.
KW - algorithmic fairness
KW - bias
KW - bladder cancer
KW - fairness
KW - health equity
KW - healthcare disparities
KW - machine learning
KW - model
KW - mortality rate
KW - prediction
KW - radical cystectomy
KW - survival
UR - http://www.scopus.com/inward/record.url?scp=85214354202&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214354202&partnerID=8YFLogxK
U2 - 10.2196/63289
DO - 10.2196/63289
M3 - Article
AN - SCOPUS:85214354202
SN - 2291-9694
VL - 12
JO - JMIR Medical Informatics
JF - JMIR Medical Informatics
M1 - e63289
ER -