TY - GEN
T1 - Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features
AU - Elzayn, Hadi
AU - Black, Emily
AU - Vossler, Patrick
AU - Jo, Nathanael
AU - Goldin, Jacob
AU - Ho, Daniel E.
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - The vast majority of techniques to train fair models require access to the protected attribute (e.g., race, gender), either at train time or in production. However, in many practically important applications, this protected attribute is largely unavailable. Still, AI systems used in sensitive business and government applications - such as housing, ad delivery, and credit underwriting - are increasingly required by law to measure and mitigate their bias. In this paper, we develop methods for measuring and reducing fairness violations in a setting with limited access to protected attribute labels. Specifically, we assume access to protected attribute labels on a small subset of the dataset of interest, but only probabilistic estimates of protected attribute labels (e.g., via Bayesian Improved Surname Geocoding) for the rest of the dataset. With this setting in mind, we propose a method to estimate bounds on common fairness metrics for an existing model, as well as a method for training a model to limit fairness violations by solving a constrained non-convex optimization problem. Unlike existing approaches, our methods take advantage of contextual information - specifically the relationships between a model's predictions and the probabilistic prediction of protected attributes, given the true protected attribute, and vice versa - to provide tighter bounds on the true disparity. We provide an empirical illustration of our methods using voting data as well as the COMPAS dataset. First, we show that our measurement method can bound the true disparity up to 5.5x tighter than previous methods in these applications. Then, we demonstrate that our training technique effectively reduces disparity in comparison to an unconstrained model while often incurring less severe fairness-accuracy trade-offs than other fair optimization methods with limited access to protected attributes.
KW - algorithmic fairness
KW - anti-discrimination
KW - disparity reduction
KW - fair machine learning
KW - probabilistic protected attribute
UR - http://www.scopus.com/inward/record.url?scp=85193782404&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85193782404&partnerID=8YFLogxK
U2 - 10.1109/SaTML59370.2024.00016
DO - 10.1109/SaTML59370.2024.00016
M3 - Conference contribution
AN - SCOPUS:85193782404
T3 - Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
SP - 161
EP - 193
BT - Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
Y2 - 9 April 2024 through 11 April 2024
ER -