Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features

Hadi Elzayn, Emily Black, Patrick Vossler, Nathanael Jo, Jacob Goldin, Daniel E. Ho

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    The vast majority of techniques to train fair models require access to the protected attribute (e.g., race, gender), either at train time or in production. However, in many practically important applications, this protected attribute is largely unavailable. Still, AI systems used in sensitive business and government applications - such as housing, ad delivery, and credit underwriting - are increasingly required by law to measure and mitigate their bias. In this paper, we develop methods for measuring and reducing fairness violations in a setting with limited access to protected attribute labels. Specifically, we assume access to protected attribute labels on a small subset of the dataset of interest, but only probabilistic estimates of protected attribute labels (e.g., via Bayesian Improved Surname Geocoding) for the rest of the dataset. With this setting in mind, we propose a method to estimate bounds on common fairness metrics for an existing model, as well as a method for training a model to limit fairness violations by solving a constrained non-convex optimization problem. Unlike existing approaches, our methods take advantage of contextual information - specifically the relationships between a model's predictions and the probabilistic prediction of protected attributes, given the true protected attribute, and vice versa - to provide tighter bounds on the true disparity. We provide an empirical illustration of our methods using voting data as well as the COMPAS dataset. First, we show that our measurement method can bound the true disparity up to 5.5x tighter than previous methods in these applications. Then, we demonstrate that our training technique effectively reduces disparity in comparison to an unconstrained model while often incurring less severe fairness-accuracy trade-offs than other fair optimization methods with limited access to protected attributes.
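
    To make the setting concrete, below is a minimal, hypothetical Python sketch (not the paper's method) of the measurement problem the abstract describes: demographic parity disparity when only probabilistic protected-attribute estimates (e.g., BISG-style posteriors) are available for most records, alongside true labels on a small audit subset. The "plug-in" estimator shown is a common naive baseline; its potential bias when the posteriors correlate with the model's predictions is exactly the gap that bounding methods like the paper's aim to address. All names and data here are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic data (illustrative only):
        # p_A  = probabilistic estimate P(A = 1 | surname, geography), e.g., from BISG
        # A    = true protected attribute (mostly unobserved in practice)
        # y_hat = an existing model's binary predictions, correlated with A
        n = 10_000
        p_A = rng.beta(2, 5, size=n)
        A = rng.binomial(1, p_A)
        y_hat = rng.binomial(1, 0.3 + 0.2 * A)

        def true_demographic_parity_gap(y_hat, A):
            """Ground-truth gap |P(Yhat=1 | A=1) - P(Yhat=1 | A=0)|; needs true labels."""
            return abs(y_hat[A == 1].mean() - y_hat[A == 0].mean())

        def weighted_plugin_gap(y_hat, p_A):
            """Naive plug-in baseline: weight each record by its posterior probability
            of group membership. Can be biased when p_A is miscalibrated or correlated
            with y_hat, which motivates bounding approaches instead."""
            rate_1 = np.average(y_hat, weights=p_A)
            rate_0 = np.average(y_hat, weights=1 - p_A)
            return abs(rate_1 - rate_0)

        print(f"true gap:    {true_demographic_parity_gap(y_hat, A):.3f}")
        print(f"plug-in gap: {weighted_plugin_gap(y_hat, p_A):.3f}")

    In the paper's setting, the small labeled subset supplies the contextual information (relationships between predictions and the probabilistic estimates given the true attribute) used to tighten bounds on the true gap beyond what a plug-in estimate alone can guarantee.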

    Original language: English (US)
    Title of host publication: Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 161-193
    Number of pages: 33
    ISBN (Electronic): 9798350349504
    DOIs
    State: Published - 2024
    Event: 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024 - Toronto, Canada
    Duration: Apr 9 2024 - Apr 11 2024

    Publication series

    Name: Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024

    Conference

    Conference: 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
    Country/Territory: Canada
    City: Toronto
    Period: 4/9/24 - 4/11/24

    Keywords

    • algorithmic fairness
    • anti-discrimination
    • disparity reduction
    • fair machine learning
    • probabilistic protected attribute

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Safety, Risk, Reliability and Quality
    • Modeling and Simulation
