TY - GEN
T1 - Causal Multi-level Fairness
AU - Mhasawade, Vishwali
AU - Chunara, Rumi
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/7/21
Y1 - 2021/7/21
N2 - Algorithmic systems are known to impact marginalized groups severely, and more so if all sources of bias are not considered. While work in algorithmic fairness to date has primarily focused on addressing discrimination due to individually linked attributes, social science research elucidates how some properties we link to individuals can be conceptualized as having causes at macro (e.g., structural) levels, and it may be important to be fair to attributes at multiple levels. For example, instead of simply considering race as a causal, protected attribute of an individual, the cause may be distilled as the perceived racial discrimination an individual experiences, which in turn can be affected by neighborhood-level factors. This multi-level conceptualization is relevant to questions of fairness, as it may be important to take into account not only whether the individual belonged to another demographic group, but also whether the individual received advantaged treatment at the macro level. In this paper, we formalize the problem of multi-level fairness using tools from causal inference in a manner that allows one to assess and account for the effects of sensitive attributes at multiple levels. We show the importance of the problem by illustrating the residual unfairness that remains if macro-level sensitive attributes are not accounted for, or are included without accounting for their multi-level nature. Further, in the context of a real-world task of predicting income based on macro- and individual-level attributes, we demonstrate an approach for mitigating unfairness resulting from multi-level sensitive attributes.
AB - Algorithmic systems are known to impact marginalized groups severely, and more so if all sources of bias are not considered. While work in algorithmic fairness to date has primarily focused on addressing discrimination due to individually linked attributes, social science research elucidates how some properties we link to individuals can be conceptualized as having causes at macro (e.g., structural) levels, and it may be important to be fair to attributes at multiple levels. For example, instead of simply considering race as a causal, protected attribute of an individual, the cause may be distilled as the perceived racial discrimination an individual experiences, which in turn can be affected by neighborhood-level factors. This multi-level conceptualization is relevant to questions of fairness, as it may be important to take into account not only whether the individual belonged to another demographic group, but also whether the individual received advantaged treatment at the macro level. In this paper, we formalize the problem of multi-level fairness using tools from causal inference in a manner that allows one to assess and account for the effects of sensitive attributes at multiple levels. We show the importance of the problem by illustrating the residual unfairness that remains if macro-level sensitive attributes are not accounted for, or are included without accounting for their multi-level nature. Further, in the context of a real-world task of predicting income based on macro- and individual-level attributes, we demonstrate an approach for mitigating unfairness resulting from multi-level sensitive attributes.
KW - fairness
KW - racial justice
KW - social sciences
UR - http://www.scopus.com/inward/record.url?scp=85112470675&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112470675&partnerID=8YFLogxK
U2 - 10.1145/3461702.3462587
DO - 10.1145/3461702.3462587
M3 - Conference contribution
AN - SCOPUS:85112470675
T3 - AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
SP - 784
EP - 794
BT - AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
T2 - 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021
Y2 - 19 May 2021 through 21 May 2021
ER -