TY - JOUR
T1 - Towards Generalised and Incremental Bias Mitigation in Personality Computing
AU - Jiang, Jian
AU - Manoranjan, Viswonathan
AU - Salam, Hanan
AU - Celiktutan, Oya
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Building systems for predicting human socio-emotional states has promising applications; however, if trained on biased data, such systems could inadvertently yield biased decisions. Bias mitigation, which tackles the correction of a model's disparate performance across groups defined by particular sensitive attributes (e.g., gender, age, and race), remains an open problem. In this work, we design a novel fairness loss function named Multi-Group Parity (MGP) to provide a generalised approach for bias mitigation in personality computing. In contrast to existing works in the literature, MGP is generalised as it features four 'multiple' properties (4Mul): multiple tasks, multiple modalities, multiple sensitive attributes, and multi-valued attributes. Moreover, we explore how to incrementally mitigate biases when more sensitive attributes are taken into consideration sequentially. To address this problem, we introduce a novel algorithm that utilises an incremental learning framework to mitigate bias with respect to one sensitive attribute at a time without compromising past fairness. Extensive experiments on two large-scale multi-modal personality recognition datasets validate the effectiveness of our approach in achieving superior bias mitigation under the proposed four properties and incremental debiasing settings.
AB - Building systems for predicting human socio-emotional states has promising applications; however, if trained on biased data, such systems could inadvertently yield biased decisions. Bias mitigation, which tackles the correction of a model's disparate performance across groups defined by particular sensitive attributes (e.g., gender, age, and race), remains an open problem. In this work, we design a novel fairness loss function named Multi-Group Parity (MGP) to provide a generalised approach for bias mitigation in personality computing. In contrast to existing works in the literature, MGP is generalised as it features four 'multiple' properties (4Mul): multiple tasks, multiple modalities, multiple sensitive attributes, and multi-valued attributes. Moreover, we explore how to incrementally mitigate biases when more sensitive attributes are taken into consideration sequentially. To address this problem, we introduce a novel algorithm that utilises an incremental learning framework to mitigate bias with respect to one sensitive attribute at a time without compromising past fairness. Extensive experiments on two large-scale multi-modal personality recognition datasets validate the effectiveness of our approach in achieving superior bias mitigation under the proposed four properties and incremental debiasing settings.
KW - Bias mitigation
KW - incremental learning
KW - model fairness
KW - multimodal learning
KW - multitask learning
KW - personality computing
UR - http://www.scopus.com/inward/record.url?scp=85195373694&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85195373694&partnerID=8YFLogxK
U2 - 10.1109/TAFFC.2024.3409830
DO - 10.1109/TAFFC.2024.3409830
M3 - Article
AN - SCOPUS:85195373694
SN - 1949-3045
VL - 15
SP - 2192
EP - 2203
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 4
ER -