TY - GEN
T1 - Investigating Gender Bias in STEM Job Advertisements
AU - Dikshit, Malika
AU - Bouamor, Houda
AU - Habash, Nizar
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
AB - Gender inequality has been historically prevalent in academia, especially within the fields of Science, Technology, Engineering, and Mathematics (STEM). In this study, we examine gender bias in academic job descriptions in the STEM fields. We go a step further than previous studies that merely identify individual words as masculine-coded or feminine-coded, and delve into the contextual language used in academic job advertisements. We design a novel approach to detect gender bias in job descriptions using Natural Language Processing (NLP) techniques. Going beyond binary masculine-feminine stereotypes, we propose three broad categories for understanding gender bias in the language of job descriptions, namely agentic, balanced, and communal. We cluster similar information in job descriptions into these three groups using contrastive learning and various clustering techniques. This research contributes to the field of gender bias detection by providing a novel approach and methodology for categorizing gender bias in job descriptions, which can support more effective and targeted job advertisements that are equally appealing across all genders.
UR - http://www.scopus.com/inward/record.url?scp=85204368231&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85204368231&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85204368231
T3 - GeBNLP 2024 - 5th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop
SP - 179
EP - 189
BT - GeBNLP 2024 - 5th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop
A2 - Falenska, Agnieszka
A2 - Basta, Christine
A2 - Costa-jussà, Marta
A2 - Goldfarb-Tarrant, Seraphina
A2 - Nozza, Debora
PB - Association for Computational Linguistics (ACL)
T2 - 5th Workshop on Gender Bias in Natural Language Processing, GeBNLP 2024, held in conjunction with the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Y2 - 16 August 2024
ER -