# Bias-Conducive Factors

## In brief

Bias-conducive factors are characteristics of individuals and institutions that lead to biases in data-driven models by influencing data and technology development. This entry presents a selection of bias-conducive factors in algorithmic hiring.

## More in detail

We present a selection of bias-conducive factors, distinguishing between (1) overarching factors, (2) institutional biases, (3) individual preferences, and (4) technological blindspots.

### Overarching factors

Overarching factors are fundamental biases that interact with all other factors and lead to worse algorithmic outcomes for disadvantaged groups.

Stereotypes. Stereotypes are commonly held beliefs about groups based on shared characteristics, shaped by culture, socialization, and experience, which affect the perceived suitability of group members for certain roles. Acquired early and often activated unconsciously [1, 2], stereotypes persistently influence individuals’ lives, both descriptively and prescriptively [3]. Consequently, they shape expectations about one’s own and others’ qualities, priorities, and needs, particularly in the context of work [4, 5]. For instance, men are typically associated with agency (leadership and goal achievement), while women are associated with communion (warmth and caregiving), with widespread effects on gender roles and expectations in employment [6, 7].

Proxies. Sensitive attributes are normally not used directly as input features to algorithms. Nevertheless, sensitive information remains available to data-driven methods through variables that are strongly correlated with sensitive attributes, referred to as proxies. In algorithmic hiring, for example, video interviews and resumes encode information on gender and race [8, 9].
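
A quick way to check for proxy leakage is to train a probe that tries to recover the sensitive attribute from the features a hiring model actually sees. The sketch below is illustrative only: the file name, column names, and the choice of a logistic-regression probe are assumptions, and the features are assumed to be numeric with a binary-coded sensitive attribute.

```python
# Minimal proxy-leakage probe: if ostensibly neutral features predict a
# sensitive attribute well above chance, they act as proxies for it.
# File and column names are hypothetical; features are assumed numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")            # hypothetical resume dataset
X = df.drop(columns=["gender", "hired"])      # features the hiring model sees
y = df["gender"]                              # sensitive attribute, held out

# Cross-validated AUC of the probe: ~0.5 means little leakage,
# values near 1.0 mean the features encode the attribute almost perfectly.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"proxy leakage (AUC): {auc:.2f}")
```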

### Institutional biases

Institutional biases stem from practices, habits, and norms of institutions.

Horizontal segregation. Horizontal segregation denotes disparities in employment rates across industries along lines of gender and race [11, 12] (see also the entry Segregation). Because past experience is a key determinant of perceived job suitability [10], this segregation shapes hiring decisions and future workforce dynamics. Gender imbalances, rooted in enduring stereotypes about traits like agency and communion, are pronounced across various regions, and field experiments consistently reveal discrimination against individuals applying in industries dominated by the opposite gender [13, 14].
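
Horizontal segregation is often quantified with the Duncan and Duncan index of dissimilarity, i.e., the share of either group that would need to change industries for the two groups' distributions to match. A minimal sketch, with invented counts purely to illustrate the computation:

```python
# Duncan & Duncan index of dissimilarity: 0.0 = identical distributions
# across industries, 1.0 = complete segregation. Counts are illustrative.
def dissimilarity_index(group_a: dict, group_b: dict) -> float:
    total_a, total_b = sum(group_a.values()), sum(group_b.values())
    industries = set(group_a) | set(group_b)
    return 0.5 * sum(
        abs(group_a.get(i, 0) / total_a - group_b.get(i, 0) / total_b)
        for i in industries
    )

men = {"construction": 900, "nursing": 100, "software": 700}
women = {"construction": 100, "nursing": 900, "software": 300}
print(f"D = {dissimilarity_index(men, women):.2f}")  # ~0.63 for this toy data
```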

Vertical segregation. Vertical segregation refers to disparities in career advancement toward leadership roles, traditionally examined through a gender lens [15, 16]. However, recent research extends this concept to non-binary individuals, racial minorities, and intersectional identities [17, 18]. These disparities, when encoded in data, perpetuate wage gaps [19] and the underrepresentation of diverse groups in high-ranking positions.
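
One simple way to surface vertical segregation in workforce data is a representation ratio per seniority level: a group's share at each rank divided by its overall workforce share. The dataset below is invented for illustration.

```python
# Representation ratio per rank: a group's share at each seniority level
# divided by its overall workforce share. Ratios below 1 at the top ranks
# are the data signature of a glass ceiling. Toy data, invented.
import pandas as pd

df = pd.DataFrame({
    "rank":  ["junior"] * 6 + ["manager"] * 3 + ["executive"] * 2,
    "group": ["W", "W", "W", "M", "M", "M", "W", "M", "M", "M", "M"],
})
overall_share = (df["group"] == "W").mean()
ratios = df.groupby("rank")["group"].apply(
    lambda g: (g == "W").mean() / overall_share
)
print(ratios)  # e.g., ratio 0.0 at the executive level in this toy data
```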

### Individual preferences

This section presents individual preferences that give rise to generalized patterns across protected groups. A caveat: categorizing biases in this way neither assigns individual responsibility nor justifies discrimination; rather, it underscores that apparently individual choices often stem from broader recurring patterns linked to protected attributes.

Job satisfaction. Job satisfaction plays a key role in job commitment [20, 21]. However, historically marginalized groups, including transgender, nonbinary, women, Black, and disabled workers, are more exposed to workplace discrimination and harassment [21, 22]. This surfaces in datasets as lower job tenure for these groups, which can bias algorithms designed to maximize tenure in order to cut hiring costs and retain talent.
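
The mechanism can be illustrated with a toy simulation: when discrimination shortens tenure for one group and the features include a proxy for group membership, a model trained to predict tenure screens that group in at a lower rate. All parameters and the simple linear setup below are hypothetical.

```python
# Toy simulation of the feedback loop described above. Discrimination
# lowers tenure for group 1; a tenure-predicting screener picks up on a
# proxy feature and selects group 1 less often. Parameters are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 1 = historically marginalized
skill = rng.normal(0, 1, n)              # skill, identically distributed
proxy = group + rng.normal(0, 0.5, n)    # feature correlated with group
# Harassment/discrimination shortens tenure for group 1, independent of skill.
tenure = 5 + skill - 1.5 * group + rng.normal(0, 1, n)

X = np.column_stack([skill, proxy])      # no explicit sensitive attribute
pred = LinearRegression().fit(X, tenure).predict(X)
cutoff = np.quantile(pred, 0.8)          # screen in the top 20%
for g in (0, 1):
    rate = (pred[group == g] >= cutoff).mean()
    print(f"group {g}: selection rate {rate:.1%}")
```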

Differences in salary negotiation. Differences in salary negotiation between men and women are well documented, both in the propensity to negotiate and in the approach taken [23, 24]. Explanations range from differences in risk aversion to differences in the perceived chances of success [24, 25]. Although an unsuccessful salary negotiation may appear to be an individual outcome for a female candidate, it also reflects an unfair status quo that shapes group-level expectations [24].

### Technological blindspots

Technological blindspots arise when biased components are uncritically integrated into larger algorithmic pipelines.

Accessibility challenges and ableist norms. Accessibility challenges and ableist norms can deter disabled individuals from applying for jobs and lead to biased evaluations against them [26, 27]. Asynchronous video interviews, for example, can disadvantage candidates with speech impairments, who may give shorter answers, and candidates with visual impairments, whose eye contact may be misread by algorithmic recruitment systems; both can be rated less favorably as a result [28].

Uneven performance. The uneven performance of language processing and computer vision tools regarding gender, race, and other sensitive attributes has been well-documented [29, 2]. Integrating these off-the-shelf algorithms into hiring processes can lead to worse performance for minority candidates due to biased feature extraction, negatively impacting other algorithms based on these features.

## Bibliography

[1] Jilana Jaxon, Ryan F Lei, Reut Shachnai, Eleanor K Chestnut, and Andrei Cimpian. The acquisition of gender stereotypes about intellectual ability: intersections with race. Journal of Social Issues, 75(4):1192–1215, 2019.

[2] Gordon B Moskowitz, Jeff Stone, and Amanda Childs. Implicit stereotyping and medical decisions: unconscious stereotype activation in practitioners' thoughts about African Americans. American J. of Public Health, 102(5):996–1001, 2012.

[3] Naomi Ellemers. Gender stereotypes. Annual Review of Psychology, 69(1):275–298, 2018.

[4] Donna Bobbitt-Zeher. Gender discrimination at work: connecting gender stereotypes, institutional policies, and gender composition of workplace. Gender & Society, 25(6):764–786, 2011.

[5] United Nations Development Programme. Breaking down gender biases: shifting social norms towards gender equality. 2023. URL: https://hdr.undp.org/system/files/documents/hdp-document/gsni202302pdf.pdf.

[6] Alice H Eagly, Christa Nater, David I Miller, Michèle Kaufmann, and Sabine Sczesny. Gender stereotypes have changed: a cross-temporal meta-analysis of US public opinion polls from 1946 to 2018. American Psychologist, 75(3):301–315, 2019.

[7] Madeline E Heilman. Gender stereotypes and workplace bias. Research in Organizational Behavior, 32:113–135, 2012.

[8] Ketki V. Deshpande, Shimei Pan, and James R. Foulds. Mitigating demographic bias in AI-based resume filtering. In UMAP (Adjunct Publication), 268–275. ACM, 2020.

[9] Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. Bias in bios: a case study of semantic representation bias in a high-stakes setting. In FAT, 120–128. ACM, 2019.

[10] Joseph Fuller, Manjari Raman, Eva Sage-Gavin, and Kristen Hines. Hidden workers: untapped talent. Technical Report, Harvard Business School, 2021.

[11] Lotte Bloksgaard. Masculinities, femininities and work–the horizontal gender segregation in the Danish labour market. Nordic J. of Working Life Studies, 1(2):5–21, 2011.

[12] Rebbeca Tesfai and Kevin JA Thomas. Dimensions of inequality: Black immigrants' occupational segregation in the United States. Sociology of Race and Ethnicity, 6(1):1–21, 2020.

[13] Peter A Riach and Judith Rich. Field experiments of discrimination in the market place. The Economic Journal, 112(483):F480–F518, 2002.

[14] Judith Rich. What do field experiments of discrimination in markets tell us? A meta-analysis of studies conducted since 2000. IZA Discussion Paper, 2014.

[15] David A Cotter, Joan M Hermsen, Seth Ovadia, and Reeve Vanneman. The glass ceiling effect. Social Forces, 80(2):655–681, 2001.

[16] European Institute for Gender Equality. Gender Equality Index 2020: digitalisation and the future of work. Publications Office of the European Union, 2020.

[17] Skylar Davidson. Gender inequality: nonbinary transgender people in the workplace. Cogent Social Sciences, 2(1):1236511, 2016.

[18] Deepak Hegde, Alexander Ljungqvist, and Manav Raj. Race, glass ceilings, and lower pay for equal work. Technical Report 21-09, Swedish House of Finance, 2022.

[19] Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, and Gido H. Schoenmacker. Closing the gender wage gap: adversarial fairness in job recommendation. In HR@RecSys, volume 3218 of CEUR Workshop Proceedings. CEUR-WS.org, 2022.

[20] Arthur G Bedeian, Gerald R Ferris, and K Michele Kacmar. Age, tenure, and job satisfaction: a tale of two perspectives. J. of Vocational Behavior, 40(1):33–48, 1992.

[21] Michael A Shields and Stephen Wheatley Price. Racial harassment, job satisfaction and intentions to quit: evidence from the British nursing profession. Economica, 69(274):295–326, 2002.

[22] Kimberly T Schneider, Suzanne Swan, and Louise F Fitzgerald. Job-related and psychological effects of sexual harassment in the workplace: empirical evidence from two organizations. J. of Applied Psychology, 82(3):401, 1997.

[23] Andreas Leibbrandt and John A List. Do women avoid salary negotiations? Evidence from a large-scale natural field experiment. Management Science, 61(9):2016–2024, 2015.

[24] Kelsey Gray, Angela Neville, Amy H Kaji, Mary Wolfe, Kristine Calhoun, Farin Amersi, Timothy Donahue, Tracy Arnell, Benjamin Jarman, Kenji Inaba, and others. Career goals, salary expectations, and salary negotiation among male and female general surgery residents. JAMA Surgery, 154(11):1023–1029, 2019.

[25] Iñigo Hernandez-Arenaz and Nagore Iriberri. A review of gender differences in negotiation. Oxford Research Encyclopedia of Economics and Finance, 2019.

[26] Frederike Scholz. Taken for granted: ableist norms embedded in the design of online recruitment practices. In The Palgrave Handbook of Disability at Work, pages 451–469, 2020.

[27] Nicholas Tilmes. Disability, fairness, and algorithmic bias in AI recruitment. Ethics Inf. Technol., 24(2):21, 2022.

[28] O'Neil Risk Consulting & Algorithmic Auditing (ORCAA). Description of algorithmic audit: pre-built assessments. Technical report, 2020.

[29] Su Lin Blodgett, Lisa Green, and Brendan T. O'Connor. Demographic dialectal variation in social media: a case study of African-American English. In EMNLP, 1119–1130. The Association for Computational Linguistics, 2016.

[30] Joy Buolamwini and Timnit Gebru. Gender shades: intersectional accuracy disparities in commercial gender classification. In FAT, volume 81 of Proceedings of Machine Learning Research, 77–91. PMLR, 2018.

This entry was readapted from Fabris et al. (2024), Section 3, by Alessandro Fabris.