Grounds of Discrimination
In brief
International and national laws prohibit discrimination on some explicitly defined grounds, such as race, sex, and religion. These grounds can be considered in isolation or in interaction, the latter giving rise to multiple discrimination and intersectional discrimination.
More in Detail
The Universal Declaration of Human Rights prohibits discrimination on several grounds 1:
race,
skin colour,
sex,
language,
religion,
political or other opinion,
national or social origin,
property, or
birth [2],
although the list is not exhaustive.
By directly addressing these grounds, the Declaration highlights how problematic it is to base decisions or regulations on them, while leaving the door open to a more extensive view by also prohibiting discrimination on other grounds. In doing so, the Declaration implies that any difference in the treatment or exercise of the rights it encompasses would have legal implications. Grounds of discrimination should therefore not be considered a closed and fixed list, but an enumeration open to debate and reflection as circumstances and context require. For example, the African Charter on Human and Peoples’ Rights prohibits discrimination on grounds of fortune, rather than property [3], whereas the American Convention on Human Rights includes economic status [4] and the Charter of Fundamental Rights of the European Union (E.U.) adds association with a national minority [5].
In this regard, grounds of discrimination encompass three different kinds of motives on which decisions and policies should not be based (see also the entry Discrimination & Equity): (1) grounds innate to the individual, such as race, gender, age, or disability; (2) grounds intrinsic to individual freedom and autonomy, such as political belief or religion; and (3) grounds largely founded on stereotypes or stigma and usually irrelevant to social, economic, or political interactions, such as sexual orientation or ethnicity [6]. The use of any of these grounds is often perceived as a lack of impartiality influenced by negative and prejudiced reasons and emotions towards certain members of society. Prohibiting discrimination on these grounds aims to ensure that the distribution of social goods and services does not respond to subjective and irrational feelings, whether that turns out to be an advantage or a disadvantage for the individual or group concerned.
Sex refers to a person’s biological status, categorized as male, female, or intersex; gender refers to the attitudes and behaviors that a culture associates with a person’s sex, categorized as masculine, feminine, or transgender (a gender identity different from the sex assigned at birth, or non-binary); sexual orientation refers to the sex of those to whom one is sexually and romantically attracted, categorized as homosexual, heterosexual, and bisexual. See [7] for a psychological discussion of the differences between the terms, [8] for a discussion with reference to United States (U.S.) anti-discrimination law, and [9] for a comparative analysis of European anti-discrimination law. A country profile report on the legal rights of lesbian, gay, bisexual and transgender (LGBT) people is published yearly 2 by Human Rights Watch. Human-Computer Interaction research is also addressing the extent to which AI systems “express gender and sexuality explicitly and through relation of experience and emotions, mimicking the human language on which they are trained” [10].
Race is a social construct used to categorize people into groups. The term is controversial, with little consensus on its actual meaning. [11] summarizes biological and social concepts of race, and discusses the U.S. categorizations of race used for data collection, e.g., in census data.
Ethnicity refers to self-identifying groups based on beliefs concerning shared culture, ancestry, and history. The distinction between racial and ethnic grounds is, nonetheless, a provocative issue, primarily in Europe, where after the Second World War the notion of race became something of a taboo. As a consequence, the lack of words, academic work, and policies addressing racial (in)justice has also resulted in a downplaying of race as a ground of discrimination and in the indistinct use of ethnic origin in place of race, with misleading intentions [12].
Legislation and research studies have evolved with a different focus on vulnerable groups, sometimes restricting themselves to specific settings, including credit and insurance, the sale, rental, and financing of housing, personnel selection and wages, access to public accommodation, and education. For instance, discrimination against Afro-Americans is dealt with to a large extent by studies from the U.S., whilst discrimination against Roma people has mainly been considered by E.U. studies.
Although the aforementioned grounds of discrimination are typically considered separately, the interaction of multiple forms of discrimination has been receiving increasing attention [13, 14]. An elderly disabled woman, for example, could be discriminated against for being above a certain age, because she is a woman, because she is disabled, or for any combination of these. Multiple discrimination comes into play when a person is discriminated against on the basis of different characteristics at different times: each type of discrimination works independently, according to distinct experiences, and multiple discrimination refers to their cumulative impact. When different grounds operate at the same time, this is known as compound or intersectional discrimination. Compound discrimination (sometimes called additive multiple discrimination) occurs when each ground adds to discrimination on other grounds, for example migrant women experiencing both under-employment (as migrants compared to local residents) and lower pay (as female workers compared to male workers). Intersectional discrimination occurs when concurrent acts of discrimination result in a specific and distinct form of discrimination [15]. For example, [16] reports the case of stereotypes about Afro-American women which, when the grounds are taken in isolation, apply neither to women in general nor to Afro-Americans in general.
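To illustrate why intersecting grounds cannot always be analysed one at a time, the following minimal Python sketch uses hypothetical records in which positive-outcome rates look balanced for each single ground, yet not for the intersectional subgroup.

```python
# Minimal sketch (illustrative only) of an intersectional subgroup check.
# All records, group names, and outcomes below are hypothetical.

records = [
    # (gender, ethnicity, positive_outcome)
    ("woman", "majority", 1), ("woman", "majority", 1),
    ("woman", "minority", 0), ("woman", "minority", 0),
    ("man",   "majority", 1), ("man",   "majority", 0),
    ("man",   "minority", 1), ("man",   "minority", 1),
]

def rate(rows):
    """Fraction of rows with a positive outcome."""
    return sum(y for *_, y in rows) / len(rows)

women    = [r for r in records if r[0] == "woman"]
minority = [r for r in records if r[1] == "minority"]
both     = [r for r in records if r[0] == "woman" and r[1] == "minority"]

# Each single ground looks balanced (0.5), the intersection does not (0.0).
print(rate(women), rate(minority), rate(both))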
Grounds of discrimination are key inputs in the design of fair AI systems: fairness metrics, for instance, rely on comparing a model’s performance across protected and unprotected groups. We refer to the entries on fairness and fair ML for details.
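As an illustration of this kind of comparison, a minimal Python sketch is given below; the data, group labels, and the demographic-parity-style gap are purely hypothetical and not tied to any specific fairness library.

```python
# Minimal sketch (illustrative only): comparing a model's selection rate
# across groups defined by a protected attribute. All values are hypothetical.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Fraction of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y_hat in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(y_hat == 1)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical protected-attribute values and model outputs.
groups      = ["A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   0,   1,   0,   0,   0 ]

rates = selection_rates(groups, predictions)
# Demographic-parity-style gap: difference between highest and lowest rate.
print(rates, max(rates.values()) - min(rates.values()))
```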
Here, we concentrate on the problem of faithfully representing grounds of discrimination in data, by distinguishing the coding of human identity in raw data (datafication) and the representativeness of grounds of discrimination in data (representation bias).
For instance, if gender is coded with a binary feature (male/female), then any further discrimination analysis is limited to contrasting only those two groups, excluding non-binary people. There is thus a need for a more elaborate representation of human identity in raw data, e.g., using ontologies for concept reasoning [17]. Moreover, the categories used to encode grounds of discrimination may embed forms of structural discrimination, which are hidden when features are considered in isolation, but made apparent when connected with other features in a knowledge graph [18]. The issue of source criticism [19], which is central to historical and humanistic disciplines, is still in its infancy in the area of big data and AI. Source criticism concerns the provenance, authenticity, and completeness of the data collected, especially on social media platforms. For instance, the mechanisms of social software, such as the option given to users to identify their gender as binary, result in functional biases [20] in the data collected.
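The following minimal Python sketch illustrates the datafication issue with hypothetical records: a pipeline built around a binary gender coding silently drops everyone who does not fit the two coded categories, so those people never appear in any downstream analysis.

```python
# Minimal sketch (illustrative only): a binary gender encoding silently
# excludes records that do not fit the two coded categories.
# All records and codes below are hypothetical.

raw_records = [
    {"id": 1, "gender": "female"},
    {"id": 2, "gender": "male"},
    {"id": 3, "gender": "non-binary"},
    {"id": 4, "gender": "prefer not to say"},
]

BINARY_CODES = {"female": 0, "male": 1}

# A pipeline built around the binary coding keeps only records it can encode.
encoded = [(r["id"], BINARY_CODES[r["gender"]])
           for r in raw_records if r["gender"] in BINARY_CODES]
dropped = [r["id"] for r in raw_records if r["gender"] not in BINARY_CODES]

print(encoded)   # [(1, 0), (2, 1)]
print(dropped)   # [3, 4] -- excluded from any further discrimination analysis
```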
Beyond the complexity of datafication, the representativeness of grounds of discrimination in datasets [21] also affects discrimination and diversity analyses, and the fairness of AI models trained over those datasets (see also the entry on Bias).
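As an illustration of a basic representation-bias check, the sketch below compares the share of each group observed in a hypothetical dataset with an assumed reference share (e.g., from census figures); a ratio far below 1 signals under-representation. All numbers are invented for the example.

```python
# Minimal sketch (illustrative only) of a representation-bias check.
from collections import Counter

dataset_groups  = ["A"] * 70 + ["B"] * 25 + ["C"] * 5   # groups observed in the data
reference_share = {"A": 0.50, "B": 0.40, "C": 0.10}     # assumed population shares

counts = Counter(dataset_groups)
n = sum(counts.values())

for group, expected in reference_share.items():
    observed = counts.get(group, 0) / n
    # A ratio well below 1 signals that the group is under-represented.
    print(group, round(observed, 2), round(observed / expected, 2))
```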
Most of the grounds of discrimination fall into the category of sensitive personal data, whose collection and processing is prohibited under several privacy and data protection laws, unless certain exceptions apply. For example, the grounds of race, ethnic origin, sexual orientation, political stances, religious beliefs, and trade union membership are considered special categories of personal data under the European General Data Protection Regulation [1]. Likewise, the California Privacy Rights Act (CPRA) will include as sensitive attributes, among others, a consumer’s racial or ethnic origin and religious or philosophical beliefs [22], while the Virginia Consumer Data Protection Act (VCDPA) will add to those attributes the grounds of mental or physical health, sexual orientation, and citizenship or immigration status [23]. From a regulatory perspective, the restriction on the collection and processing of sensitive personal data is intended to minimize the possibility of algorithmic systems discriminating against people based on intrinsic or innate attributes of the individual. However, some criticism has arisen towards this perspective, as more voices defend the need to use sensitive attributes to ensure the non-discriminatory nature of algorithmic models [24]. The European Proposal for Regulating Artificial Intelligence (Artificial Intelligence Act) seems to reflect this position, as it will introduce an exception allowing, to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems, the processing of special categories of data [25].
Discrimination grounds in datasets can be the output of an inference. For instance, gender may be explicitly given (e.g., in a registration form) with consent to a specific usage (e.g., personalization), or it can be inferred using supervised learning [26]. A growing number of AI approaches can infer people’s personality traits [27], to be used, e.g., for personalization and recommendation purposes. To some extent, even in cases where the system is blinded to protected attributes, inferences can lead to discriminatory results as the system finds correlations directly related to grounds of discrimination. Inferences can, therefore, be quite problematic, because they can reinforce the historical disadvantage and inequalities suffered by certain members of society [28]. As current legal protections rest on restricting access to data revealing that an individual belongs to a protected group, or on prohibiting the use of such data to motivate a decision, the indirect access to and use of such information creates a threat to individuals’ rights [29]. For this reason, the correctness of such inferences can be crucial for the attributed grounds of discrimination and, consequently, for decisions and fairness analyses. Although inferences offer new possibilities for biased and invasive decision-making, the legal status of inferred personal data, both with respect to data protection and anti-discrimination laws, is quite debated [30].
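As a rough illustration of such inferences, the sketch below trains a toy classifier to guess gender from first names, in the spirit of the name-to-gender services surveyed in [26]. The names, labels, and model choice are hypothetical, and real systems remain error-prone, which is precisely why the correctness of the inferred attribute matters.

```python
# Minimal sketch (illustrative only) of inferring a protected attribute
# with supervised learning: a toy name-to-gender classifier.
# Names, labels, and the model choice below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_names  = ["maria", "anna", "lucia", "john", "peter", "marco"]
train_labels = ["F", "F", "F", "M", "M", "M"]   # hypothetical labels

# Character n-grams of the name are the only features used.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_names, train_labels)

# The inferred attribute may be wrong, yet it can still drive decisions.
print(model.predict(["julia", "paul"]))
```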
Bibliography
- 1
European Parliament & Council. General data protection regulation. 2016. L119, 4/5/2016, p. 1–88.
- 2
UN General Assembly and others. Universal declaration of human rights. UN General Assembly, 302(2):14–25, 1948.
- 3
The African Union. African Charter on Human and Peoples' Rights. 1981.
- 4
Organization of American States. American Convention on Human Rights Pact of San Jose, Costa Rica (B-32). 1969.
- 5
"European Parliament and the Council". Charter of fundamental rights of the european union. 2007.
- 6
Janneke Gerards. The discrimination grounds of article 14 of the european convention on human rights. Human Rights Law Review, 13(1):99–124, 2013.
- 7
American Psychological Association. Guidelines for psychological practice with lesbian, gay, and bisexual clients. 2011. URL: http://www.apa.org/pi/lgbt/resources/guidelines.aspx.
- 8
Mary Anne C. Case. Disaggregating gender from sex and sexual orientation: The effeminate man in the law and feminist jurisprudence. The Yale Law Journal, 105(1):1–105, 1995.
- 9
FRA. Protection against discrimination on grounds of sexual orientation, gender identity and sex characteristics in the EU – comparative legal analysis. 2015.
- 10
Justin Edwards, Leigh Clark, and Allison Perrone. LGBTQ-AI? Exploring expressions of gender and sexual orientation in chatbots. In CUI, 2:1–2:4. ACM, 2021.
- 11
Rebecca M. Blank, Marilyn Dabady, and Constance F. Citro, editors. Measuring Racial Discrimination - Panel on Methods for Assessing Discrimination. National Academies Press, 2004.
- 12
Nicolas Kayser-Bril. Europeans can’t talk about racist ai systems. they lack the words. 2021. URL: https://algorithmwatch.org/en/europeans-cant-talk-about-racist-ai-systems-they-lack-the-words/.
- 13
European Commission. Tackling multiple discrimination: Practices, policies and laws. 2007. Directorate General for Employment, Social Affairs and Equal Opportunities, Unit G.4. URL: http://ec.europa.eu/social/main.jsp?catId=738&pubId=51.
- 14
ENAR. European network against racism, Fact sheet 44: the legal implications of multiple discrimination. 2011. URL: https://www.enar-eu.org/wp-content/uploads/fs44_-_the_legal_implications_of_multiple_discrimination_final_en.pdf.
- 15
European Commission, Directorate-General for Justice and Consumers and Sandra Fredman. Intersectional discrimination in EU gender equality and non-discrimination law. Publications Office, 2016.
- 16
T. Makkonen. Compound and intersectional discrimination: bringing the experiences of the most marginalized to the fore. 2002. Unpublished manuscript, Institute for Human Rights, Åbo Akademi University.
- 17
Clair A. Kronk and Judith W. Dexheimer. Development of the gender, sex, and sexual orientation ontology: evaluation and workflow. J. Am. Medical Informatics Assoc., 27(7):1110–1115, 2020.
- 18
Christopher L. Dancy and P. Khalil Saucier. AI and blackness: towards moving beyond bias and representation. CoRR, 2021.
- 19
Gertraud Koch and Katharina Kinder-Kurlanda. Source criticism of data platform logics on the internet. Historical Social Research, 45(3):270–287, 2020.
- 20
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. Social data: biases, methodological pitfalls, and ethical boundaries. Frontiers Big Data, 2:13, 2019.
- 21
Nima Shahbazi, Yin Lin, Abolfazl Asudeh, and H. V. Jagadish. A survey on techniques for identifying and resolving representation bias in data. CoRR, 2022.
- 22
California State Legislature. California Consumer Privacy Act of 2018 [1798.100 - 1798.199.100]. 2018.
- 23
Virginia Senate. Virginia consumer data protection act. Effective January 1, 2023.
- 24
Indrė Žliobaitė and Bart Custers. Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models. Artificial Intelligence and Law, 24(2):183–201, 2016.
- 25
European Parliament and the Council. Regulation of the european parliament and of the council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative acts. 2021.
- 26
Lucía Santamaría and Helena Mihaljevic. Comparison and benchmark of name-to-gender inference services. PeerJ Comput. Sci., 4:e156, 2018.
- 27
Alessandro Vinciarelli and Gelareh Mohammadi. A survey of personality computing. IEEE Trans. Affect. Comput., 5(3):273–291, 2014.
- 28
Raphaele Xenidis. Tuning eu equality law to algorithmic discrimination: three pathways to resilience. Maastricht Journal of European and Comparative Law, 27(6):736–758, 2020.
- 29
Solon Barocas. Data mining and the discourse on discrimination. In Data Ethics Workshop, Conference on Knowledge Discovery and Data Mining, 1–4. 2014.
- 30
Sandra Wachter and Brent Mittelstadt. A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2):494–620, 2019.
This entry was written by Alejandra Bringas Colmenarejo and Salvatore Ruggieri.
- 1
Protected group, protected grounds, and prohibited grounds are also used as synonyms for grounds of discrimination.
- 2
For 2021, see the Human Rights Watch World Report.