Fairness, Equity, and Justice by Design#

In brief#

The term fairness is defined as the quality or state of being fair, or a lack of favoritism towards one side. However, fairness can mean different things to different people, in different contexts, and across different disciplines [1]. An unfair Artificial Intelligence (AI) model produces results that are biased in favor of or against particular individuals or groups. The most relevant case of bias is discrimination against protected-by-law social groups. Equity requires that people are treated according to their needs, which does not mean that all people are treated equally [2]. Justice is the “fair and equitable treatment of all individuals under the law” [1].

Abstract#

We first provide motivations and background on fairness, equity, and justice in AI, including warnings about the potential harms of unscrutinized AI tools, especially in socially sensitive decision making, and the related legal obligations. A taxonomy of fair-AI algorithms is then presented, organized by the step of the AI development process in which fairness is checked or controlled for. Next, we summarize the guidelines and draft standards for fair-AI, and the software frameworks supporting the dimension. Finally, the main keywords of the dimension are extensively detailed.

Motivations and background#

Increasingly sophisticated algorithms from AI and Machine Learning (ML) support knowledge discovery from big data of human activity. They enable the extraction of patterns and profiles of human behavior that make extremely accurate predictions possible. Decisions are now partly or fully delegated to such algorithms for a wide range of socially sensitive tasks: personnel selection and wages, credit scoring, criminal justice, assisted diagnosis in medicine, personalization in schooling, sentiment analysis in texts and images, people monitoring through facial recognition, news recommendation, community building in social networks, dynamic pricing of services and products.

The benefits of algorithmic-based decision making cannot be neglected, e.g., procedural regularity: the same procedure is applied to every data subject. However, automated decisions based on profiling or social sorting may be biased [4] for several reasons. Historical data may embed human (cognitive) bias and endemic discriminatory practices, to which the algorithms assign the status of general rules. The use of AI/ML models also reinforces such practices, because data about the model’s decisions become inputs to subsequent model construction (feedback loops). Algorithms may wrongly interpret spurious correlations in data as causation, making predictions on ungrounded bases. Moreover, algorithms pursue the optimization of quality metrics, such as prediction accuracy, which favors good performance on the majority of people at the expense of small groups. Finally, the technical process of designing and deploying algorithms is not yet mature or standardized. Rather, it is full of small and large decisions (sometimes, trial-and-error steps) that may hide bias, such as selecting non-representative data, over-specializing the models, ignoring socio-technical impacts, or deploying models in contexts they were not tested for. These risks are exacerbated by the fact that AI/ML models are often too complex for humans to understand, or not even intelligible, and sometimes rely on randomness or on time-dependent, non-reproducible conditions [5].
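
As a toy illustration of how optimizing a single quality metric can hide harm to small groups, the sketch below (all numbers and names are assumptions made for this example, not taken from the entry) contrasts overall accuracy with accuracy on a minority group.

```python
# Toy illustration (assumed numbers): an aggregate metric can look good
# while a small group is served badly.
import numpy as np

y_true = np.array([1] * 90 + [0] * 10)   # 90 majority-group cases, 10 minority-group cases
y_pred = np.ones(100, dtype=int)         # a model that always predicts the positive class
group  = np.array([0] * 90 + [1] * 10)   # 0 = majority group, 1 = minority group

overall_accuracy  = (y_pred == y_true).mean()                          # 0.90
minority_accuracy = (y_pred[group == 1] == y_true[group == 1]).mean()  # 0.00
print(overall_accuracy, minority_accuracy)
```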

Legal restrictions on automated decision-making are provided by the EU General Data Protection Regulation, which states (Article 22) “the right not to be subject to a decision based solely on automated processing”. Moreover, (Recital 71) “in order to ensure fair and transparent processing in respect of the data subject […] the controller should use appropriate mathematical or statistical procedures […] to prevent, inter alia, discriminatory effects on natural persons”.

Fair algorithms are designed with the purpose of preventing biased outcomes in algorithmic decision making. Quantitative definitions of fairness have been introduced in philosophy, economics, and machine learning over the last 50 years [6, 7, 1], and more than 20 different definitions of fairness have appeared thus far in the computer science literature alone [1, 10]. Four non-mutually exclusive strategies can be devised for the fairness-by-design of AI/ML models.
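
As a minimal illustration of such quantitative definitions, the sketch below computes two widely used group-fairness metrics, the statistical (demographic) parity difference and the equal opportunity difference, with NumPy. The toy labels, predictions, and group memberships are assumptions made for the example.

```python
# Minimal sketch (toy data): two common group-fairness metrics.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # ground-truth labels
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])  # model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership

# Statistical (demographic) parity difference:
# gap in positive-prediction rates between the two groups.
statistical_parity_diff = y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Equal opportunity difference:
# gap in true positive rates between the two groups.
tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
equal_opportunity_diff = tpr_0 - tpr_1

print(statistical_parity_diff, equal_opportunity_diff)  # 0.75 and 1.0 on this toy data
```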

Pre-processing approaches. The first strategy consists of a controlled sanitization of the data used to train an AI/ML model with respect to specific biases. The resulting, less biased data can then be used to train the model. An advantage of pre-processing approaches is that they are independent of the AI/ML model and algorithm at hand.
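
A minimal sketch of one well-known pre-processing idea, reweighing, is given below: each training instance receives a weight so that the protected attribute and the class label become statistically independent in the weighted data. The column names and toy values are assumptions; the resulting weights can be passed to any learner that accepts sample weights.

```python
# Minimal sketch of reweighing (toy data, assumed column names).
import pandas as pd

df = pd.DataFrame({
    "group": [0, 0, 0, 1, 1, 1, 1, 0],  # protected attribute
    "label": [1, 1, 0, 0, 0, 1, 0, 1],  # class label
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)    # P(group)
p_label = df["label"].value_counts(normalize=True)    # P(label)
p_joint = df.groupby(["group", "label"]).size() / n   # P(group, label)

# weight(g, l) = P(g) * P(l) / P(g, l): expected over observed frequency.
df["weight"] = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["label"]] / p_joint[(r["group"], r["label"])],
    axis=1,
)
print(df)
```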

In-processing approaches. The second strategy is to modify the AI/ML algorithm itself by incorporating fairness criteria into model construction, for example by regularizing the optimization objective with a fairness measure. In-processing approaches are being rapidly adopted for many AI/ML problems beyond the original setting of classification, including ranking, clustering, community detection, influence maximization, distribution/allocation of goods, and models over non-structured data such as natural language texts and images. An area lying between pre-processing and in-processing is fair representation learning, where the model inferred from data is not used directly for decision making, but rather as intermediate knowledge.
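
The sketch below illustrates the in-processing idea on a logistic regression model: the training objective is the usual log-loss plus a penalty on the gap in average predicted scores between the two groups. The synthetic data, the penalty weight lam, and all names are assumptions made for the example.

```python
# Minimal sketch of a fairness-regularized training objective (assumed data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # features
group = rng.integers(0, 2, size=200)           # protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam=1.0):
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    parity_gap = (p[group == 0].mean() - p[group == 1].mean()) ** 2  # fairness penalty
    return log_loss + lam * parity_gap         # accuracy traded off against parity

w_fair = minimize(objective, x0=np.zeros(X.shape[1])).x
```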

Post-processing approaches. The third strategy consists of post-processing an AI/ML model once it has been computed, so as to identify and remove unfair decision paths. This typically means altering the model’s internals, for instance by correcting the confidence of classification rules or the probabilities of Bayesian models; it can also involve human experts in the exploration and interpretation of the model or of its decisions. Post-processing becomes necessary for tasks for which no in-processing approach has been explicitly designed for the fairness requirement at hand.
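
As a toy illustration of altering a model’s internals, the sketch below caps the confidence of rules that deny the positive outcome to the protected group at the level observed for the other group. It is a simplified stand-in for published rule-correction methods; the rule representation and values are assumptions made for the example.

```python
# Toy sketch: correct the confidence of rules that penalise the protected group.
rules = [
    {"if": {"income": "low", "group": "B"}, "then": 0, "confidence": 0.90},
    {"if": {"income": "low", "group": "A"}, "then": 0, "confidence": 0.60},
    {"if": {"income": "high"},              "then": 1, "confidence": 0.95},
]

def equalize_negative_confidence(rules, protected="B"):
    """Cap the confidence of negative decisions on the protected group at the
    highest confidence of negative decisions not targeting that group."""
    baseline = max(
        (r["confidence"] for r in rules
         if r["then"] == 0 and r["if"].get("group") != protected),
        default=None,
    )
    for r in rules:
        if baseline is not None and r["then"] == 0 and r["if"].get("group") == protected:
            r["confidence"] = min(r["confidence"], baseline)
    return rules

rules = equalize_negative_confidence(rules)  # first rule drops from 0.90 to 0.60
```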

Prediction-time approaches. The last strategy leaves the construction of the AI/ML model unchanged and instead corrects its predictions at run-time. Proposed approaches include promoting, demoting, or rejecting predictions close to the decision boundary, differentiating the decision boundary itself across social groups, or wrapping a fair classifier on top of a black-box base classifier. Such approaches can also be applied to legacy software, including non-AI/ML algorithms, that cannot be retrained with in-processing approaches or modified with post-processing ones.
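
A minimal sketch of a reject-option style correction is given below: predictions whose score falls within a margin of the decision boundary are promoted for the disadvantaged group and demoted for the others, while the underlying model is left untouched. The threshold, margin, and group encoding are assumptions made for the example.

```python
# Minimal sketch of a prediction-time (reject-option style) correction.
import numpy as np

def fair_predict(scores, group, threshold=0.5, margin=0.1, disadvantaged=1):
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    pred = (scores >= threshold).astype(int)
    uncertain = np.abs(scores - threshold) <= margin  # close to the decision boundary
    pred[uncertain & (group == disadvantaged)] = 1    # promote the disadvantaged group
    pred[uncertain & (group != disadvantaged)] = 0    # demote the advantaged group
    return pred

print(fair_predict([0.45, 0.55, 0.90, 0.48], [1, 0, 0, 1]))  # [1, 0, 1, 1]
```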

Standards and guidelines#

Several initiatives have started to audit, standardize and certify algorithmic fairness, such as the ICO Draft on AI Auditing Framework, the draft IEEE P7003™ Standard on Algorithmic Bias Considerations, the IEEE Ethics Certification Program for Autonomous and Intelligent Systems, and the ISO/IEC TR 24027:2021 Bias in AI systems and AI aided decision making (see also the entry on Auditing AI). Regarding the issue of equality data collection, the European Union High Level Group on Non-discrimination, Equality and Diversity has set up “Guidelines on improving the collection and use of equality data”, and the European Union Agency for Fundamental Rights (FRA) maintains a list of promising practices for equality data collection.

Very few scientific works attempt to investigate the practical applicability of fairness in AI [11, 12]. The issue is challenging and likely to require domain-specific approaches [13]. On the educational side, however, there are hundreds of university courses on technology ethics [14], many of which cover fairness in AI.

Software frameworks supporting the dimension#

The landscape of software libraries and tools is very large. Existing proposals cover almost every step of the data-driven AI development process (data collection, data processing, model development, model deployment, model monitoring), every type of AI model (classification, regression, clustering, ranking, community detection, influence maximization, distribution/allocation of goods), and every type of data (tabular, text, images, videos). Reviews and critical discussions of the gaps in a few fairness toolkits can be found in [15, 16].
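
As one concrete example, the snippet below (illustrative only, with toy data) uses the open-source Fairlearn toolkit, one of the libraries examined in such reviews, to report accuracy and selection rate per group together with the largest between-group gap.

```python
# Illustrative use of the Fairlearn toolkit on toy data.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true, y_pred=y_pred, sensitive_features=sex,
)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest between-group gap for each metric
```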

Main keywords#

  • Auditing AI: Auditing AI aims to identify and address the possible risks and impacts of AI systems, while ensuring that they remain robust and trustworthy (see Accountability).

  • Bias: Bias refers to an inclination towards or against a particular individual, group, or sub-group. AI models may inherit biases from training data or introduce new forms of bias.

  • Discrimination & Equity: Forms of bias that count as discrimination against social groups or individuals should be avoided, both from legal and ethical perspectives. Discrimination can be direct or indirect, intentional or unintentional.

  • Fairness notions and metrics: The term fairness is defined as the quality or state of being fair, or a lack of favoritism towards one side. Notions of fairness, and the quantitative measures of them (fairness metrics), can be distinguished by whether they focus on individuals, groups, or sub-groups.

  • Fair Machine Learning: Fair Machine Learning models take into account the issues of bias and fairness. Approaches can be categorized as pre-processing, which transforms the input data; in-processing, which modifies the learning algorithm; and post-processing, which alters the model’s internals or its decisions.

  • Grounds of Discrimination: International and national laws prohibit discriminating on some explicitly defined grounds, such as race, sex, religion, etc. They can be considered in isolation, or interacting, giving rise to multiple discrimination and intersectional discrimination.

  • Justice: Justice encompasses three different perspectives: (1) fairness understood as the fair treatment of people, (2) rightness as the quality of being fair or reasonable, and (3) a legal system, the scheme or system of law. Justice can be distinguished between substantive and procedural.

  • Segregation: Social segregation refers to the separation of groups on the grounds of personal or cultural traits. Separation can be physical (e.g., in schools or neighborhoods) or virtual (e.g., in social networks).

Bibliography#

[1] Deirdre K. Mulligan, Joshua A. Kroll, Nitin Kohli, and Richmond Y. Wong. This thing called fairness: disciplinary confusion realizing a value in technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1–36, 2019.

[2] Martha Minow. Equality vs. Equity. American Journal of Law and Equality, 1:167–193, 2021.

[3] Jeffrey Lehman, Shirelle Phelps, and others. West's Encyclopedia of American Law. Thomson/Gale, 2004.

[4] Eirini Ntoutsi, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, Franco Turini, Symeon Papadopoulos, Emmanouil Krasanakis, Ioannis Kompatsiaris, Katharina Kinder-Kurlanda, Claudia Wagner, Fariba Karimi, Miriam Fernández, Harith Alani, Bettina Berendt, Tina Kruegel, Christian Heinze, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, and Steffen Staab. Bias in data-driven artificial intelligence systems - an introductory survey. WIREs Data Mining Knowl. Discov., 2020.

[5] Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. Accountable algorithms. U. of Penn. Law Review, 165:633–705, 2017.

[6] Ben Hutchinson and Margaret Mitchell. 50 years of test (un)fairness: lessons for machine learning. In FAT, 49–58. ACM, 2019.

[7] Reuben Binns. Fairness in machine learning: lessons from political philosophy. In FAT, volume 81 of Proceedings of Machine Learning Research, 149–159. PMLR, 2018.

[8] Andrea Romei and Salvatore Ruggieri. A multidisciplinary survey on discrimination analysis. Knowl. Eng. Rev., 29(5):582–638, 2014.

[9] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Comput. Surv., 54(6):115:1–115:35, 2021.

[10] Indre Zliobaite. Measuring discrimination in algorithmic decision making. Data Min. Knowl. Discov., 31(4):1060–1089, 2017.

[11] Karima Makhlouf, Sami Zhioua, and Catuscia Palamidessi. On the applicability of machine learning fairness notions. SIGKDD Explor., 23(1):14–23, 2021.

[12] Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre Kreitmann, Jonathan Bischof, and Ed H. Chi. Putting fairness principles into practice: challenges, metrics, and improvements. In AIES, 453–459. ACM, 2019.

[13] Michelle Seng Ah Lee and Luciano Floridi. Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs. Minds Mach., 31(1):165–191, 2021.

[14] Casey Fiesler, Natalie Garrett, and Nathan Beard. What do we teach when we teach tech ethics? A syllabi analysis. In SIGCSE, 289–295. ACM, 2020.

[15] Michelle Seng Ah Lee and Jatinder Singh. The landscape and gaps in open source fairness toolkits. In CHI, 699:1–699:13. ACM, 2021.

[16] Brianna Richardson and Juan E. Gilbert. A framework for fairness: a systematic review of existing fair AI solutions. CoRR, 2021.

[17] Salvatore Ruggieri. Algorithmic fairness. In Elgar Encyclopedia of Law and Data Science. Edward Elgar Publishing Limited, 2022.

This entry was written by Salvatore Ruggieri.


The section “Motivations and background” was readapted from [17].