Human Agency and Oversight
In brief
AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems act as enablers of a democratic, flourishing and equitable society by supporting the user’s agency and fostering fundamental rights, while also allowing for human oversight [1].
Abstract
We first provide motivations and background on human agency and human oversight, explaining the various shades of this ethical dimension, such as the “human-in-the-loop” paradigm. Next, we summarize the guidelines and draft standards for human agency, human control, and human oversight, as well as the software frameworks supporting this dimension.
Motivations and background
AI Actors should take the necessary measures to preserve the autonomy and free will of humans in decision-making processes, including their right to choose, and to preserve human intellectual abilities in general as an intrinsic value and a system-forming factor of modern civilization. AI Actors should also forecast possible negative consequences for the development of human cognitive abilities at the earliest stages of AI system creation and refrain from developing AI systems that purposefully cause such consequences [1].
According to the Ethics Guidelines of the High-Level Expert Group on AI [1], there are three different aspects to consider when discussing this ethical dimension. We will analyze them one by one.
Fundamental rights. Like many technologies, AI systems can both enable and hamper fundamental rights. They might benefit people, for instance, by helping them track their personal data or by increasing access to education, hence supporting the right to education. However, given the reach, capacity, and opacity of many AI systems, they can also negatively affect fundamental rights. In situations where such risks exist, a fundamental rights impact assessment should be undertaken. This should be done prior to the system’s development and include an evaluation of whether those risks can be reduced or justified as necessary in a democratic society in order to respect the rights and freedoms of others. Design approaches such as value-sensitive design and participatory design can help bridge the gap between the societal context and the perspectives of stakeholders impacted by a given technology, on the one hand, and the technical design requirements, on the other [2]. Moreover, mechanisms should be put in place to receive external feedback regarding AI systems that potentially infringe on fundamental rights.
Human agency. Users should be able to make informed, autonomous decisions regarding AI systems. They should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system. AI systems should support individuals in making better, more informed choices in accordance with their goals. These requirements have been studied in different forms under the concepts of meaningful human control over AI systems [3], contestable AI (Alfrink et al., 2022), and reflection machines [4]. AI systems can sometimes be deployed to shape and influence human behaviour through mechanisms that may be difficult to detect, since they may harness subconscious processes, including various forms of unfair manipulation, deception, herding and conditioning, all of which may threaten individual autonomy. Besides technical aspects, institutional and societal considerations related to the context and the stakeholders interacting with the AI system must be accounted for in order to get the full picture of the potential effects of AI systems on human autonomy [5]. To achieve trustworthy AI, the overall principle of user autonomy must be central to the system’s functionality. Key to this is the right not to be subject to a decision based solely on automated processing when this produces legal effects on users or similarly significantly affects them.
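To make the last requirement concrete, the following minimal sketch (in Python) shows one way a decision pipeline could avoid decisions based solely on automated processing and support contestation by the affected person. All names here (`Decision`, `decide`, `human_review`, `contest`) are hypothetical illustrations under our own assumptions, not an established framework or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    rationale: str          # human-readable explanation, needed for contestation
    has_legal_effect: bool  # does the outcome legally or similarly significantly affect the person?

def decide(model: Callable[[dict], Decision],
           case: dict,
           human_review: Callable[[Decision, dict], Decision]) -> Decision:
    """Route any decision with legal or similarly significant effects through
    a human reviewer, so it is never based solely on automated processing."""
    proposal = model(case)
    if proposal.has_legal_effect:
        # A human must confirm, amend, or overturn the automated proposal.
        return human_review(proposal, case)
    return proposal

def contest(decision: Decision, grounds: str) -> dict:
    """Record a contestation by the affected person; a (hypothetical) appeals
    process would pick this up for independent human re-examination."""
    return {
        "contested_outcome": decision.outcome,
        "rationale_shown": decision.rationale,
        "grounds": grounds,
        "status": "pending_human_reexamination",
    }
```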
Human oversight. Human oversight helps ensure that an AI system does not undermine human autonomy or cause other adverse effects. Oversight may be achieved through governance mechanisms such as a “human-in-the-loop” (HITL), “human-on-the-loop” (HOTL), or “human-in-command” (HIC) approach. HITL refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention during the design cycle of the system and for monitoring the system’s operation. HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by the system. Moreover, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate. Oversight mechanisms can be required in varying degrees to support other safety and control measures, depending on the AI system’s application area and potential risk. All other things being equal, the less oversight a human can exercise over an AI system, the more extensive the testing and the stricter the governance it requires.
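The three paradigms differ mainly in where the point of human intervention sits. The sketch below contrasts them in Python; the `system` and `overseer` objects and their methods (`propose`, `review`, `approves_deployment`, `wants_to_intervene`) are hypothetical placeholders for the interfaces a real deployment would expose.

```python
from enum import Enum, auto

class OversightMode(Enum):
    HITL = auto()  # human-in-the-loop: a human vets every decision cycle
    HOTL = auto()  # human-on-the-loop: a human monitors and may intervene
    HIC = auto()   # human-in-command: a human decides whether to use the system at all

def run_with_oversight(system, cases, mode, overseer):
    """Illustrative dispatch of decisions under a chosen oversight mode."""
    if mode is OversightMode.HIC and not overseer.approves_deployment(system):
        return []  # the human in command may decide not to use the system here
    results = []
    for case in cases:
        proposal = system.propose(case)
        if mode is OversightMode.HITL:
            # every decision cycle passes through a human before taking effect
            proposal = overseer.review(proposal, case)
        results.append(proposal)
        if mode is OversightMode.HOTL and overseer.wants_to_intervene(results):
            # the monitoring human can halt or correct the running system
            break
    return results
```

As the guidelines note, routing every decision cycle through a human (HITL) is often neither possible nor desirable, which is why HOTL and HIC shift the human’s role to monitoring the running system and to deciding whether to deploy it at all.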
Standards and guidelines
Many AI practitioners already have assessment tools and software development processes in place to ensure compliance with non-legal standards as well. In addition, the European Commission has emphasised the importance of adopting Artificial Intelligence (AI) systems with a human-centric approach to ensure their safe deployment. This human-centric approach requires implementing AI systems safely and reliably to benefit humanity, with the aim of protecting human rights and dignity by keeping a “human-in-the-loop”.
One of the best-known tools for the self-assessment of AI systems is the Assessment List for Trustworthy Artificial Intelligence (ALTAI), which poses 19 questions on human agency and human oversight. These deal with the effect of AI systems that are aimed at guiding, influencing or supporting humans in decision-making processes; with the effect on human perception and expectation when people are confronted with AI systems that “act” like humans; and with the effect of AI systems on human affection, trust and (in)dependence. The document explicitly states that the assessment list need not be carried out as a stand-alone exercise, but can be incorporated into existing practices such as those mentioned above.
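As a rough illustration, such an assessment list can be encoded so that the answers feed into an organisation’s existing review workflow, in line with the document’s suggestion that the list be incorporated into existing practices. The questions below merely paraphrase the themes mentioned above, not the official ALTAI wording, and the flagging rule is a hypothetical convention.

```python
# Paraphrased themes, not the official ALTAI wording (the real list has 19 questions).
AGENCY_OVERSIGHT_QUESTIONS = [
    "Is the AI system aimed at guiding, influencing or supporting humans in decision making?",
    "Could end users perceive the AI system as 'acting' like a human?",
    "Could the AI system affect human affection, trust or (in)dependence?",
    # ... the remaining questions would follow ...
]

def flag_for_review(answers: dict) -> list:
    """Return every question answered 'yes' or left unanswered, so it can be
    escalated within the organisation's existing assessment process."""
    return [q for q in AGENCY_OVERSIGHT_QUESTIONS
            if answers.get(q, "unanswered") != "no"]

# Example: the first question is answered "yes"; it and the unanswered ones are flagged.
flags = flag_for_review({AGENCY_OVERSIGHT_QUESTIONS[0]: "yes"})
```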
Other important standards are the ISO/IEC norms, in particular ISO/IEC TS 8200, which addresses the controllability of automated artificial intelligence systems, and ISO/IEC AWI 42105, which is being developed specifically to provide guidance on human control and monitoring of AI systems (i.e., human oversight).
Last but not least, Article 14 of the AI Act concerns human oversight, aiming to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. However, an unforeseen consequence pointed out in some critical articles [6] is that Article 14(1) of the AI Act could create a legal loophole that justifies shifting responsibilities and accountabilities from one party (users of AI systems working as human overseers) to another (designers of AI systems). This ambiguity could create legal challenges, as human overseers could argue that Article 14 of the AI Act is not intended to regulate them. Moreover, the authors of [6] highlight that Article 14 of the EU AI Act provides little detail on the human overseers’ responsibilities. This lack of clear guidelines on the responsibility of human overseers, and on what constitutes meaningful human oversight under the proposal, arguably undermines a human-centric approach.
Software frameworks supporting the dimension
To the best of our knowledge, there are no software frameworks supporting human autonomy and oversight. However, besides the aforementioned guidelines, there are other guidelines and recommendations, from both academia and the private sector, such as those provided by:
Digital Future Society. In this report, the authors first put the human oversight dimension in context by defining different typologies of human-algorithm interaction, then report some case studies to better illustrate the problem, and finally list some policy recommendations, such as defining the minimum level of human involvement, being aware of the context-dependency of automation, and adequately training developers and operators.
IBM. In this report, the authors survey the major standards on human oversight, a wide variety of case studies in which AI systems can be applied and the ethical risks (e.g., overtrust) that can occur, and a list of best practices that can be applied to mitigate such risks.
Danone and Datacraft. In this report, the authors define guidelines depending on the phase of the AI system lifecycle (specifically, conception or use).
Main Keywords
Meaningful human control: the notion that aims to generalize the traditional concept of operational control over technological artifacts to artificially intelligent systems. It implies that artificial systems should not make morally consequential decisions on their own, without appropriate control by responsible humans.
Causal responsibility: the notion of responsibility that is concerned with actual causation.
Bibliography
- 1
International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences. Linking artificial intelligence principles (LAIP). URL: https://www.linking-ai-principles.org/term/174 (visited on 2024-04-23).
- 2
Evgeni Aizenberg and Jeroen van den Hoven. Designing for human rights in AI. Big Data & Society, 7(2):2053951720949566, 2020.
- 3
Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn M Jonker, et al. Meaningful human control: actionable properties for AI system development. AI and Ethics, 3(1):241–255, 2023.
- 4
NAJ Cornelissen, RJM van Eerdt, HK Schraffenberger, and Willem FG Haselager. Reflection machines: increasing meaningful human control over decision support systems. Ethics and Information Technology, 24(2):19, 2022.
- 5
Arto Laitinen and Otto Sahlgren. AI systems and respect for human autonomy. Frontiers in Artificial Intelligence, 4:151, 2021.
- 6
Holistic AI. Key issues: human oversight. 2024. URL: https://www.euaiact.com/key-issue/4 (visited on 2024-05-20).
This entry was adapted from the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence by Francesca Pratesi and Luciano C. Siebert.