Meaningful human control#

In brief#

Meaningful human control is a notion that generalizes the traditional concept of operational control over technological artifacts to artificially intelligent systems. It implies that artificial systems should not make morally consequential decisions on their own, without appropriate control by responsible humans.

More in detail#

The notion of meaningful human control has its origins in the discussions on lethal autonomous weapon systems (LAWS), specifically with regard to the life-or-death decisions that such systems could in principle make. Avoiding the ethical issues related to autonomous decision making by artificial agents requires that humans, and only humans, have control of and are accountable for the use of lethal force [1]. The concrete implications of this requirement are still debated, with proposals ranging from calls for a full ban of LAWS [2] to suggestions on governance, implementation, and use of such systems that can contribute to meaningful human control (e.g. [3], [4]).

While ethical issues associated with the lack of human control are perhaps most apparent for autonomous weapon systems, they extend far beyond the military domain, to a wider class of human-AI systems that make decisions with moral implications. At the time of writing, researchers have approached meaningful human control in the contexts of automated driving systems [5], [6], medical decision support systems [7], and unmanned aerial vehicles [8], among other domains.

Many of these domain-specific operationalizations rely on a philosophical account of meaningful human control proposed by [9]. This account builds on the concept of “guidance control” [10] and provides two necessary conditions for meaningful human control. The tracking condition requires that the decision-making system tracks and responds to all human reasons (i.e., values, norms, intentions) relevant in the given circumstances. The tracing condition requires that any action or decision of the human-AI system be traceable to at least one human within the system who has a proper moral understanding of the situation and of the effects of the system in that situation.

Tracking and tracing, as well as several alternative domain-specific accounts, provide conceptual frameworks for meaningful human control. Making these concepts less vague and more applicable to design and engineering practice is, however, very challenging [11]. In [12], an attempt to close this gap between theory and practice is made by proposing four actionable properties that can be addressed throughout a system's lifecycle:

  • Property 1: A system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate.

  • Property 2: Humans and AI agents within the system should have appropriate and mutually compatible representations.

  • Property 3: Responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system.

  • Property 4: There should be explicit links between the actions of the AI agents and actions of humans who are aware of their moral responsibility.
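As a rough illustration of how Property 4 (and the tracing condition it refines) might be made concrete in software, the following Python sketch links each system action to the humans responsible for it. All class and function names here are hypothetical and are not drawn from the cited works; this is a minimal sketch of the idea, not an implementation of any published operationalization.

```python
from dataclasses import dataclass, field

@dataclass
class HumanOperator:
    """A human within the human-AI system."""
    name: str
    # The tracing condition requires a proper moral understanding of the
    # situation and of the system's effects in that situation.
    has_moral_understanding: bool

@dataclass
class SystemAction:
    """An action or decision taken by the human-AI system."""
    description: str
    # Property 4: an explicit link from the action to responsible humans.
    responsible_humans: list = field(default_factory=list)

def satisfies_tracing(action: SystemAction) -> bool:
    """Check that the action traces to at least one human who is aware of
    their moral responsibility (a simplified reading of the tracing
    condition)."""
    return any(h.has_moral_understanding for h in action.responsible_humans)

operator = HumanOperator("remote supervisor", has_moral_understanding=True)
braking = SystemAction("emergency braking decision",
                       responsible_humans=[operator])
orphan = SystemAction("unattributed lane change")

print(satisfies_tracing(braking))  # True: traces to an aware human
print(satisfies_tracing(orphan))   # False: no responsible human linked
```

Of course, real tracing is not a boolean flag on a data record; the sketch only shows the kind of explicit, auditable link between AI actions and responsible humans that Property 4 calls for.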



References#

[1] Article 36. Key areas for debate on autonomous weapons systems: memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems. Technical Report, Article 36, 2014.

[2] Article 36. Killing by machine: key issues for understanding meaningful human control. Technical Report, Article 36, 2015.

[3] Heather M. Roff and Richard Moyes. Meaningful human control, artificial intelligence and autonomous weapons. In Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain Conventional Weapons, 2016.

[4] M. C. Horowitz and P. Scharre. Meaningful human control in weapon systems: a primer. Technical Report, Center for a New American Security, 2015.

[5] Daniël D. Heikoop, Marjan Hagenzieker, Giulio Mecacci, Simeon Calvert, Filippo Santoni de Sio, and Bart van Arem. Human behaviour with automated driving systems: a quantitative framework for meaningful human control. Theoretical Issues in Ergonomics Science, 20(6):711–730, 2019.

[6] Simeon C. Calvert, Bart van Arem, Daniël D. Heikoop, Marjan Hagenzieker, Giulio Mecacci, and Filippo Santoni de Sio. Gaps in the control of automated vehicles on roads. IEEE Intelligent Transportation Systems Magazine, 13(4):146–153, 2020.

[7] Matthias Braun, Patrik Hummel, Susanne Beck, and Peter Dabrock. Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics, 47(12):e3, 2021.

[8] Marc Steen, Jurriaan van Diggelen, Tjerk Timan, and Nanda van der Stap. Meaningful human control of drones: exploring human–machine teaming, informed by four different ethical perspectives. AI and Ethics, pages 1–13, 2022.

[9] Filippo Santoni de Sio and Jeroen van den Hoven. Meaningful human control over autonomous systems: a philosophical account. Frontiers in Robotics and AI, 5:15, 2018.

[10] John Martin Fischer and Mark Ravizza. Responsibility and Control: A Theory of Moral Responsibility. Cambridge University Press, 1998.

[11] Rebecca Crootof. A meaningful floor for meaningful human control. Temple International & Comparative Law Journal, 30:53, 2016.

[12] Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn M. Jonker, and others. Meaningful human control: actionable properties for AI system development. AI and Ethics, pages 1–15, 2022.

This entry was written by Arkady Zgonnikov and Luciano C Siebert.