Causal Responsibility#

In brief#

Causal responsibility is the notion of responsibility that is concerned with actual causation [1, 2, 3].

More in detail#

Causal responsibility is a notion of responsibility that captures the causal influence that an event, or an agent’s action or omission, has on a particular outcome or state of affairs [1, 2, 3]. In the context of human-AI systems, human causal responsibility captures the actual causal influence the human has on an outcome while interacting with the AI system.

Apart from causal responsibility, [1] identifies five other notions of responsibility: capacity responsibility, role responsibility, outcome responsibility, virtue responsibility and legal liability. Vincent goes on to explain how causal responsibility is a prerequisite for outcome responsibility, which is in turn a prerequisite for legal liability. Causal responsibility is also a necessary condition for ascribing praise, blame or moral responsibility to the actions of an agent [4, 5]. When ascribing praise or blame to an agent’s actions, considerations about the agent’s intentions, epistemic conditions and roles are taken into account in addition to causal influence. Nonetheless, causal responsibility plays a crucial role in debates about responsibility and legal liability [6, 7].

Counterfactual reasoning, which is predominantly associated with determining actual causality, has also been popular in approaches for evaluating causal responsibility [2, 3, 8, 9, 10]. In counterfactual reasoning, we check whether an agent’s action is pivotal for the outcome, i.e., whether changing the agent’s action would prevent the outcome from happening. [2] defines a graded metric for causal responsibility based on the number of changes that have to be made to make an agent’s action pivotal for an outcome, while [11] proposes a notion of causal responsibility in spatial settings based on how one agent restricts the feasible action space of another agent in a concurrent game setting. Related models of group responsibility have also been suggested, primarily focused on the ability of groups of agents to cause or prevent an outcome or state of affairs [12, 13]. Concerning human-AI interaction, Douer and Meyer have taken an information-theoretic approach to propose a metric for human causal responsibility based on how human actions affect the distribution of the outcomes of the human-AI system [].
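To make the counterfactual reading concrete, here is a minimal Python sketch (not taken from any of the cited papers) of the pivotality check and the graded metric of [2], illustrated on a hypothetical majority-vote model: an agent’s degree of responsibility for an outcome is 1/(k+1), where k is the smallest number of other agents whose actions must change, without altering the outcome, before the agent’s own action becomes pivotal. The functions `outcome`, `is_pivotal` and `degree_of_responsibility` are illustrative names introduced for this sketch.

```python
from itertools import combinations

def outcome(votes):
    """Toy structural model: the outcome is 1 if a strict majority votes 1."""
    return int(sum(votes) > len(votes) / 2)

def is_pivotal(votes, agent):
    """Would flipping this agent's vote, all else fixed, change the outcome?"""
    flipped = list(votes)
    flipped[agent] = 1 - flipped[agent]
    return outcome(flipped) != outcome(votes)

def degree_of_responsibility(votes, agent):
    """Return 1 / (k + 1), where k is the size of the smallest set of *other*
    agents whose votes must be flipped (without changing the outcome) to make
    `agent` pivotal; return 0 if no such set exists."""
    others = [i for i in range(len(votes)) if i != agent]
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            changed = list(votes)
            for i in subset:
                changed[i] = 1 - changed[i]
            if outcome(changed) == outcome(votes) and is_pivotal(changed, agent):
                return 1 / (k + 1)
    return 0.0

# 11-0 vote: no single voter is pivotal, yet flipping five other votes makes
# any given voter critical, so each bears a graded responsibility of 1/6.
votes = [1] * 11
print(degree_of_responsibility(votes, agent=0))  # 0.1666...
```

The same brute-force search works for any Boolean outcome function over agents’ choices, although [2] defines the notion over structural causal models rather than simple voting games.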

Disentangling causal responsibility is tricky when it comes to complex human-AI systems [14]. Nonetheless, understanding the causal influences of different agents is crucial for identifying who should change their behaviour to better satisfy relevant human values and ethical principles. For systems to be under meaningful human control, we should be able to trace responsibility to the right human(s) and to ensure that the right humans have control over the outcomes [15, 16]. Teaching AI systems to reason about responsibility is also crucial for making them trustworthy [17].

Bibliography#

1

Nicole A. Vincent. A Structured taxonomy of responsibility concepts. In Nicole A. Vincent, Ibo van de Poel, and Jeroen van den Hoven, editors, Moral Responsibility, Library of Ethics and Applied Philosophy, pages 15–35. Springer, Springer Nature, United States, 2011. doi:10.1007/978-94-007-1878-4_2.

2

Hana Chockler and Joseph Y. Halpern. Responsibility and Blame: A Structural-Model Approach. Journal of Artificial Intelligence Research, 22:93–115, October 2004. doi:10.1613/jair.1391.

3

Florian Engl. A Theory of Causal Responsibility Attribution. SSRN Electronic Journal, 2018. doi:10.2139/ssrn.2932769.

4

M. Braham and M. Van Hees. An Anatomy of Moral Responsibility. Mind, 121(483):601–634, July 2012. doi:10.1093/mind/fzs081.

5

Ibo R. van de Poel and Lambèr M.M. Royakkers. Ethics, Technology, and Engineering : An Introduction. Wiley-Blackwell, United States, 2011. ISBN 978-1-4443-3095-3.

6

H.L.A. Hart and John Gardner. Punishment and Responsibility. Oxford University Press, March 2008. ISBN 978-0-19-953477-7. doi:10.1093/acprof:oso/9780199534777.001.0001.

7

H. L. A. Hart and Tony Honoré. Causation in the Law. Oxford University Press, May 1985. ISBN 978-0-19-825474-4. doi:10.1093/acprof:oso/9780198254744.001.0001.

8

Hein Duijf. A Logical Study of Moral Responsibility. Erkenntnis, September 2023. doi:10.1007/s10670-023-00730-2.

9

Stelios Triantafyllou, Adish Singla, and Goran Radanovic. Actual Causality and Responsibility Attribution in Decentralized Partially Observable Markov Decision Processes. August 2022. arXiv:2204.00302.

10

E. Lorini, D. Longin, and E. Mayor. A logical analysis of responsibility attribution: emotions, individuals and collectives. Journal of Logic and Computation, 24(6):1313–1339, December 2014. doi:10.1093/logcom/ext072.

11

Ashwin George, Luciano Cavalcante Siebert, David Abbink, and Arkady Zgonnikov. Feasible Action-Space Reduction as a Metric of Causal Responsibility in Multi-Agent Spatial Interactions. Accepted at ECAI 2023, 2023. arXiv:2305.15003.

12

Vahid Yazdanpanah and Mehdi Dastani. Distant Group Responsibility in Multi-agent Systems. In Matteo Baldoni, Amit K. Chopra, Tran Cao Son, Katsutoshi Hirayama, and Paolo Torroni, editors, PRIMA 2016: Principles and Practice of Multi-Agent Systems, volume 9862, pages 261–278. Springer International Publishing, Cham, 2016. doi:10.1007/978-3-319-44832-9_16.

13

Vahid Yazdanpanah, Sebastian Stein, Enrico H. Gerding, and Nicholas R. Jennings. Applying strategic reasoning for accountability ascription in multiagent teams. In Huáscar Espinoza, John A. McDermid, Xiaowei Huang, Mauricio Castillo-Effen, Xin Cynthia Chen, José Hernández-Orallo, Seán Ó hÉigeartaigh, Richard Mallah, and Gabriel Pedroza, editors, Proceedings of the Workshop on Artificial Intelligence Safety 2021 Co-Located with the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), Virtual, August, 2021, volume 2916 of CEUR Workshop Proceedings. CEUR-WS.org, 2021.

14

Virginia Dignum. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer International Publishing, Cham, 2019. ISBN 978-3-030-30370-9 978-3-030-30371-6. doi:10.1007/978-3-030-30371-6.

15

Filippo Santoni de Sio and Jeroen van den Hoven. Meaningful Human Control over Autonomous Systems: A Philosophical Account. Frontiers in Robotics and AI, 5:15, February 2018. doi:10.3389/frobt.2018.00015.

16

Frank Flemisch, Matthias Heesen, Tobias Hesse, Johann Kelsch, Anna Schieben, and Johannes Beller. Towards a dynamic balance between humans and automation: authority, ability, responsibility and control in shared and cooperative control situations. Cognition, Technology & Work, 14(1):3–18, March 2012. doi:10.1007/s10111-011-0191-6.

17

Mehdi Dastani and Vahid Yazdanpanah. Responsibility of AI Systems. AI & Society, 38(2):843–852, April 2023. doi:10.1007/s00146-022-01481-4.

This entry was written by George Ashwin.