Accountability#

In Brief#

Accountability is an ethical aspect studied in the TAILOR project to ensure that a given actor or actors can render an account of the actions of an AI system. The concept of accountability is closely related to the concept of responsibility.

Abstract#

According to [1], the requirement of accountability complements the other ethical dimensions, and is closely linked to the principle of fairness. It necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.

Motivation and Background#

Whenever something goes wrong, there is often a call to determine who is responsible for the wrongdoing. Responsibility is a broader topic that admits different conceptualizations; in this context, however, it usually refers to one’s obligation to render an account of one’s actions and their consequences, i.e. accountability. Accountability can be defined as a form of “passive responsibility” (or backward-looking responsibility), in the sense of being held to account for, or having to justify towards others, an action or consequence that happened in the past [3]. Although accountability implies having to account for one’s actions, if the account given is considered insufficient, one might still be considered blameworthy and thus deserving of censure or blame [4].

AI systems bring particular concerns with respect to accountability, as understanding how the systems work can be challenging, and commercial considerations can conceal broader organisational processes [2]. Moreover, even if one “understands” the inner workings of the algorithms used, the outcomes might still not be predictable, complicating accountability even further [5].

Two frequently discussed examples used to illustrate the accountability problem concern self-driving cars and medical decisions.

Imagine a self-driving car that hits a pedestrian. Who should account for or justify the system’s actions? The person in the car, who might not have been able or willing to supervise the system? The manufacturer, who designed the system and might therefore be considered solely responsible for the behaviour of its products? The programmer who did not correctly implement all the necessary checks? The manufacturer of the sensor that failed to detect the pedestrian? The person who conducted the tests and did not foresee that particular circumstance?

Or consider a wrong diagnosis following an MRI: do we expect an account from the doctor who did not spot the error, or from those behind the AI system (at any of the levels listed in the previous example)?

Given these difficulties, much of the effort put into defining accountability concerns the Auditability principle [1]:

Auditability entails the enablement of the assessment of algorithms, data and design processes. This does not necessarily imply that information about business models and intellectual property related to the AI system must always be openly available. Evaluation by internal and external auditors, and the availability of such evaluation reports, can contribute to the trustworthiness of the technology. In applications affecting fundamental rights, including safety-critical applications, AI systems should be able to be independently audited.

Other aspects taken into consideration in the High-Level Expert Group report [1] are:

  • Minimisation and reporting of negative impacts, i.e., assessing, documenting and minimising the potential negative impacts of AI systems, for instance through the use of impact assessments both prior to and during the development, deployment and use of AI systems.

  • Trade-offs to tackle tensions that may arise between requirements. If conflict arises, trade-offs should be explicitly acknowledged and evaluated in terms of their risk to ethical principles, including fundamental rights. Any decision about which trade-off to make should be reasoned and properly documented. If no ethically acceptable trade-off can be identified, [1] clearly states that the development, deployment and use of the AI system should not proceed in that form.

  • Redress must be ensured when things go wrong, with particular attention to vulnerable persons or groups. The importance of redress is also advocated by the European Union Agency for Fundamental Rights [6], which places particular emphasis on collective redress (a way in which victims can join forces to overcome obstacles), and by the Council of Europe [7], which specifies that a citizen should not necessarily have to pursue legal action straight away and that remedies should be available, known, accessible, affordable and capable of providing appropriate redress.

Guidelines#

Several guidelines and checklists have been proposed to increase accountability for the actions of AI systems, by both EU authorities and industry, such as:

  • The Assessment List for Trustworthy Artificial Intelligence (ALTAI) [8]
    This Assessment List (ALTAI) is firmly grounded in the protection of people’s fundamental rights set out in the High-Level Expert Group report [1]. It is probably the most complete checklist so far and the reference point for all other checklists.
    The ALTAI checklist helps organisations understand what Trustworthy AI is, in particular what risks an AI system might generate and how to minimise those risks while maximising the benefit of AI. It is intended to help organisations identify how proposed AI systems might generate risks, and whether and what kind of active measures may need to be taken to avoid and minimise those risks. It aims at raising awareness of the potential impact of AI on society, the environment, consumers, workers and citizens (in particular children and people belonging to marginalised groups), and at encouraging multidisciplinarity and the involvement of all relevant stakeholders. It helps to gain insight into whether meaningful and appropriate solutions or processes to achieve adherence to the seven requirements of [1] are already in place or need to be put in place, for instance through internal guidelines, governance processes, etc.
    For each requirement, this Assessment List for Trustworthy AI (ALTAI) provides introductory guidance and relevant definitions in the Glossary. The online version of this assessment list contains additional explanatory notes for many of the questions.

  • Getting the Future Right [9]
    The European Union Agency for Fundamental Rights published a report providing a fundamental rights-based analysis of concrete ‘use cases’. The report illustrates some of the ways in which companies and the public sector in the EU are looking to use AI to support their work, and whether, and how, they are taking fundamental rights considerations into account. In this way, it contributes empirical evidence, analysed from a fundamental rights perspective, that can inform EU and national policymaking efforts to regulate the use of AI tools.

  • ICO’s guidance on the use of artificial intelligence [10, 11]
    The UK data protection authority has been very active on all topics related to accountability, publishing guidance that is constantly updated. In [10], the focus of accountability is on compliance with data protection law and on the ability to minimise risks. The guidance explores important aspects such as leadership and oversight (e.g., the structure of the organisation under analysis), transparency, privacy, and security; a whole section is dedicated to the Data Protection Impact Assessment.
    In [11], a short checklist is presented, even though the document itself highlights that “Accountability is not a box-ticking exercise” but rather about taking responsibility for what you are doing with personal data, treating this as an opportunity to develop and sustain people’s trust.

  • IBM’s FactSheets [12]
    This proposal starts from Supplier’s Declarations of Conformity (SDoCs), documents widely used across many industries, even though they are usually not legally required, to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs aim at capturing and quantifying various aspects of the product and its development to make it worthy of consumers’ trust. The FactSheets proposed in [12] are intended to play the same role for AI services: the authors envision such documents containing purpose, performance, safety, security, and provenance information, to be completed by AI service providers for examination by consumers.
    A FactSheet will contain sections on all relevant attributes of an AI service, such as intended use, performance (including appropriate accuracy or risk measures along with timing information), safety, explainability, algorithmic fairness, and security and robustness. Moreover, the FactSheet should list how the service was created, trained, and deployed, what scenarios it was tested on, how it may respond to untested scenarios, guidelines that specify what tasks it should and should not be used for, and any ethical concerns regarding its use. Hence, FactSheets help prevent overgeneralisation and unintended use of AI services by solidly grounding them with metrics and usage scenarios (a minimal sketch of such a document is given after this list). FactSheets are a particularly interesting example because here a private company highlights the need for ethical procedures, standards, and certifications.

  • Microsoft’s guideline for human-AI interaction [13]
    In this paper, the authors identified 18 guidelines, related to different phases of the use of an AI system, and 10 different kinds of application, ranging from e-commerce recommender systems to route planning systems, and from automatic photo organisers to social network feed filtering systems. They then empirically evaluated both the clarity and the relevance of the guidelines in the various domains, highlighting potential issues (e.g., reporting a violation of the guideline “Make clear why the system did what it did” when a recommender system gives no explanation of why a certain product was suggested).
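
As an illustration of the kind of information a FactSheet [12] is meant to capture, the sketch below models such a document as a simple Python record with a basic completeness check. The field names and the `missing_sections` helper are hypothetical, purely illustrative choices; they do not reproduce IBM's actual FactSheet schema or tooling.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FactSheet:
    """Hypothetical record of the attributes a FactSheet might document [12].

    The field names are illustrative assumptions, not IBM's actual schema.
    """
    service_name: str
    intended_use: str                      # what the service is (and is not) meant for
    performance: Dict[str, float]          # e.g. accuracy or risk measures, timing information
    training_data_provenance: str          # where the training data came from
    tested_scenarios: List[str]            # scenarios the service was evaluated on
    known_limitations: List[str]           # untested scenarios, out-of-scope uses
    fairness_checks: List[str] = field(default_factory=list)
    security_and_robustness: List[str] = field(default_factory=list)
    ethical_concerns: List[str] = field(default_factory=list)

    def missing_sections(self) -> List[str]:
        """Return the names of sections left empty, as a simple completeness check."""
        return [name for name, value in vars(self).items() if value in ("", [], {})]


# Example usage: a provider fills in the sheet; a consumer checks what is still missing.
sheet = FactSheet(
    service_name="loan-risk-scorer",
    intended_use="Pre-screening of loan applications; not intended for final decisions.",
    performance={"auc": 0.87, "latency_ms": 40.0},
    training_data_provenance="Internal 2015-2020 loan records, anonymised.",
    tested_scenarios=["applicants aged 18-80", "incomes up to 200k EUR"],
    known_limitations=["not evaluated on self-employed applicants"],
)
print(sheet.missing_sections())  # -> ['fairness_checks', 'security_and_robustness', 'ethical_concerns']
```

A consumer-facing report would of course be richer than this, but even such a skeleton makes gaps in the account (here, missing fairness, security and ethics sections) immediately visible.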

Possible Taxonomy of terms#

Bovens [14] defines accountability as

a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences.

Wieringa [15] presented a thorough systematic literature review on algorithmic accountability, structured around the five points identified by Bovens in his definition, which we briefly summarize below (a minimal illustrative sketch follows the list):

  • Arguments on the actor: This involves a broader discussion of who is responsible for the harm that the system may inflict when it is working correctly, and who is responsible when it is working incorrectly. It involves different levels of actors (e.g. individuals, teams, departments, organisations) with different roles, and possibly also third parties. Due to these multiple levels and actors, situations known as “the problem of many hands” might occur, in which the collective can reasonably be held responsible for an outcome, while none of the individuals can reasonably be held responsible for that outcome [3]. In these situations, being “in the loop” is not enough; a more meaningful ability to control the design and operation process is needed, in other words meaningful human control.

  • The forum: To whom a given account is directed. The forum might take different shapes, such as political, legal, administrative, professional, or civil society fora. Examples include the General Data Protection Regulation (GDPR), the proposed EU AI Act, or even the guidelines for trustworthy AI proposed by the European Commission [1], as discussed in the previous section.

  • The relationship between the actor and the forum: This relationship comes in different forms and shapes, depending on the other four points. Nevertheless, it is usually mapped onto three phases: the information phase, the deliberation and discussion phase, and the final phase, in which consequences can be imposed on the actor by the forum.

  • The content and criteria of the account: Although ex ante analyses, such as impact assessments and simulations, can be helpful, they are limited, as they cannot foresee all possible behaviours and consequences. The importance of an accountability relationship should depend not only on such ex ante factors, but also on the extent to which a given system impacts society and individuals.

  • The consequences which may result from the account: In situations where there is a more “vertical” accountability relationship between actor and forum (e.g., accountability through legal standards), consequences are usually made more tangible. In more “horizontal” settings (e.g., self-regulation of organisations), consequences are based more on a moral imperative.
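
To make this vocabulary more concrete, the sketch below models an accountability relationship in Python, with an actor, a forum, the three phases, and the consequences that may be imposed. The class and field names are hypothetical, purely illustrative choices and are not part of [14] or [15].

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Phase(Enum):
    """The three phases of the actor-forum relationship."""
    INFORMATION = "information"        # the actor informs the forum about its conduct
    DELIBERATION = "deliberation"      # the forum poses questions and debates the account
    CONSEQUENCES = "consequences"      # the forum passes judgement and may impose consequences


@dataclass
class Actor:
    name: str
    level: str   # e.g. "individual", "team", "organisation" (cf. the problem of many hands)


@dataclass
class Forum:
    name: str
    kind: str    # e.g. "legal", "political", "administrative", "professional", "civil society"


@dataclass
class AccountabilityRelationship:
    actor: Actor
    forum: Forum
    account: str                                  # the explanation and justification given
    phase: Phase = Phase.INFORMATION
    consequences: List[str] = field(default_factory=list)

    def advance(self, next_phase: Phase, consequence: Optional[str] = None) -> None:
        """Move the relationship to the next phase, optionally recording a consequence."""
        self.phase = next_phase
        if consequence is not None:
            self.consequences.append(consequence)


# Example: an organisation deploying an AI system accounting to a data protection authority.
rel = AccountabilityRelationship(
    actor=Actor(name="AI deployment team", level="organisation"),
    forum=Forum(name="national data protection authority", kind="legal"),
    account="Impact assessment documentation and external audit reports.",
)
rel.advance(Phase.DELIBERATION)
rel.advance(Phase.CONSEQUENCES, consequence="order to improve redress procedures")
print(rel.phase, rel.consequences)
```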

Accountability gaps and their implications#

As AI systems, especially systems with learning abilities, are deployed “in the wild”, human control over and prediction of their behaviour become very difficult, if not impossible, leading to so-called “accountability gaps”. Santoni de Sio & Mecacci [16] divided such gaps into public accountability gaps and moral accountability gaps. Public accountability gaps relate to citizens not being able to obtain an explanation for decisions taken by public agencies, while moral accountability gaps refer to the reduction of human agents’ capacity to make sense of, and explain to each other, the behaviour of AI systems.

Accountability gaps point to the fact that giving an account requires knowledge and some ability to control [16]. Designers and developers of AI systems can only tackle this challenge by acknowledging that it is not a matter of fortuitous allocation of praise or blame, and that systems should be developed in a manner that allows stakeholders to be held accountable. Among other things, this relates to the social context in which these systems are deployed (and whether a given solution or problem formulation can be justified), how the questions and criteria for accountability are framed, and the level of understanding and control that a given actor might have. In this encyclopedia, we discuss these three interrelated concepts, respectively, in the entries Wicked problems, The Frame Problem, and Meaningful human control.

Bibliography#

1

High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. 2019. URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (visited on 2022-02-16).

2

Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh. Reviewable automated decision-making: a framework for accountable algorithmic systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 598–609. 2021.

3

Ibo R Van de Poel and Lambèr MM Royakkers. Ethics, technology, and engineering: An introduction. Wiley-Blackwell, 2011.

4

Ibo van de Poel. The relation between forward-looking and backward-looking responsibility. In Moral responsibility, pages 37–52. Springer, 2011.

5

Marijn Janssen and George Kuk. The challenges and limits of big data algorithms in technocratic governance. 2016.

6

European Union Agency for Fundamental Rights. Improving access to remedy in the area of business and human rights at the EU level. 2016. URL: https://fra.europa.eu/en/opinion/2017/business-human-rights (visited on 2022-05-10).

7

Council of Europe. Guide to human rights for internet users: effective remedies and redress. 2014. URL: http://www.coe.int/en/web/internet-users-rights/guide (visited on 2022-05-10).

8

High-Level Expert Group on Artificial Intelligence. The Assessment List for Trustworthy Artificial Intelligence (ALTAI). 2020. URL: https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment (visited on 2022-05-10).

9

European Union Agency for Fundamental Rights. Getting the future right - Artificial Intelligence and Fundamental Rights. 2020. URL: https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights (visited on 2022-05-10).

10

Information Commissioner's Office (ICO). Accountability framework. URL: https://ico.org.uk/for-organisations/accountability-framework/ (visited on 2022-05-25).

11

Information Commissioner's Office (ICO). Accountability and governance. URL: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/ (visited on 2022-05-25).

12

Matthew Arnold, Rachel K. E. Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Darrell Reimer, Alexandra Olteanu, David Piorkowski, Jason Tsay, and Kush R. Varshney. FactSheets: Increasing Trust in AI Services through Supplier's Declarations of Conformity. 2019. arXiv:1808.07261v2.

13

Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI). 2019.

14

Mark Bovens. Analysing and assessing accountability: a conceptual framework. European Law Journal, 13(4):447–468, 2007.

15

Maranke Wieringa. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 1–18. 2020.

16

Filippo Santoni de Sio and Giulio Mecacci. Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philosophy & Technology, pages 1–28, 2021.

This entry was written by Luciano C Siebert and Francesca Pratesi.