Trustworthy AI#
In recent years, several definitions of Trustworthy AI have been given by the scientific community. Each definition varies in the dimensions it emphasizes, but there is general consensus in Europe on using as the official definition the one given by the High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI [1].
According to this document, Trustworthy AI has three components, which should be met throughout the system's entire life cycle. It should be:
lawful, complying with all applicable laws and regulations;
ethical, ensuring adherence to ethical principles and values;
robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.
Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI.
Below, we provide some additional details on what is intended by each of these three components.
Lawful AI#
AI systems do not operate in a lawless world. A number of legally binding rules at European, national, and international levels already apply or are relevant to the development, deployment, and use of AI systems today. Legal sources include, but are not limited to: EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights); EU secondary law (such as the General Data Protection Regulation (GDPR) [3], the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law, and the Safety and Health at Work Directives); the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights); and numerous EU Member State laws. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (for instance, the Medical Device Regulation in the healthcare sector).
The law provides both positive and negative obligations, which means that it should be interpreted not only with reference to what cannot be done, but also with reference to what should be done and what may be done. The law not only prohibits certain actions but also enables others. In this regard, it can be noted that the EU Charter contains articles on the 'freedom to conduct a business' and the 'freedom of the arts and sciences', alongside articles addressing areas that we are more familiar with when looking to ensure AI's trustworthiness, such as data protection and non-discrimination.
Ethical AI#
Achieving Trustworthy AI requires more than compliance with the law, which is only one of its three components. Laws are not always up to speed with technological developments, can at times be out of step with ethical norms, or may simply not be well suited to addressing certain issues. For AI systems to be trustworthy, they should therefore also be ethical, ensuring alignment with ethical norms. Regarding ethics, some philosophical currents (such as Floridi's work [4]) distinguish between hard and soft ethics.
Hard ethics is what we usually have in mind when discussing values, rights, duties, and responsibilities (or, more broadly, what is morally right or wrong, and what ought or ought not to be done) in the course of formulating new regulations or challenging existing ones. In short, hard ethics is what makes or shapes the law. For example, hard ethics helped to dismantle apartheid in South Africa and supported the approval of legislation in Iceland that requires public and private businesses to prove that they offer equal pay to employees, irrespective of their gender (the gender pay gap remains a scandal in most countries).
Soft ethics covers the same normative ground as hard ethics, but it does so by considering what ought and ought not to be done over and above the existing regulation, not against it, or despite its scope, or to change it, or to bypass it (e.g., in terms of self-regulation). In other words, soft ethics is post-compliance ethics: in this case, 'ought implies may'.
Robust AI#
Even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. It is therefore important to ensure that AI systems are robust. This is needed both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in due consideration of the context and environment in which the system operates).
The European Legal Framework#
In these pages, we focus only on the European framework, relying mostly on two main sources, which are described below and in the linked pages.
The first document we refer to, at least chronologically, is the Ethics Guidelines for Trustworthy AI [1], which, as the name suggests, is not a law or a legal obligation. Nevertheless, it is commonly recognized as the most relevant document in the field of Trustworthy AI. As already mentioned, it provides a definition of Trustworthy AI, the foundations of Trustworthy AI, the seven key requirements that AI systems should implement and meet throughout their entire life cycle, and a concrete assessment list to operationalize those requirements.
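To make this concrete, the seven requirements can be tracked as a simple checklist. The sketch below is a minimal, hypothetical Python example, assuming nothing beyond the list of requirements itself; the class and method names are our own and are not part of the Guidelines or of the official assessment list.

```python
from dataclasses import dataclass, field

# The seven key requirements listed in the Ethics Guidelines for Trustworthy AI.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class TrustworthinessChecklist:
    """Hypothetical checklist tracking which requirements have been assessed."""
    system_name: str
    assessed: dict = field(default_factory=lambda: {r: False for r in REQUIREMENTS})

    def mark_assessed(self, requirement: str) -> None:
        """Record that a requirement has been assessed for this system."""
        if requirement not in self.assessed:
            raise ValueError(f"Unknown requirement: {requirement!r}")
        self.assessed[requirement] = True

    def is_complete(self) -> bool:
        # Each requirement is necessary but not sufficient: all must be covered.
        return all(self.assessed.values())

checklist = TrustworthinessChecklist("my-ai-system")
checklist.mark_assessed("Transparency")
print(checklist.is_complete())  # False: six requirements remain unassessed
```

The `is_complete` check mirrors the point made above: each requirement is necessary, so an assessment covering only some of them is not enough.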
The other fundamental source is the world's first comprehensive law on Artificial Intelligence (AI): the EU AI Act [2]. The text classifies AI systems using a risk-based approach: four levels of risk are identified (unacceptable, high, limited, and minimal risk), and different obligations are attached to each category so that AI systems can comply with the law.
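As a rough illustration of the risk-based approach, the following Python sketch encodes the four risk levels and a simplified mapping from example systems to levels. The example systems and obligation summaries are illustrative assumptions only; the actual classification depends on the legal definitions and annexes of the AI Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    # The four risk levels of the AI Act's risk-based approach.
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before and after market entry
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no additional mandatory obligations

# Hypothetical, simplified mapping of example systems to risk levels;
# the real classification follows the Act's legal tests and annexes.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations_for(level: RiskLevel) -> str:
    """Return a (simplified) summary of the obligations attached to a risk level."""
    return {
        RiskLevel.UNACCEPTABLE: "banned outright",
        RiskLevel.HIGH: "risk management, conformity assessment, human oversight",
        RiskLevel.LIMITED: "transparency, e.g. disclosing that the user interacts with an AI",
        RiskLevel.MINIMAL: "no mandatory obligations",
    }[level]

for system, level in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {level.value} -> {obligations_for(level)}")
```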
The TAILOR Handbook of Trustworthy AI#
In order to provide a better understanding of Trustworthy AI, within the TAILOR project we have been developing a Handbook of Trustworthy AI. There, you can find an overview of all the ethical dimensions highlighted in the Ethics Guidelines for Trustworthy AI [1] and a deeper analysis of the concepts that relate and contribute to each dimension.
Bibliography#
- 1
High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. 2019. URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (visited on 2024-04-23).
- 2
Artificial Intelligence Act, European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), P9_TA(2024)0138. 2024. URL: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf (visited on 2024-04-23).
- 3
European Parliament and Council. Regulation (EU) 2016/679 (General Data Protection Regulation). 2016. OJ L 119, 4.5.2016, p. 1–88.
- 4
Luciano Floridi. Soft ethics and the governance of the digital. Philosophy and Technology, 2018. doi:10.1007/s13347-018-0303-9.
This entry was adapted from the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence by Francesca Pratesi and Umberto Straccia.