The TAILOR Handbook of Trustworthy AI

An encyclopedia of the major scientific and technical terms related to Trustworthy Artificial Intelligence

In recent years, several definitions of Trustworthy AI have been proposed by the scientific community. The definitions vary in the dimensions they emphasize, but there is a general consensus in Europe to adopt as the official definition the one given by the High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI.

Trustworthy AI

According to the Ethics Guidelines for Trustworthy AI, Trustworthy AI has three components, which should be met throughout the system's entire life cycle. Indeed, it should be:
1. lawful, complying with all applicable laws and regulations;
2. ethical, ensuring adherence to ethical principles and values;
3. robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.
Each component is necessary in itself but not sufficient for the achievement of Trustworthy AI.
Below we provide some additional details on what is meant by each of these three components.

Lawful AI

AI systems do not operate in a lawless world. A number of legally binding rules at European, national, and international levels already apply or are relevant to the development, deployment, and use of AI systems today. Legal sources include, but are not limited to: EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law, and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as, for instance, the Medical Device Regulation in the healthcare sector). The law provides both positive and negative obligations, which means that it should be interpreted not only with reference to what cannot be done, but also with reference to what should be done and what may be done. The law not only prohibits certain actions but also enables others. In this regard, it can be noted that the EU Charter contains articles on the "freedom to conduct a business" and the "freedom of the arts and sciences", alongside articles addressing areas that we are more familiar with when looking to ensure AI's trustworthiness, such as data protection and non-discrimination.

Ethical AI

Achieving Trustworthy AI requires not only compliance with the law, which is but one of its three components. Laws are not always up to speed with technological developments, can at times be out of step with ethical norms or may simply not be well suited to addressing certain issues. For AI systems to be trustworthy, they should hence also be ethical, ensuring alignment with ethical norms.

Robust AI

Even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. It is therefore important to ensure that AI systems are robust. This is needed both from a technical perspective (ensuring the system's technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in due consideration of the context and environment in which the system operates).

Executive summary

The main goal of the Handbook of Trustworthy AI (HTAI) is to provide non-experts, researchers, and students with an overview of the problems related to the development of ethical and trustworthy AI systems. In particular, the HTAI aims at:
- providing the ethical and legal background needed to better understand the European framework
- providing an overview of the main dimensions of trustworthiness, starting with an accessible explanation of each dimension, then presenting a characterization of its problems (a brief summary, an explanation of why the dimension matters, and a taxonomy and guidelines where these are available and consolidated), summarizing the major challenges and solutions in the field, and concluding with the latest research developments.
Each entry is accompanied by a bibliography, allowing readers interested in a specific topic to explore it in more depth.


Each entry lists the authors who directly contributed to writing it (some of whom are external to the TAILOR consortium), while the complete list of contributors can be found here.
This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215.
You can find more information about the TAILOR project here.

Start the navigation of the TAILOR Handbook of Trustworthy AI!