
The Ethical and Legal Framework

In these pages, we focus only on the European framework, relying mostly on two main sources, described below and in the linked pages.

The first document, at least chronologically, that we refer to is the Ethics Guidelines for Trustworthy AI [1], which, as the name suggests, is not a law or a legal obligation. Nevertheless, it is commonly recognized as the most relevant document in the field of Trustworthy AI. As we already mentioned, it provides a definition of Trustworthy AI, the foundations of Trustworthy AI, the seven key requirements that AI systems should implement and meet throughout their entire life cycle, and a concrete assessment list to operationalize those requirements.

The second fundamental source is the world's first comprehensive law on Artificial Intelligence (AI): the EU AI Act [2]. The Act classifies AI systems using a risk-based approach: it identifies four levels of risk and lists the obligations that each category of AI system must meet to comply with the law.
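As a purely illustrative sketch (not a reproduction of the Act's text), the risk-based approach can be thought of as a mapping from risk level to the kind of obligation it triggers. The four level names below follow the common unacceptable/high/limited/minimal reading of the Act; the one-line obligation summaries are our own paraphrase.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited AI practices
    HIGH = "high"                  # strict requirements before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Paraphrased obligations per level; the authoritative wording is in the Act itself [2].
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "Banned outright (e.g. social scoring by public authorities).",
    RiskLevel.HIGH: "Conformity assessment, risk management, human oversight, logging.",
    RiskLevel.LIMITED: "Disclosure duties, e.g. telling users they are interacting with an AI.",
    RiskLevel.MINIMAL: "No specific obligations; voluntary codes of conduct may apply.",
}

for level in RiskLevel:
    print(f"{level.value:>12}: {OBLIGATIONS[level]}")
```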

[1] High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. URL: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (visited on 2024-04-23).

[2] Artificial Intelligence Act, European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), P9_TA(2024)0138. 2024. URL: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf (visited on 2024-04-23).

This entry was written by Francesca Pratesi and Umberto Straccia.

