# Explainable AI

## In Brief

**Explainable AI** (often shortened to **XAI**) is one of the ethical dimensions studied in the TAILOR project. The origin of XAI dates back to the entry into force of the General Data Protection Regulation (GDPR). The GDPR {cite}`gdpr`, in its Recital 71, mentions the right to explanation as a suitable safeguard to ensure fair and transparent processing in respect of data subjects. It is defined as the right "to obtain an explanation of the decision reached after profiling".

## More in detail

According to the High-Level Expert Group on Artificial Intelligence - Ethics Guidelines for Trustworthy AI, the explainability topic is included in the broader [transparency](./Transparency.md) dimension. Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g., the application areas of a system). This aspect is also analyzed in the {doc}`blackbox_transparent` entry. The goal of this task is to explain how the decision system returned certain outcomes (the so-called *Black Box Explanation* problem). Moreover, the Black Box Explanation problem can be further divided into *Model Explanation*, when the explanation involves the whole logic of the obscure classifier; *Outcome Explanation*, when the target is to understand the reasons for the decision on a given object; and *Model Inspection*, when the target is to understand how the black box behaves internally as the input changes.

```{figure} ./xai_taxonomy.png
---
name: T3.1taxonomy31
width: 600px
align: center
---
A possible taxonomy of solutions to the Open the Black-Box problem {cite}`guidotti_survey`.
```

On a different dimension, a lot of effort has been put into defining the possible techniques (e.g., we can discriminate between {doc}`./model_specific`), what the explained outcome is (i.e., {doc}`./global_local`), the requirements for providing good explanations (see guidelines), how to evaluate explanations, and how to understand the {doc}`./feature_importance`. Then, it is important to note that a variety of different kinds of explanations can be provided, such as {doc}`./single_tree`, {doc}`./feature_importance`, {doc}`./saliency_maps`, [Factual and Counterfactual](./counterfactual.md), [Exemplars and Counter-Exemplars](./prototypes.md), and [Rule Lists and Rule Sets](./rules.md). A minimal code sketch of one of these approaches (a global surrogate tree) is given at the end of this entry.

## Bibliography

```{bibliography}
:style: unsrt
:filter: docname in docnames
```

> This entry was written by Francesca Pratesi.
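
As a concrete illustration of one of the explanation types listed above, the following is a minimal sketch of a *global surrogate tree* (see {doc}`./single_tree`): an interpretable decision tree trained to mimic the predictions of an obscure classifier, addressing the Model Explanation problem. The choice of scikit-learn, the breast-cancer dataset, and a random forest as the black box are illustrative assumptions, not part of this entry's sources.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Load a small tabular dataset as a stand-in for any classification task.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The "black box": any accurate but opaque model (here, a random forest).
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model Explanation via a global surrogate: train a shallow, interpretable
# decision tree on the black box's *predictions* rather than on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black box on the same data.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Fidelity of the surrogate to the black box: {fidelity:.3f}")

# The tree itself serves as a human-readable explanation of the learned logic.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score indicates how faithfully the interpretable surrogate reproduces the black box's behaviour; a low fidelity means the tree should not be trusted as an explanation of the underlying model.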