Kinds of Explanations

In brief

The explanations returned by an AI system depend on various factors, such as the task at hand or the available data; generally speaking, each kind of explanation is better suited to a specific context.

More in detail

The growing body of research on XAI is bringing to light a wide range of explanations and explanation methods for “opening” black box models. The explanations returned depend on various factors, such as:

  • the type of task they are needed for,

  • the kind of data the AI system operates on,

  • who the final user of the explanation is,

  • whether they explain the whole behavior of the AI system (global explanations) or reveal the reasons behind the decision only for a particular instance (local explanations), as illustrated by the sketch after this list,

  • the business perspective, i.e., the implications for companies of having explainable and interpretable systems and models, in terms of business strategies and secrecy,

  • the fact that, on a decentralized node, an explanation could require information that is not directly available on site.
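
To make the global/local distinction concrete, the following is a minimal sketch, assuming scikit-learn is available; the random forest, the iris data, and the mean-substitution scoring of single features are illustrative choices for this entry, not a specific explanation method from the literature.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: a single feature ranking that describes
# the behavior of the model as a whole.
for name, imp in zip(data.feature_names, model.feature_importances_):
    print(f"global {name}: {imp:.3f}")

# Local explanation: the relevance of each feature for ONE instance,
# here estimated (illustratively) by replacing the feature's value
# with the dataset mean and measuring the drop in the probability
# of the originally predicted class.
x = X[0].copy()
base = model.predict_proba(x.reshape(1, -1))[0]
cls = int(base.argmax())
for i, name in enumerate(data.feature_names):
    x_pert = x.copy()
    x_pert[i] = X[:, i].mean()
    drop = base[cls] - model.predict_proba(x_pert.reshape(1, -1))[0][cls]
    print(f"local  {name}: {drop:+.3f}")
```

The global ranking summarizes the model over the whole dataset, while the local scores hold only for the chosen instance: a feature that dominates globally may turn out to be irrelevant for a particular decision, and vice versa.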

In this part of the Encyclopedia, we review a subset of the most widely used types of explanations and show how some state-of-the-art explanation methods can return them. The interested reader can refer to [1], [2] for a complete review of the XAI literature.

Bibliography

[1] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 2018.

[2] A. Adadi and M. Berrada. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138–52160, 2018.

This entry was adapted from Guidotti, Monreale, Pedreschi, Giannotti. Principles of Explainable Artificial Intelligence. Springer International Publishing (2021), by Francesca Pratesi and Riccardo Guidotti.