Self-identification of AI#

In Brief#

AI systems that are presented as highly intelligent and are well designed tend to cause humans to anthropomorphize them, i.e. to assign human-like traits to the system, which can lead to emotional bonding with it [1]. This is dangerous, especially for vulnerable groups such as older people or those less familiar with the mechanisms behind the AI. In addition to the threat of emotional attachment, AI systems have been found to frequently produce false or partially false information and present it as fact, a phenomenon referred to as AI hallucination. It is therefore important that users can clearly identify an artificial intelligence as such, so that they understand the system’s limitations and are protected against emotional attachment and misinformation. The remainder of this chapter first explains which AI systems are required to be clearly identifiable and then elaborates on how these criteria should be met.

More in Details#

The EU AI Act provides developers with guidelines for the identification of AI. Systems classified as posing at least limited risk to the user must be clearly identifiable as AI. This aims to prevent both misunderstandings and accidental misinformation due to AI hallucinations. Any system that interacts with humans is considered limited risk; recommender systems, generative models, and emotion-recognition systems also fall into this category and must consequently be identifiable as AI. The ALTAI self-assessment checklist can be used to assess the risk level posed by an algorithm and the precautions that should consequently be implemented.
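As a rough illustration only, this classification rule can be encoded in a few lines of Python. The trait names and the `is_at_least_limited_risk` helper below are hypothetical constructs of our own, not terminology from the AI Act or the ALTAI checklist; the sketch merely captures the idea that any of the listed characteristics places a system in the limited-risk category and triggers the identification obligation.

```python
from enum import Enum, auto


class SystemTrait(Enum):
    """Hypothetical traits relevant to the limited-risk test."""
    INTERACTS_WITH_HUMANS = auto()
    RECOMMENDER = auto()
    GENERATIVE = auto()
    EMOTION_RECOGNITION = auto()


def is_at_least_limited_risk(traits: set[SystemTrait]) -> bool:
    """Any one of the listed traits triggers the obligation
    to make the system identifiable as AI."""
    return bool(traits)


# Example: a customer-support chatbot is human-facing and generative.
chatbot = {SystemTrait.INTERACTS_WITH_HUMANS, SystemTrait.GENERATIVE}
if is_at_least_limited_risk(chatbot):
    print("Limited risk or higher: the system must identify itself as AI.")
```

In practice this assessment is a legal and organizational judgment rather than a boolean check; the ALTAI checklist walks through it question by question.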

Making the system identifiable#

If a system is considered to pose at least limited risk, it must be clearly identifiable as AI by the end-user. To meet this criterion, developers should consult the following points and ensure that the system complies with all of them (a minimal code sketch follows the list):

  1. The AI system must be identifiable as an AI system.

    • This should not be hidden behind jargon such as ‘large language model’ but communicated in simple language.

  2. When needed to comply with fundamental rights, the option to decide against AI interaction in favor of human interaction should be provided.

  3. The AI system’s capabilities and limitations should be communicated to AI practitioners or end-users in a manner appropriate to the use case at hand.

    • This may include the AI system’s level of accuracy and its limitations.
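As a minimal sketch of how a conversational system might satisfy all three points at the start of a session, the hypothetical `ai_disclosure` function below builds an opening message that identifies the assistant as AI in plain language, offers a human alternative, and states its limitations. The wording, the function name, and the 'human' handoff keyword are illustrative assumptions, not text prescribed by the AI Act.

```python
def ai_disclosure(accuracy_note: str) -> str:
    """Compose a hypothetical opening message covering the three
    checklist points for a limited-risk conversational system."""
    return (
        "Hi! You are chatting with an automated assistant, not a human. "  # point 1: plain-language identification
        "Reply 'human' at any time to be connected to a person instead. "  # point 2: human alternative
        f"Please note: {accuracy_note}"                                    # point 3: capabilities and limitations
    )


# Example usage with a hypothetical limitation statement.
print(ai_disclosure(
    "I can make mistakes and may state incorrect information as fact, "
    "so please verify important answers."
))
```

Surfacing the disclosure in the very first message, rather than in a linked policy page, is what keeps it out of jargon and in front of the end-user.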

Key Words#

  • AI Hallucinations: AI hallucinations refer to instances where artificial intelligence systems, particularly generative models like GPT-4, produce incorrect, misleading, or nonsensical information that is presented as factual.

  • Recommender Systems: Recommender systems are a type of information filtering system that seeks to predict the preferences or interests of users and make personalized suggestions accordingly. Commonly used in online platforms such as Netflix, Amazon, and Spotify, these systems analyze user behavior and preferences to recommend movies, products, music, or other content.

  • Generative Models: Generative models are a class of machine learning models that can generate new data instances similar to a given set of training data. These models learn the underlying patterns and distribution of the training data and can create new content, such as text, images, or audio, that mimics the original data.

  • Emotion-Recognition Systems: Emotion-recognition systems are AI systems designed to identify and interpret human emotions from various data sources, such as facial expressions, voice tones, text, or physiological signals.

  • Large Language Model: A large language model is a type of artificial intelligence model that has been trained on vast amounts of text data to understand and generate human language. These models, like GPT-4, use deep learning techniques to learn the complexities of language, including grammar, context, and nuance, enabling them to perform tasks such as translation, summarization, and conversation generation with a high degree of fluency and coherence.

Bibliography#

[1] Joohee Kim and Il Im. Anthropomorphic response: understanding interactions between humans and artificial intelligence agents. Computers in Human Behavior, 139:107512, 2023.

This entry was written by Nicola Rossberg and Andrea Visentin.