Emotional Impact#

In Brief#

Due to the rapid development of AI systems, their behavior has, in some respects, become akin to that of humans. Especially since the introduction of large language models such as ChatGPT, it has become increasingly difficult to differentiate between human- and AI-generated language. When coupled with human-like design, this cognitive ability has been found to foster emotional attachment in users. While emotional impact cannot be prevented entirely, it is important to assess the extent to which an AI system encourages human attachment and to ensure clear signaling to the end-user that they are interacting with an AI.

In More Detail#

With the rapid development of AI in recent years, especially in the realm of human-language generation, it has become increasingly difficult to distinguish between human- and AI-generated text. Furthermore, as AI becomes increasingly integrated into daily life in the form of chatbots and virtual assistants (such as Siri or Alexa), humans are regularly confronted with artificial agents mimicking human interaction. For these reasons, it is important to consider the potential emotional ramifications of integrating AI into everyday life. Special attention should be paid to potentially vulnerable populations who may not understand the capabilities of AI. The following discusses previous research on emotional attachment to AI and introduces the European Union guidelines requiring clear identification of AI systems to avoid emotional attachment.

Current State of Research#

Past research has shown that high-quality human-like design (e.g. the different voices available for Siri) coupled with high cognitive intelligence can cause users to assign human-like traits to a system and form emotional bonds with it. Depending on the anthropomorphism tendencies (AT) of the user, systems with good design (for users with high AT) or good cognitive intelligence (for users with low AT) are perceived as especially human-like. While the formation of attachments may be favorable in certain areas (e.g. when integrating care robots into hospitals and social-care facilities), it also carries risks, as it may make users more susceptible to potentially harmful suggestions from the system. One prominent example is the tragic suicide of a young man whose attachment to an AI agent resulted in severe anxiety and depression, which drove him to take his own life [1]. Based on such cases, it is important to clearly communicate the nature of AI agents to users as well as to clarify the shortcomings of these agents (see also Self-identification of AI).

Guidelines Relating to Emotional Attachments#

Emotional attachments are most likely to form during direct contact between the AI and the end-user. This scenario occurs most frequently when users interact with chatbots such as ChatGPT. These types of systems are classified as posing ‘limited risk’ by the European Union. As such, legislation dictates that such systems must be clearly identifiable as AI in order to prevent emotional attachment or misinformation. Prior to piloting such an AI system, the following two questions should be assessed in the respective field:

  1. Did you assess whether the AI system encourages humans to develop attachment and empathy towards the system?

  2. Did you ensure that the AI system clearly signals that its social interaction is simulated and that it has no capacities of “understanding” and “feeling”?

While most AI systems that may encourage emotional attachment are considered limited risk, it is important to note that this is not necessarily always the case. Developers should always assess the risk of the system they develop prior to implementation and follow the relevant legislation to ensure user safety. While not all attachment can be prevented, partly due to the high quality and rapid development of AI, measures must be taken to prevent unhealthy and dangerous attachment.

Bibliography#

[1] Lauren Walker. Belgian man dies by suicide following exchanges with chatbot. The Brussels Times, 2023. URL: https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt.

This entry was written by Nicola Rossberg and Andrea Visentin.