Social Impact of AI Systems

In Brief

Artificial intelligence (AI) is rapidly changing the way we work and the way we live, and its societal impact is on many people’s minds. To manage that impact, the European Union has proposed a risk-based framework that classifies AI systems into four tiers, from minimal to unacceptable risk.

More in Detail

AI development has accelerated in recent years, leading to the onset of what many experts now consider an AI revolution. Like previous periods of rapid change such as the industrial revolution, the AI revolution is expected to affect society in myriad ways, reshaping established practice across many domains. While some of its consequences are considered well understood, such as the impact on repetitive, easily automated tasks, others remain unknown. This is especially important because experts consider the AI revolution to still be in its infancy and expect it to continue developing rapidly over the next 100 years [1].

To gauge the potential harm of a new AI system, the European Union has proposed a risk-based framework for assessing the safety of AI models, ensuring that potential negative consequences are identified and addressed before a system is deployed. The framework is designed to protect society from the potential harms of AI, and risk levels are assigned according to the perceived harm to society. Each risk level and the AI systems associated with it are discussed in greater detail below; in addition, we recommend a self-assessment tool for gauging the risk level of a given AI system.
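As a rough illustration of the tiered structure, the four risk levels can be modelled as an ordered scale, each tier carrying different obligations. The Python sketch below is purely illustrative: the tier names follow the EU proposal, but the `RiskTier` enum, the `OBLIGATIONS` summaries, and the `allowed_on_eu_market` helper are invented simplifications, not legal definitions.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Ordered risk tiers from the EU's proposed risk-based framework."""
    MINIMAL = 1       # e.g. spam filters, video games -- no extra regulation
    LIMITED = 2       # e.g. chatbots, generative models -- transparency duties
    HIGH = 3          # e.g. self-driving cars, medical AI -- strict oversight
    UNACCEPTABLE = 4  # e.g. social scoring -- prohibited outright

# Simplified summary of the obligations attached to each tier
# (illustrative paraphrase, not a legal reference).
OBLIGATIONS = {
    RiskTier.MINIMAL: "none -- may operate freely in the EU market",
    RiskTier.LIMITED: "users must be told they are interacting with an AI",
    RiskTier.HIGH: "risk assessment, bias safeguards, human oversight, auditability",
    RiskTier.UNACCEPTABLE: "deployment is banned",
}

def allowed_on_eu_market(tier: RiskTier) -> bool:
    """Only tiers below 'unacceptable' may be deployed at all."""
    return tier < RiskTier.UNACCEPTABLE
```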

Spam Filters and Video Games: Minimal Risk AI

At the lowest risk category, “Minimal Risk”, the European Union places systems that pose virtually no harm to the privacy, safety, or rights of users. These systems are subject to no regulation, as they are considered harmless to end users, and can therefore operate freely in the European market. The category includes spam filters, such as those built into most email providers, and video games, which may use AI for procedural content generation, for example adjusting ambient conditions in response to player actions. However, some expert groups expect stricter regulation in these areas in the future, as AI holds the potential to generate highly realistic harmful content that could endanger the safety and privacy of citizens.
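To make this category concrete, here is a minimal sketch of the kind of system meant: a bag-of-words spam classifier built with scikit-learn’s naive Bayes. The tiny training corpus and the `filter_model` name are invented for illustration; a real filter would be trained on a large labelled corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled corpus (invented for illustration): 1 = spam, 0 = ham.
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting moved to 3pm", "please review the attached report",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a naive Bayes classifier -- a classic
# spam-filter design, and a textbook example of minimal-risk AI.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(emails, labels)

print(filter_model.predict(["free prize offer"]))        # likely [1] (spam)
print(filter_model.predict(["report for the meeting"]))  # likely [0] (ham)
```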

Generative Models and Chatbots: Limited Risk AI

Legislation becomes relevant at the second-lowest risk tier, “Limited Risk”, where AI systems are considered to pose little threat to user safety, rights, and privacy but nonetheless require user awareness. Developers are obligated to ensure that users know they are interacting with an AI, both to prevent misunderstandings and to discourage potentially unhealthy attachments (see also AI human interaction and Self-identification of AI). This tier includes generative models such as DALL-E, which use AI to produce content, as well as chatbots such as ChatGPT, which must therefore be clearly identifiable as AI by end users.
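The transparency obligation at this tier can be met by building an unavoidable disclosure into the interaction itself. The sketch below assumes a hypothetical chatbot backend `reply_fn` and wraps it so that every conversation opens with a clear AI self-identification; all names here are invented for illustration.

```python
from typing import Callable

AI_DISCLOSURE = (
    "Note: you are chatting with an AI system, not a human. "
    "(Disclosure required for limited-risk AI under the EU proposal.)"
)

def with_ai_disclosure(reply_fn: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a chatbot reply function so the first reply self-identifies as AI."""
    disclosed = False

    def wrapped(user_message: str) -> str:
        nonlocal disclosed
        reply = reply_fn(user_message)
        if not disclosed:
            disclosed = True
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

    return wrapped

# Hypothetical backend reply function, stubbed out for illustration.
chatbot = with_ai_disclosure(lambda msg: f"Echo: {msg}")
print(chatbot("Hello"))   # first reply carries the disclosure
print(chatbot("Thanks"))  # later replies do not
```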

Self-Driving Cars and Medicine: High Risk AI

At the highest acceptable tier of risk, “High Risk”, the European Union designates systems that may, under proper legislation, be beneficial to end users but pose considerable potential harm to the privacy, security, and rights of society. Strict legislation therefore applies to these systems, requiring thorough risk assessments, safeguards to prevent bias and protect privacy, and human oversight to prevent misuse. The AI must also be transparent, so that it can be audited and the causes of potentially harmful decisions can be traced. Self-driving cars fall into this tier, as a malfunction in the AI could cause considerable harm to society, so rigorous testing is required before such a product can be deployed. Medical AI systems are likewise considered high risk, as they could lead to discrimination or clinical oversights if not properly tested and operated alongside human supervision.
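The auditability and human-oversight obligations translate naturally into engineering patterns. The sketch below, with invented names and an illustrative confidence threshold, logs every model decision for later audit and escalates low-confidence cases to a human reviewer instead of acting autonomously.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

CONFIDENCE_THRESHOLD = 0.90  # illustrative value, not a regulatory figure

def decide_with_oversight(case_id: str, prediction: str, confidence: float) -> str:
    """Record every decision and escalate uncertain ones to a human."""
    record = {
        "time": time.time(),
        "case_id": case_id,
        "prediction": prediction,
        "confidence": confidence,
    }
    # Auditability: persist enough context to reconstruct the decision later.
    audit_log.info(json.dumps(record))

    # Human oversight: the model never acts alone on uncertain cases.
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalated_to_human_review"
    return prediction

print(decide_with_oversight("patient-42", "benign", 0.97))  # acted on
print(decide_with_oversight("patient-43", "benign", 0.61))  # escalated
```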

Social Scoring and Exploitation: Unacceptable Risk AI

At the highest tier, “Unacceptable Risk”, sit AI systems that pose unjustifiable risk to society. These systems are outlawed by the European Union, as they threaten the rights, privacy, and security of citizens without yielding a corresponding positive outcome. Social scoring systems fall into this category (see also AI for social scoring), as do systems that exploit vulnerable groups, such as AI-generated targeted advertisements for gambling.

ALTAI: Self-Assessment of AI Risk

When designing a new AI system, developers must assess the level of risk at which they are operating and implement the corresponding safety measures designated by the European Union. To ease this process, a self-assessment questionnaire was designed that allows developers to determine the risk level of their AI. The questionnaire can be found here: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment.
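As a hedged illustration of how such a questionnaire might map answers to a risk tier, the toy function below asks a few yes/no questions loosely inspired by the tier definitions above. The questions and the decision rule are invented simplifications, not the actual ALTAI logic; consult the official list for a real assessment.

```python
def assess_risk_tier(social_scoring: bool, exploits_vulnerable: bool,
                     safety_critical: bool, interacts_with_users: bool) -> str:
    """Toy decision rule mapping yes/no answers to an EU-style risk tier.

    Loosely inspired by the tier definitions above; not the real ALTAI logic.
    """
    if social_scoring or exploits_vulnerable:
        return "unacceptable risk (prohibited)"
    if safety_critical:
        return "high risk (strict obligations apply)"
    if interacts_with_users:
        return "limited risk (transparency obligations apply)"
    return "minimal risk (no specific obligations)"

# A chatbot: not safety-critical, but it interacts with users directly.
print(assess_risk_tier(social_scoring=False, exploits_vulnerable=False,
                       safety_critical=False, interacts_with_users=True))
# -> limited risk (transparency obligations apply)
```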

Bibliography

[1] Spyros Makridakis. The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures, 90:46–60, 2017.

This entry was written by Nicola Rossberg and Andrea Visentin.