# AI for social scoring

## In Brief

Social scoring is the concept that the daily actions of individuals are monitored and scored for their benefit to society as a whole. Based on this scoring, each individual is assigned a value that determines their access to education, healthcare and other public goods. AI lends itself to social scoring because it excels at facial recognition and at identifying movement and behavior. However, this poses several technical and ethical problems. The obvious ethical issue is that the goodness of an action is subjective, and that access to goods and services necessary for survival should not depend on a person's perceived ability to contribute to society. The main technical issue is that the quality of an AI system's evaluations hinges on the quality of its training data: biased training data leads to biased evaluations. To date, comprehensive social scoring has only been proposed by the Chinese government, but other countries, including the United Kingdom, already use automatic facial recognition on surveillance footage to search for fugitives.

## In More Detail

Under EU legislation, AI systems for social scoring are labeled as posing 'clear harm' and are banned within the European Union. Social scoring, as employed by the Chinese government, is a wide-ranging method of evaluating and assigning value to citizens' daily actions. Through this, citizens are assigned a social score that determines their access to vital goods such as healthcare. This poses several obvious ethical problems, and the practice has been outlawed as part of the EU AI Act; social scoring of this degree will therefore not be discussed further in this section. To a lesser degree, however, social scoring already exists in daily Western life. Apps like Uber use ratings to evaluate customer and driver behavior and to create blacklists of users. Insurance companies in New York are allowed to adjust prices based on information about their users gathered from social media. While these systems operate on a smaller scale and are at times beneficial, for example by protecting Uber drivers from customers who have behaved poorly in the past, social scoring becomes dangerous when automated through AI.
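
To make the small-scale case concrete, the sketch below shows how a rating-based blacklist of the kind described above could work in principle. It is a minimal illustration: the threshold, the minimum number of ratings and the function name are assumptions made for this example, not the mechanism of any real platform.

```python
# Illustrative sketch of a small-scale, rating-based blacklist.
# The policy parameters below are assumptions for demonstration only;
# they do not reflect any real platform's rules.

BLACKLIST_THRESHOLD = 3.0  # assumed: mean rating below this triggers a ban
MIN_RATINGS = 10           # assumed: too few ratings means no judgment yet


def is_blacklisted(ratings: list[float]) -> bool:
    """Return True if a user's average rating falls below the threshold."""
    if len(ratings) < MIN_RATINGS:
        return False  # not enough evidence to penalize the user
    return sum(ratings) / len(ratings) < BLACKLIST_THRESHOLD


print(is_blacklisted([5, 4, 5, 5, 4, 5, 5, 4, 5, 5]))  # False
print(is_blacklisted([2, 3, 2, 1, 3, 2, 2, 3, 2, 1]))  # True
```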

## Problems with Social Scoring and AI

AI systems excel at pattern recognition and identification, and large multimodal models in particular allow actions and behavior to be identified from video footage. However, AI systems only perform as well as the data they are trained on, and due to biased collection procedures and the difficulty of assembling datasets large enough for data-hungry models, an unbiased training dataset is difficult to obtain. As a consequence, models trained on such data develop biased classification behavior that systematically disadvantages certain groups, with wide-reaching negative consequences. Beyond the ethical and moral concerns about social scoring itself, AI is therefore also unsuitable for the task, as it would most likely further perpetuate existing biases.
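
The effect of biased training data can be made concrete with a small audit of group-wise error rates. The following sketch is a toy example under stated assumptions: it generates synthetic data in which one group's positive outcomes are systematically under-recorded, trains a standard scikit-learn logistic regression on the biased labels, and then compares false negative rates per group against the true outcomes. The dataset, the 40% corruption rate and the group labels are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic population: one informative score and a binary group label.
score = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B
y_true = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Biased collection: 40% of group B's positive outcomes go unrecorded,
# so the training labels systematically under-credit group B.
y_observed = y_true.copy()
mask = (group == 1) & (y_true == 1) & (rng.random(n) < 0.4)
y_observed[mask] = 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, y_observed)
y_pred = model.predict(X)

# Audit: false negative rate per group, measured against the TRUE labels.
for g, name in [(0, "group A"), (1, "group B")]:
    pos = (group == g) & (y_true == 1)
    fnr = np.mean(y_pred[pos] == 0)
    print(f"{name}: false negative rate = {fnr:.2f}")
# Group B's rate comes out markedly higher: the model has learned the
# bias in the labels, not the underlying behavior.
```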

## Guidelines

State-run social scoring falls under the 'unacceptable' level of risk proposed by the EU AI Act and as such is outlawed by the European Union. However, smaller-scale social scoring, such as the rating of customers by apps, may be classed as high-risk because it determines access to services and benefits: someone with a poor rating on an app like Uber may lose access to the app's transportation services. High-risk applications are strictly regulated by the EU and require thorough assessment prior to deployment. This includes assessing the potential risks posed by the application as well as implementing enough transparency that the algorithm's decision mechanisms can be audited. If an algorithm is auditable, a justification can be provided for each decision, and users can understand why they were, for example, blacklisted from a given service, as the sketch below illustrates.
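
To illustrate what auditability can look like in practice, this sketch extends the earlier blacklist example so that every decision is returned together with a human-readable justification. The `Decision` structure and its fields are hypothetical, chosen for this example rather than prescribed by the EU AI Act.

```python
from dataclasses import dataclass

# Assumed policy parameters, as in the earlier sketch.
BLACKLIST_THRESHOLD = 3.0
MIN_RATINGS = 10


@dataclass
class Decision:
    """A decision bundled with the evidence needed to audit it."""
    blacklisted: bool
    reason: str
    average_rating: float | None = None


def decide(ratings: list[float]) -> Decision:
    """Return an auditable blacklisting decision for one user."""
    if len(ratings) < MIN_RATINGS:
        return Decision(False, f"fewer than {MIN_RATINGS} ratings on record")
    avg = sum(ratings) / len(ratings)
    if avg < BLACKLIST_THRESHOLD:
        return Decision(True,
                        f"average rating {avg:.2f} is below threshold "
                        f"{BLACKLIST_THRESHOLD}",
                        avg)
    return Decision(False, f"average rating {avg:.2f} meets threshold", avg)


print(decide([2, 3, 2, 1, 3, 2, 2, 3, 2, 1]))
# Decision(blacklisted=True,
#          reason='average rating 2.10 is below threshold 3.0',
#          average_rating=2.1)
```

Because each `Decision` carries its own reason, an auditor or an affected user can check exactly which rule fired and on what evidence, which is the kind of transparency the high-risk requirements aim at.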

This entry was written by Nicola Rossberg and Andrea Visentin.