Society and Democracy
In Brief#
The rapid pace of technological development is impacting society in new and partly unpredictable ways. With AI being integrated into all facets of life, ranging from recommender systems for movies to automatic passport control and facial recognition at airports, it is important to understand the negative ramifications this technology may have on society and democracy as a whole. Previously, the development of such technologies was frequently guided by the interests of tech giants, with profit motivating research [2]. While this line of development has produced some impressive systems, it gives little consideration to the protection of citizens and the regulation of these technologies for the greater societal good. To counter this trend, the European Union introduced the AI Act, which stipulates legislation for new AI systems depending on the level of risk posed by the system. However, with this increase in legislation, fears are rising that Europe could fall behind other, less regulated countries in the development of new software. Legislators therefore need to strike a balance between regulation and development, protecting democracy and society without adversely affecting the European AI research sector.
More in Details#
While there is currently only limited research on the impact of AI on society and democracy, many experts are concerned about potential adverse outcomes if AI is not properly legislated. The AI Act introduced by the European Union is the first of its kind to regulate AI in order to protect both end-users and society as a whole. It takes a risk-based approach to the classification of AI, gauging the risk posed by a proposed system to determine the legislation applicable to it. The following first briefly discusses the levels of risk defined by the AI Act. In addition, we introduce two sets of supplementary considerations proposed by Catelijne Muller, a member of the EU High-Level Expert Group on Artificial Intelligence. The first is a series of human rights established by the European Union which should not be endangered by any proposed system. The second is a series of red lines which should under no circumstances be crossed by a proposed application. These considerations may be used as a supplement to the AI Act to further ensure system safety. However, it is important to note that the AI Act must be consulted and abided by in all scenarios.
First, a risk assessment of the developed system must be conducted. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) provides a tool for self-assessing the expected risk of the algorithm. Depending on the assessed risk level, different legislation may apply to the AI system, as reported in the ../main/Ethical_Legal_Framework/AI_ACT entry.
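The risk-based structure described above can be illustrated with a minimal sketch. The four tiers below (unacceptable, high, limited, minimal) are the ones defined by the AI Act; the keyword-to-tier mapping and the function name `classify_use_case` are hypothetical illustrations only, not an official classification, and a real assessment must follow the ALTAI questionnaire and the AI Act itself.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. remote biometric identification
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters, game recommenders


# Hypothetical mapping for illustration only; real classification follows
# the AI Act's annexes, not a lookup table.
_TIER_BY_USE_CASE = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "remote_biometric_identification": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}


def classify_use_case(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case, defaulting to HIGH so
    that unrecognised applications trigger a full manual assessment."""
    return _TIER_BY_USE_CASE.get(use_case, RiskLevel.HIGH)
```

The conservative default (unknown systems map to the high-risk tier) mirrors the precautionary stance of the self-assessment process: when in doubt, assume stricter obligations apply.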
Human-Right Considerations#
Next to the official EU guidelines, developers should consider the impact of the developed technology in the context of its proposed application. If the current guidelines do not cover this area of deployment but the deployment of the AI nonetheless carries a high risk, developers should collaborate with local legislators to ensure that the algorithm can be safely implemented. The following human rights might be considered in the assessment of AI safety in addition to the EU guidelines (source: Catelijne Muller, a member of the EU High-Level Expert Group on Artificial Intelligence):
A right to human autonomy, agency and oversight over AI.
A right to transparency / [explainability](../Transparency/XAI.md) of AI outcomes, including the right to an explanation of how the AI functions, what logic it follows, and how its use affects the interests of the individual concerned, even if the AI system does not process personal data; where it does, such a right already exists under the GDPR [1].
A separate right to physical, psychological and moral integrity in light of AI profiling and affect recognition.
A strengthened right to privacy to protect against AI-driven mass surveillance.
Adapting the right to data privacy to protect against indiscriminate, society-wide online tracking of individuals using personal and non-personal data (which often serves as a proxy for personal identification). Diverging from these rights in exceptional circumstances, such as for security purposes, should only be allowed under strict conditions and in a proportionate manner.
Red-Lines#
Finally, all developers should be aware of red lines in the development and deployment of AI. While moral and ethical standards may differ between cultures and individuals, the following considerations are proposed by Catelijne Muller, one of the members of the EU High Level Expert Group on Artificial Intelligence:
Indiscriminate use of facial recognition and other forms of bio-metric recognition either by state actors or by private actors.
AI-powered mass surveillance (using facial/bio-metric recognition but also other forms of AI tracking and/or identification such as through location services, online behaviour, etc.).
Personal, physical or mental tracking, assessment, profiling, scoring and nudging through bio-metric and behaviour recognition.
AI-enabled Social Scoring.
Covert AI systems and deep fakes.
Human-AI interfaces.

Exceptional use of such technologies, such as for national security purposes or medical treatment or diagnosis, should be evidence-based, necessary and proportionate, and only be allowed in controlled environments and (if applicable) for limited periods of time.
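The red lines above can be folded into an early design-review step. The sketch below is a hypothetical screening helper, not an official checklist: the `RED_LINES` labels and the `crossed_red_lines` function are illustrative names for a practice a team might adopt, and a clear result from it does not replace consulting the AI Act.

```python
# Short labels summarising the red lines listed above (illustrative only).
RED_LINES = (
    "indiscriminate biometric recognition",
    "AI-powered mass surveillance",
    "personal tracking, profiling, scoring or nudging",
    "social scoring",
    "covert AI systems or deep fakes",
)


def crossed_red_lines(system_capabilities: set[str]) -> list[str]:
    """Return the red lines a proposed system would cross.

    An empty list means no listed red line is hit; the AI Act and a full
    risk assessment must still be consulted regardless of the outcome.
    """
    return [line for line in RED_LINES if line in system_capabilities]
```

A non-empty result would flag the design for the exceptional-use scrutiny described above (evidence-based, necessary, proportionate, and limited in time and environment).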
Bibliography#
- [1] European Parliament & Council. General Data Protection Regulation. 2016. OJ L119, 4/5/2016, p. 1–88.
- [2] Paul Nemitz. Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180089, 2018.
This entry was written by Nicola Rossberg and Andrea Visentin.