High-Risk AI Systems

For an AI system to be classified into the “high-risk” category, there are two different criteria to be considered:

  • an AI system is considered “high-risk” if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation listed in Annex II of the EU AI Act [2] (which includes, for example, the Directive on the safety of toys, the Regulations on medical devices, the Regulation on personal protective equipment, …), and the product is required to undergo a third-party conformity assessment pursuant to the legislation listed in Annex II;

  • even if the first criterion does not apply, an AI system is considered “high-risk” if it is included among the systems referred to in Annex III of the EU AI Act [2]. This criterion does not apply to systems that do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons according to the criteria listed in Article 6 of the EU AI Act [2], unless the system performs profiling of natural persons.

Thus, it is important to look into Annex III of the EU AI Act [2], where high-risk systems are listed. In particular, high-risk systems are those used in the following areas (a simplified sketch of the overall classification logic is given after the list):

  • Biometrics, for example, remote biometric identification systems and emotion recognition AI.

  • Critical infrastructure, for example, AI systems used as safety components in water supply infrastructure or in road traffic.

  • Education and vocational training, for example, systems used to determine access to an educational institution, proctoring systems, and AI systems used to evaluate learning outcomes.

  • Employment, workers’ management and access to self-employment, for example, systems used for recruitment or to make decisions related to promotions.

  • Access to and enjoyment of essential private services and essential public services and benefits, for example, systems used to evaluate the eligibility of natural persons for healthcare services and systems used to establish a person’s credit score.

  • Law enforcement, for example, polygraphs and systems used by authorities to evaluate the reliability of evidence during an investigation.

  • Migration, asylum, and border control management, for example, systems used by authorities to assess the risk of irregular migration posed by a natural person who intends to enter the territory of a Member State.

  • Administration of justice and democratic processes, for example, systems used to assist a judicial authority in applying the law to a concrete set of facts, and systems used to influence the outcome of an election.
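As a minimal illustration of the classification logic described above, the following Python sketch encodes the two criteria as a simple boolean decision. The function name and its boolean inputs are simplifying assumptions introduced here for readability; they are not terms defined in the EU AI Act, and the actual legal assessment is far more nuanced.

```python
def is_high_risk(
    annex_ii_safety_component: bool,        # criterion 1: safety component of, or itself, a product under Annex II
    requires_third_party_assessment: bool,  # criterion 1: third-party conformity assessment required
    listed_in_annex_iii: bool,              # criterion 2: falls under an area listed in Annex III
    poses_significant_risk: bool,           # Article 6 filter: significant risk of harm to health, safety, or fundamental rights
    performs_profiling: bool,               # profiling of natural persons overrides the Article 6 filter
) -> bool:
    """Illustrative sketch of the high-risk classification described above."""
    # First criterion: Annex II product plus mandatory third-party conformity assessment.
    if annex_ii_safety_component and requires_third_party_assessment:
        return True
    # Second criterion: Annex III systems, unless they do not pose a significant risk;
    # that exception never applies when the system performs profiling of natural persons.
    if listed_in_annex_iii:
        return performs_profiling or poses_significant_risk
    return False


# Example: an emotion recognition system (Annex III, biometrics) that profiles natural persons.
print(is_high_risk(False, False, True, False, True))  # -> True
```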

This entry was adapted from the Artificial Intelligence Act by Francesca Pratesi and Umberto Straccia.