CE European Union
  • 35 - Information technology, office machines

Software, applications of information technology (for both: included when embedded in hardware); AI as a component of products subject to product safety legislation (any code); Software (ICS 35.080), Applications of information technology (ICS 35.240)

Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final) (108 page(s), in English; 17 page(s), in English)

The draft Regulation defines an 'artificial intelligence system' (AI system) as meaning software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The draft Regulation introduces provisions that will regulate the development, marketing and placing on the European Union market of AI systems. These rules are limited to the minimum requirements necessary to protect the safety and fundamental rights of persons, considering the risks and challenges posed by AI systems, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market. The draft rules differ according to the degree of risk AI systems are considered to present.

AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned in the European Union. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g. toys using voice assistance to encourage dangerous behaviour in minors) and systems that allow 'social scoring' by governments. The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to exploit the vulnerabilities of specific vulnerable groups such as children or persons with disabilities, in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.

In addition, the 'real-time' use of remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Where such use is permitted, it is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

High-risk AI systems will be subject to strict obligations before they can be placed on the European Union market.

AI systems identified as high-risk include AI technology listed in the Annexes of the proposal and used in the following broad areas:

·         Real-time and ex post remote biometric identification systems;

·         Critical infrastructures (e.g. transport) that could put the life and health of people at risk;

·         Educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams);

·         Safety components of regulated products subject to third party conformity assessment under EU sectoral product safety legislation (e.g. AI application in robot-assisted surgery);

·         Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);

·         Essential private and public services (e.g. credit scoring denying people the opportunity to obtain a loan);

·         Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);

·         Migration, asylum and border control management (e.g. verification of authenticity of travel documents);

·         Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Requirements for high-risk AI systems concern:

·         Adequate risk assessment and mitigation systems;

·         High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;

·         Logging of activity to ensure traceability of results;

·         Detailed documentation providing all the information on the system and its purpose necessary for authorities to assess its compliance;

·         Clear and adequate information to the user;

·         Appropriate human oversight measures to minimise risk;

·         High level of robustness, security and accuracy.

Public and private providers of such high-risk AI systems will have to follow conformity assessment procedures to demonstrate compliance with the abovementioned requirements before those systems can be placed on the Union market or used in the Union. To facilitate this, the draft Regulation includes provisions on the future drawing up of harmonised technical standards to be adopted by the European standardisation organisations (CEN/CENELEC and ETSI) on the basis of a mandate from the European Commission. Predictable, proportionate and clear obligations are also placed on providers and users of high-risk AI systems to ensure safety and respect for existing legislation protecting fundamental rights throughout the AI systems' whole lifecycle.

AI systems with specific transparency obligations: When using AI systems such as chatbots, people should be aware that they are interacting with a machine so that they can take an informed decision whether to continue or step back. People should also be informed when they are exposed to emotion recognition and biometric categorisation systems, as well as to deepfakes.

Minimal risk: Subject to compliance with existing legislation, the proposal allows the free use of all other AI applications that are not prohibited, qualified as 'high-risk' or subject to specific transparency obligations. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk to people's rights or safety, but providers of non-high-risk AI systems can voluntarily adhere to codes of conduct to demonstrate the trustworthiness of their applications.

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, specific measures are proposed to support small-scale providers and users as well as regulatory sandboxes to facilitate responsible innovation.