19 December 23 | Lisboa
TOL NEWS 55, AI
Artificial Intelligence (AI) Act

Council and the European Parliament reached a provisional agreement on the Artificial Intelligence (AI) Act

On 9 December, the Council and the European Parliament reached a provisional agreement on the Artificial Intelligence (AI) Act, a proposal for the world's first comprehensive set of rules on AI.

This act strives to ensure that AI systems placed on the European market and used in the EU are safe and comply with EU values and fundamental rights. Additionally, the proposal aims to stimulate investment and innovation in the field of AI in Europe.

Firstly, this agreement harmonises the definition of an AI system with the approach proposed by the Organisation for Economic Co-operation and Development (OECD), providing clear criteria for distinguishing AI from simpler software systems.

As for the scope of application, the agreement emphasises that the act must not affect member states' competences in matters of national security and will not apply to systems used exclusively for military or defence purposes.

The act will likewise not apply to AI systems used solely for research and innovation purposes, or to the use of AI for non-professional purposes.

Generally speaking, AI is regulated on the basis of the risk it poses, which means the greater the risk of harm to society, the stricter the applicable rules are.

AI systems presenting only a limited risk are subject to light transparency obligations, such as disclosing that content was generated by AI. At the other end of the scale, certain uses of AI are prohibited outright, including:

  • Cognitive behavioural manipulation
  • The untargeted scraping of facial images from the Internet or CCTV footage
  • Emotion recognition in the workplace and educational institutions
  • Social scoring
  • Biometric categorisation to infer sensitive data, such as sexual orientation or religious beliefs
  • Some cases of predictive policing of individuals

Additionally, several changes were made to the Commission's initial proposal regarding the use of AI systems for law enforcement purposes by the responsible authorities. These changes include an emergency procedure allowing law enforcement authorities, in exceptional situations, to deploy a high-risk AI tool that has not been properly approved.

Rules have also been defined for foundation models, large systems capable of performing a wide range of different tasks, such as generating video, text, images and computer code. These models are subject to specific transparency obligations before they can be placed on the market.

Furthermore, an AI Office will be created to oversee the most advanced AI models, to promote standards and testing practices, and to ensure the common rules are enforced in all member states.

This AI Office will be advised by a scientific panel of independent experts, notably on the emergence of "high-impact" foundation models (foundation models trained on large amounts of data and performing well above average) and on monitoring their safety risks.

The AI Board will continue to operate as a coordination platform and an advisory body to the Commission. Finally, an advisory forum for stakeholders (such as industry representatives, small and medium-sized enterprises and academia) will be set up to provide technical expertise to the AI Board.

Where penalties are concerned, violations of the AI Act are subject to fines set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher.

Following this provisional agreement, the text must now be submitted to the representatives of the member states for approval. The AI Act is expected to become applicable two years after its entry into force, with exceptions for certain specific provisions.
