Adrien Basdevant
February 22, 2024

AI Act, what do we do now?


A legal framework for responsible innovation: this is how the European Union presents the AI Act. Not as a set of constraints, but as an opportunity to build trust in these technologies, both those developed or used internally and those integrated into products and services. Companies are therefore invited to take a proactive approach: not only to comply with the regulation, but also to position themselves as leaders in responsible innovation.

The multiple paradoxes of AI

Ironically, while some of the world's leading engineers and computer scientists agree that "Artificial intelligence doesn't exist" (Luc Julia, Éditions First, 2019), this marketing terminology is about to become a legal qualification. The draft regulation proposed in April 2021 was unanimously approved on 2 February 2024 by the Permanent Representatives Committee (Coreper) of the 27 Member States of the European Union. In practice, the regulation should therefore be adopted in its final version by summer 2024. Its entry into application will be gradual, staggered over 2025 and 2026 depending on the risk level of the AI systems concerned.

The basic principle of this text, which is to regulate uses according to the risks they pose rather than the technology itself, will not be revisited here. However, it should be noted that this principle has not been respected for a very fashionable subset: "general-purpose AI models", a category that covers so-called "generative" AI as well as so-called "foundation" models such as LLMs (large language models).

Regularly updating AI training to include regulatory aspects

Faced with the panic caused just a year ago by the sudden surge in ChatGPT users, these models will not be assessed according to their uses, but on the basis of technical criteria, such as the cumulative amount of compute used to train them, measured in floating-point operations (FLOPs). A bad idea, a flop? The merit of the European Union lies in its willingness to set binding standards, rather than merely being satisfied with codes of conduct. Only time will tell whether the EU has rushed to regulate some of the most innovative models without first carrying out an impact assessment. By way of illustration, many experts believe that taking a raw measure of computation as a legal criterion is not necessarily adequate, particularly as it ignores factors such as the conditions under which microprocessors operate. Other commentators argue that such criteria could penalize European companies. For the latter, which are less endowed financially and in human resources than larger players, documentary compliance represents a more complex obstacle.
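To make the criterion concrete: the compromise text presumes that a general-purpose model poses a "systemic risk" once its cumulative training compute exceeds 10^25 floating-point operations. The minimal Python sketch below illustrates such a threshold check; the function names and the rough "6 × parameters × tokens" estimation rule are illustrative assumptions, not part of the regulation.

```python
# Illustrative sketch only. The threshold reflects the AI Act's presumption
# that a general-purpose model trained with more than 10**25 floating-point
# operations poses a "systemic risk". The estimation formula below is a
# common rule of thumb for dense transformers, assumed here for illustration.

SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute, in FLOPs


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate (~6 * params * tokens) of a training run's compute."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model falls under the compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 2e12 tokens.
    flops = estimate_training_flops(70e9, 2e12)
    print(f"~{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
```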

In this tension, where the balance between regulation and innovation is so delicate to strike, we are not far from a paradox. In any event, this regulation, which as an EU regulation is directly applicable and does not need to be transposed into national law, will still see many refinements. The structure is far from finalized: the Commission will still have to publish guidelines, standards and the like, as well as some twenty pieces of secondary legislation. The AI Office, which will oversee AI models posing a "systemic risk", will also have to publish templates setting out the procedures to be followed, such as those for the summaries relating to model training (model cards, data sheets, red-teaming processes).

Categorization of AI Systems: Identifying your playground

Once the stage is set, what concrete steps can you take to prepare your business? First, identify and categorize AI systems according to the risk levels outlined by the AI Act. High-risk applications, such as assisted medicine, automatic CV sorting or exam scoring, to name a few, will require special attention. This includes technical documentation, testing procedures, and demonstrating compliance with transparency, data security and societal impact requirements. This should be assessed at the level of each AI system or model, rather than at the level of the company as a whole. To verify compliance, France and other Member States will probably designate several supervisory authorities. Among other responsibilities, these authorities will be tasked with ensuring the smooth application of the AI Act, overseeing test environments (sandboxes), carrying out controls and imposing sanctions.
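In practice, this categorization exercise often starts with a simple internal inventory mapping each system to an AI Act risk tier. The sketch below is one hypothetical way to structure such an inventory in Python; the tier names track the regulation's risk categories, but the fields, example entries and classification are illustrative assumptions, not legal advice.

```python
# Hypothetical sketch of an internal AI-system inventory mapped to
# AI Act risk tiers. Field names and entries are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk (e.g. CV sorting, exam scoring, assisted medicine)"
    LIMITED = "limited risk (transparency obligations, e.g. chatbots)"
    MINIMAL = "minimal risk"


@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    tier: RiskTier
    owner: str  # internal team accountable for compliance


inventory = [
    AISystemRecord("cv-screener", "automatic CV sorting for recruitment",
                   RiskTier.HIGH, "HR"),
    AISystemRecord("support-bot", "customer-facing chatbot",
                   RiskTier.LIMITED, "Customer Care"),
]

# High-risk systems require technical documentation, testing procedures
# and transparency measures before deployment.
for record in inventory:
    if record.tier is RiskTier.HIGH:
        print(f"{record.name}: compile compliance file (owner: {record.owner})")
```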

This AI regulation will not operate in isolation. It will need to coexist with existing EU texts such as the GDPR, cybersecurity regulations and digital services provisions, as well as sector-specific regulations. A comprehensive assessment of the interaction between these laws is needed to identify and rationalize possible conflicts and overlaps.

Investing in the transparency and explainability of AI systems

Particular attention should be paid to clarifying how the AI Act interacts with the GDPR in order to avoid ambiguities. As such, the active participation of the EU and its businesses in global forums and processes, such as the G7 Hiroshima process and the EU-US Trade and Technology Council, is crucial to foster regulatory alignment and compatibility.

It falls upon leaders to proactively engage with national and European regulators. Upstream consultations – such as those currently launched by the French data protection authority (CNIL) on the use of data in the context of model training – can help navigate these emerging fields and adjust processes accordingly. Regulators also need feedback from practical experiences to refine their guidelines in a way that is as close as possible to operational reality. This is especially true in a field where technological developments are rapid.

Documentation: building your compliance file

Documentation plays a key role. It should detail the design, deployment and operation of business-specific AI systems, including the measures taken to mitigate identified risks. This starts with establishing an AI governance committee to oversee regulatory compliance, assess ethical risks, and ensure that the principles of integrity and transparency are enshrined in all AI projects. Raising awareness and training staff on AI issues and the requirements of the AI Act is also essential. This includes technical training for developers and ethical awareness for all employees. Rather than banning the use of these tools internally, it makes more sense to start from use cases and business needs, refining a risk matrix and creating specific processes.
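By way of illustration, the compliance file can be kept as structured records, one per system, capturing the elements listed above (design, deployment, operation, risk mitigation). The sketch below is a hypothetical example of such a record; every field name is an assumption made for illustration.

```python
# Hypothetical sketch of a per-system compliance record covering the
# documentation elements mentioned above. Field names are illustrative.
from dataclasses import dataclass, field


@dataclass
class ComplianceRecord:
    system_name: str
    design_summary: str          # architecture, training data sources
    deployment_context: str      # where and how the system is used
    operating_procedures: str    # monitoring, human oversight
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


record = ComplianceRecord(
    system_name="cv-screener",
    design_summary="ranking model trained on anonymised CVs",
    deployment_context="pre-screening for internal recruitment",
    operating_procedures="monthly bias audit; human review of rejections",
    identified_risks=["indirect discrimination on protected attributes"],
    mitigations=["feature exclusion", "periodic fairness testing"],
)
print(record.system_name, "-", len(record.identified_risks), "risk(s) documented")
```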
