Navigating the EU AI Act: a game-changer for tech regulation
09 August 2024

The European Union is at the forefront of regulating artificial intelligence (AI), with the new AI Act (the Act) being one of the first significant legislative initiatives addressing AI globally. The Act entered into force on 1 August 2024, and the majority of its rules become effective from 2 August 2026. It introduces a risk-based approach to the regulation of AI, classifying AI systems according to the risk they pose to users, with the focus falling on systems that present a "high" risk.
“By guaranteeing the safety and fundamental rights of people and businesses, the Act will support human-centric, transparent and responsible development, deployment and take-up of AI in the EU” – Statement by President von der Leyen on the AI Act.
The Act's extraterritorial reach is particularly noteworthy: it affects not only AI developers and users within the EU, but also organisations established outside the EU that offer AI systems in the EU or whose AI systems produce outputs used within the EU.
Risk-based rules: a quick guide
The Act imposes compliance obligations on providers, deployers, importers, distributors and product manufacturers of AI systems. In a nutshell, these are:
Risk categorisation | Description and examples | Key compliance obligations |
---|---|---|
Unacceptable risk | Systems that are considered a threat to people, including those involving cognitive behavioural manipulation (e.g. voice-activated toys that encourage dangerous behaviour in children) or social scoring leading to disproportionate treatment. | N/A – AI systems that present an unacceptable risk are banned outright. |
High risk | Systems falling within two broad categories: AI used as a safety component of products covered by EU product safety legislation (e.g. toys, medical devices, vehicles); and AI used in specific areas listed in Annex III of the Act, including biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice. | Providers of “high” risk AI systems will be required to run a conformity assessment procedure before the system can be placed on the market, and throughout its lifecycle will need to comply with a range of requirements on testing, data training, cybersecurity, transparency and, in some cases, fundamental rights. |
Limited risk | Systems intended to interact with natural persons or to generate content (e.g. chatbots). | Certain information and transparency obligations apply, including to ensure that users are aware they are interacting with a chatbot and to disclose that content has been generated or manipulated by an AI system. |
Low or minimal risk | Common AI systems, e.g. spam filters and recommender systems. | No specific obligations or requirements under the Act, but other obligations (e.g. data protection laws) will continue to apply. |
The Act also contains additional obligations for general-purpose AI (GPAI) – AI, often trained on a significant amount of data using self-supervision, that is capable of performing a wide range of distinct tasks. In recognition of the potentially high risks inherent in such systems, the Act imposes stricter criteria on them, as well as overarching requirements of “transparency” (i.e. for the individuals concerned to know they are interacting with an AI system) and “explainability” (i.e. the provision of a clear explanation of the model’s decision-making processes).
These concepts are by no means new to the AI discussion, having featured in the European Commission’s 2020 White Paper on Artificial Intelligence, which recommended a requirement for “clear information to be provided as to the AI system’s capabilities and limitations, in particular the purpose for which the systems are intended, the conditions under which they can be expected to function as intended and the expected level of accuracy in achieving the specific purpose”.
Potential penalties under the Act are hefty: the most serious breaches attract fines of up to €35m or 7% of total worldwide annual turnover, whichever is higher, with lower caps applying to other forms of non-compliance.
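To illustrate how the "higher-of" cap works in practice, here is a minimal sketch in Python. The figures used are hypothetical and the function name is our own shorthand; this is arithmetic illustration, not legal advice.

```python
# Illustrative only: the top-tier cap under the Act is the HIGHER of a
# fixed amount (EUR 35m) and a share (7%) of total worldwide annual
# turnover. The turnover figure below is hypothetical.

FIXED_CAP_EUR = 35_000_000   # EUR 35m fixed ceiling (top tier)
TURNOVER_SHARE = 0.07        # 7% of total worldwide annual turnover

def top_tier_fine_cap(worldwide_turnover_eur: float) -> float:
    """Upper bound of a top-tier fine: whichever cap is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_turnover_eur)

# For a hypothetical undertaking with EUR 1bn turnover, 7% (= EUR 70m)
# exceeds the fixed EUR 35m cap, so EUR 70m is the ceiling.
print(f"EUR {top_tier_fine_cap(1_000_000_000):,.0f}")  # EUR 70,000,000
```

In other words, for large undertakings the percentage cap, not the fixed €35m figure, sets the ceiling.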
Get AI-ready: a proactive blueprint
In preparation, organisations with a link to the EU market should take proactive steps to ensure that AI is developed and deployed responsibly and legally. This includes:
- mapping AI systems: conducting a mapping exercise of all AI systems currently in use or in development to determine whether they would be captured by the Act and, if so, identifying their risk category and applicable compliance obligations (a simple inventory sketch follows this list);
- implementing compliance measures: implementing policies, procedures and training to ensure that only compliant systems are used and provided. In particular, developing and implementing a robust compliance strategy for “high” risk AI systems and GPAI, which includes appropriate data governance, accuracy, cybersecurity, transparency and explainability measures;
- monitoring developments: staying informed about any developments to the AI Act, as well as the approach to AI regulation globally;
- data governance: reviewing and strengthening data governance policies, as the Act emphasises the quality of data sets used by AI, which must be free from bias and compliant with data protection and copyright legislation; and
- documentation and record-keeping: maintaining thorough documentation and records of AI system assessments, compliance measures, and data sources to demonstrate adherence to the Act's requirements.
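As a starting point for the mapping exercise above, some organisations maintain a structured inventory of AI systems alongside a provisional risk tier. The Python sketch below is a minimal illustration under simplified assumptions: the class names, tiers and obligation summaries are hypothetical shorthand, not definitions drawn from the Act, and real categorisation requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Simplified mirror of the Act's four risk tiers (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment + lifecycle duties
    LIMITED = "limited"            # information/transparency duties
    MINIMAL = "minimal"            # no Act-specific duties

# Hypothetical headline obligations per tier; not exhaustive.
HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - do not place on the market"],
    RiskTier.HIGH: ["conformity assessment", "testing and data governance",
                    "cybersecurity", "transparency", "record-keeping"],
    RiskTier.LIMITED: ["tell users they are interacting with AI",
                       "label AI-generated or manipulated content"],
    RiskTier.MINIMAL: ["general law still applies (e.g. data protection)"],
}

@dataclass
class AISystemRecord:
    """One row in the organisation's AI inventory."""
    name: str
    business_owner: str
    purpose: str
    risk_tier: RiskTier

    def headline_obligations(self) -> list:
        return HEADLINE_OBLIGATIONS[self.risk_tier]

# Example entry: a customer-service chatbot, provisionally 'limited' risk.
chatbot = AISystemRecord(
    name="support-chatbot",
    business_owner="Customer Operations",
    purpose="answers routine customer queries",
    risk_tier=RiskTier.LIMITED,
)
print(chatbot.headline_obligations())
```

Keeping such a register current also makes the documentation and record-keeping step above considerably easier to evidence.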
For further information on developing your AI compliance strategy, please reach out to our Commercial team.