AI Act
Bird's-AI view
Tuesday, Nov 19, 2024 • 10 - 20 minutes • Melle van der Heijden
AI is the talk of the town. A 2019 study by MMC found 1 in 12 European startups put AI at the heart of their value proposition. Interestingly, the same study found only 60% of these startups use AI technology in any significant way. This was before September 2020, when the hype really kicked off in the wake of the GPT-3 launch. Ask the average Joe about AI, and thoughts will go to chatbots or text-to-image AI. Undoubtedly, you are aware AI is already intertwined with our everyday lives in a myriad of ways; through insurance, healthcare, and finance, to name just a few.
The bird's-eye view
Let's begin with some general AI Act wisdom. In July 2024, the AI Act was published. In the next few years, the legislative measures stipulated within the AI Act will go into effect in a phased manner, starting with the prohibited practices this February. In this article I aim to present a complete yet concise overview of all essential components of the AI Act. For each of the major components, I will try to give a conceptual summary of the actions needed to comply with its legislative articles. I want to convey a general sense of the scope, effort and expertise required by this major piece of legislation. This article will primarily focus on the measures contained within the AI Act and pass over much of the institutional structures concerning reporting and enforcement practices.
The AI Act applies to all suppliers involved with AI systems. To start, we need to know what 'AI systems' are; the AI Act defines them as follows:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
To get an understanding of what this means for your business, we have to start by defining your role within the AI Act framework. Possible roles are provider, deployer, importer, or distributor. Each of these is responsible for a different set of measures. Multiple roles may apply to you; in that case, you are responsible for their combined set of measures.
For the AI system you’re involved with, there’s a framework to classify the risk it may pose. Minimal risk AI systems get off easy: these are not subjected to any legislation whatsoever. Limited risk AI systems are only subject to transparency laws. High risk AI systems form the brunt of the AI Act, including human oversight. At Xaiva, we firmly believe human oversight is vital for responsible AI practices. We wrote a separate article diving into more detail. Finally, unacceptable risk AI systems are prohibited altogether.
Independent of the risk classification framework, general purpose AI systems are subject to extra measures. All AI systems in this category need to adhere to strict record-keeping & transparency practices. An exception is made for open source systems. So called 'foundation models', regardless of being open- or closed source, are watched more closely; these models are subject to strict testing & reporting measures. This allows for a level of pre-emptive adaptiveness fitting their general nature. The definition for these foundation models, general purpose AI systems with systemic risk, is mainly based on the amount of resources required to train them.
Non-compliance with legislation can lead to fines ranging from €15 million or 3% of turnover to €35 million or 7% of global turnover, depending on the violation and the size of the company. This is why it is vital for organizations to fully understand the EU AI regulation and have the right tools in place that ensure compliance.
Looking closely at the Act, a dubious exception can be found for the defence industry. AI systems produced by either private or public entities that are intended for military, defence or national security purposes are entirely exempt from the AI Act.
“This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.”
As if to counterbalance, a similar exception is made for academic institutions.
“This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.”
Timeline
To give affected parties time to prepare, and to provide national bodies space to set up the right institutions and protocols streamlining ease of compliance, the AI Act's legislative measures will be implemented in stages. This timeline shows when different components of the AI Act will go into effect. It's important to note that legislation for 'high risk AI systems' will come into effect in two phases; a particular subgroup of these systems won't be subjected to the AI Act until August 2027.
February, 2025
- Prohibition of unacceptable risk AI practices
August, 2025
- General purpose AI systems
- Institutions: Notifying authorities, AI Office & European Artificial Intelligence Board
August, 2026
- Everything excluding: high-risk AI systems classified as such due to membership of the 'Union Harmonisation Legislation' industries listed in Annex I
August, 2027
- High-risk AI systems classified as such due to membership of the 'Union Harmonisation Legislation' industries listed in Annex I
Determining your role & AI system
The first two questions we need to answer are ‘What is your role in the production chain?’ and ‘Which risk and purpose classifications fit your AI system?’ The answer to these two questions will provide the lens you need for exploring the AI Act. This allows you to skip over entire chunks, and stick to the parts that are relevant to you.
Role
The AI Act describes 4 different roles: provider, deployer, distributor and importer. As main developers of AI models, providers are by far the biggest players. Naturally, they have to deal with most of the legislation within the AI Act. The deployer is responsible for practicing human oversight. Importers & distributors are responsible for assessment of the supply chain and verification of an AI system’s conformity to the AI Act.
In most cases, the provider will be the entity developing and training the AI model.
“‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.”
The deployer is a public or private institution using the outputs of an AI system, or functionally offering the system to its customers.
“‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
The importer imports an unaltered AI system from a provider established in a third country and places it on the Union market.
“‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.”
The distributor is a party in the supply chain other than the provider or the importer, who makes an AI system available on the European Union market.
“‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.”
It’s important to note that these definitions may overlap. It is especially common for an entity to be both the provider and deployer of an AI system. There are also exceptions in which entities transition into the role of provider, for example by making alterations to an AI system.
Transition into provider
As highlighted in Article 25, if any of the following scenarios occurs, any operator (such as a distributor or deployer) may transition into a provider and, consequently, be obligated to fulfill the responsibilities associated with high-risk AI and general-purpose AI as a provider:
- If they associate their name or trademark with a high-risk AI system that has already been introduced to the market or put into service. However, contractual exemptions can be applied.
- If they make substantial modifications to a high-risk AI system that has already been placed on the market, and it continues to pose a high risk in its new use.
- If they alter the intended purpose of an AI system or general-purpose AI that was not originally classified as high-risk AI but becomes high-risk AI due to the new modifications.
Risk & Purpose
Risk classifications
Minimal risk
Unless classified otherwise, AI systems are considered to pose minimal risk. Examples are systems optimizing production processes or predictive maintenance.
Limited risk
These systems are allowed but are subject to minor transparency obligations. Examples of these systems are chatbots and text-to-image models.
High risk
AI systems are considered high risk when they fall under a specific set of applications or when they are used as safety components of products used within a specific set of industries.
Unacceptable risk
AI practices are prohibited for certain applications like real-time biometric identification, social scoring and manipulation of human behaviour, opinion and decisions.
Purpose classifications
Narrow AI
All AI systems that cannot be classified as ‘general purpose AI’ are considered ‘narrow’.
General purpose AI
An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market.
- No systemic risk
Any model that does not meet the requirements for foundation models is classified as having no systemic risk.
- Systemic risk
These are the so-called foundation models; classification is largely based on computation costs and performance benchmarks.
Risk classifications
Every AI system will be classified into one of four possible risk categories. The risk classification is based on the intended application and sector of the AI system. Together, the obligations set out under the different classifications form the backbone of the AI Act.
Both the definition of limited risk AI systems and their obligations are entirely encapsulated in Article 50. It centers around the idea that users need to be aware when interacting directly with AI systems or the content generated by these systems. Included are AI systems:
- Intended to interact directly with persons
- Synthetically manipulating or generating audio, image, video, or text content
- Generating or manipulating image, audio, or video content constituting deep fakes
- Generating or manipulating text for public information purposes
- Used for emotion recognition & biometric categorization purposes
High risk AI systems carry the brunt of the legislation, mostly for providers. This includes the topics: documentation & record keeping, data governance, risk management system, post-market monitoring, human oversight, metrics and transparency - including the legislation set out under limited risk AI. As chance would have it, the most convoluted set of measures falls onto the most convoluted class definition. A system is classified as high risk via membership in one of two separate lists: it either falls under a set of applications or it is part of a set of industries and products. The set of applications can be found in Annex III and includes the following:
- Biometrics
Exceptional applications of remote biometric identification, biometric categorisation and emotion recognition systems that are not prohibited by Article 5.
- Critical infrastructure
Essential services for the maintenance of vital societal functions or economic activities. These are defined in the Critical Entities Resilience Directive (CER).
- Education
Admissions, assessments, evaluating & steering learning outcomes or the monitoring & detection of prohibited behaviour during tests.
- Employment
For purposes of recruitment, promotion or termination, including targeted advertisement; deciding on work-related contractual relationships or the allocation of tasks; used to monitor & evaluate the performance & behaviour of a worker.
- Essential services
Evaluating eligibility for essential public assistance benefits and services; risk assessment & pricing in case of life & health insurance; evaluation of creditworthiness or establishment of credit score; evaluation & classification of emergency calls as well as establishment of emergency priorities.
- Law enforcement
Used in polygraphs or similar tools; assessment of risk to become a victim of criminal offences; evaluation of evidence reliability & prosecution of criminal offences; profiling during detection, investigation & prosecution of criminal offences; risk assessment of (re-)offending based on profiling and assessment of behavioural & criminal traits.
- Migration control
Used in polygraphs or similar tools; assessment of security risks; also if used for examination of asylum, visa & residence permit applications or detection, recognition & identification of individuals.
- Democracy and judicature
Research and interpretation of facts & law; application of law; influence of voting behaviour & outcome, excluding those not directly interacting with voters.
The AI system may also be used within several sectors and products already affected by Union Harmonisation Legislation, either as a safety component or as the product itself. These are listed in the less accessible Annex I. Thankfully, Deloitte has presented this information in a more readable format, which we’ll use here. This is the final piece of legislation to come into effect, in August of 2027.
- Civil aviation
All airports or parts of airports that are not exclusively used for military purposes, as well as all operators, including air carriers, providing services at airports or parts of airports that are not exclusively used for military purposes.
- Agricultural and forestry vehicles
Tractors as well as trailers and interchangeable towed equipment.
- Two-, three-wheel vehicles and quadricycles
Get ready for a unicycle boom.
- Marine equipment
On board of EU ships.
- Personal protection
Personal protective equipment designed and manufactured to be worn or held by a person for protection against one or more risks to that person’s health or safety, as well as some interchangeable components and connection systems for the equipment.
- Appliances burning gaseous fuels
Appliances burning gaseous fuels used for, amongst others, cooking, refrigeration, air-conditioning, space heating, hot water production, lighting or washing; also, all fittings that are safety devices or controlling devices incorporated into an appliance.
- Rail system
Rail system including vehicles, infrastructure, energy and signaling system.
- Motor vehicles and trailers; systems, components and separate technical units intended for such vehicles
Motor vehicles and their trailers; including, amongst others, autonomous driving.
- Civil aviation, European air space, aerodromes
The design and production of products, parts and equipment to control aircraft remotely by a natural or legal person, as well as the design, production, maintenance and operation of aircraft.
- Medical devices
Medical devices for human use and accessories for such devices, as well as clinical investigations concerning such medical devices and accessories.
- In vitro diagnostic medical devices
In vitro diagnostic medical devices for human use and accessories for such devices.
- Machinery
Machinery, interchangeable equipment and lifting accessories (e.g., robots).
- Toys
Products designed or intended for use in play by children under 14 years of age (e.g., connected toys and IoT devices).
- Recreational craft and personal watercraft
Recreational craft as well as propulsion engines installed on watercraft.
- Lifts
Lifts permanently serving buildings and constructions, mainly for the transport of persons with or without goods.
- Equipment and protective systems intended for use in potentially explosive atmospheres
Equipment and protective systems intended for use in potentially explosive atmospheres, as well as components incorporated within the equipment or protective system.
- Pressure equipment
The design, manufacture and conformity assessment of pressure equipment and assemblies.
- Radio equipment
Any kind of equipment connected via radio waves (e.g., WiFi, Bluetooth, 5G in laptops, phones, IoT devices).
- Cableway installations
New cableway installations designed to transport persons, modifications of cableway installations requiring a new authorization, and subsystems and safety components for cableway installations.
Prohibited practices are detailed in Article 5. The formulation of this text can get rather precise regarding specifics and exceptions. We will give a summary largely based on the same Deloitte article. When in doubt, please consult the official AI Act document. The following practices shall be prohibited starting the 2nd of February, 2025.
- AI-enabled manipulative techniques
Persuasion to engage in unwanted behaviours, nudging for subversion and impairment of autonomy, decision-making and free choices causing or reasonably likely to cause harm.
Excluding: common and legitimate commercial practices, e.g., advertising.
- Biometric categorisation
Use of individual biometric data, such as face or fingerprint, to deduce or infer political opinion, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation.
Excluding: labelling, filtering or categorisation of lawfully acquired biometric datasets, as well as law enforcement purposes.
- Real-time remote biometric identification
In publicly accessible spaces for the purpose of law enforcement. For targeted victim search or terrorist threat mitigation. Persons may be targeted when suspected of any of the crimes mentioned in Annex II. May only be used by secret services or with prior authorisation granted by a judicial authority and only to confirm the identity of persons.
- Social scoring
Valuation or classification of persons or groups based on multiple data points related to social behaviour and leading to negative treatment.
Excluding: lawful evaluation practices of persons done for a specific purpose in compliance with national and Union law.
- Risk assessments
Assessing persons' traits and characteristics to predict the risk of committing a criminal offence, unless in support of a human assessment of the involvement of a person objectively and verifiably linked to a crime.
- Facial recognition
Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
- Identifying emotions
Identifying or inferring the emotions or intentions of persons on the basis of their biometric data in the areas of workplace and education institutions.
Excluding: medical reasons such as therapy and safety reasons such as driver awareness assessment.
Purpose classifications
Classifications of ‘general purpose AI’ come on top of the four risk classifications and have to do with the ‘generality’ of the AI system. This is synonymous with the scope of different tasks it is able to perform. A general purpose AI system may be further classified as ‘general purpose AI system with systemic risk’. This classification is assigned to models with high training resource cost and is targeted at what are commonly called foundation models. Most of the class definitions are amendable and will be subject to change as technology develops.
Your AI system is considered general purpose if it is capable of performing a broad range of tasks. Due to their broad scope, systems in this category may pose unforeseeable risks. Providers of these systems are subjected to an extra set of testing and transparency measures. These measures will supplement any measures already mandated by the AI system’s risk classification.
“An AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development, and prototyping activities.”
Exceptions are made for open source models, as well as models that are used in a research or development setting.
“The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks.”
There’s a little bonus for any collector among you who wants to tick all the boxes, albeit a little expensive. This last box goes out to models commonly referred to as foundation models. Your general purpose AI System may get this extra ‘systemic risk’ mark when it meets any of the following conditions:
- “When the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵.”
- “It has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks.”
- “Based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel [ruling impact or capabilities to be comparable to models set out in paragraph 1(a) based on criteria set out in Annex XIII].”
The criteria set out in Annex XIII mainly indicate training costs. The following features are mentioned: number of parameters, quality & size of the dataset, floating point operations and other training time indicators, number of input and output modalities, existing benchmarks, market impact and user count. New models may be classified as systemic risk when they compare with currently registered systemic risk models based on these criteria.
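To make the compute criterion concrete, here is a minimal sketch of the threshold check. The 6 × parameters × tokens rule of thumb for estimating dense-model training compute, the function names and the example figures are illustrative assumptions and not taken from the AI Act; only the 10²⁵ FLOP threshold comes from the Act itself.

```python
# A minimal sketch of the systemic-risk compute check. The 6 * parameters * tokens
# rule of thumb for dense-model training compute is a common heuristic, not part
# of the AI Act; only the 1e25 FLOP threshold comes from the Act.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute in floating point operations."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True when the estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))
```

In this example the estimate lands just below the threshold, so the model would not be presumed to pose systemic risk on the compute criterion alone; the Commission could still designate it based on the other criteria above.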
Risk Obligations
To quickly summarize the four different risk levels: minimal risk AI systems can freely be brought to market, without any legislative obligations. Unacceptable risk AI systems are downright prohibited. This means only two risk classifications are subjected to legislative measures. Limited risk is subjected to limited measures, mostly concerning transparency of information. High risk AI systems are subjected to extensive legislation across different topics. Keep in mind these classifications are separate from legislative measures introduced for general purpose AI systems; any of the three permissible risk classes may still be subject to the legislation discussed in the next chapter.
Limited risk
The transparency obligations for AI generated content are independent from other obligations set out in the AI Act; AI systems in other risk and purpose categories are also required to comply with these transparency rules in addition to their specific regulatory requirements. A few things to mention beforehand. All information is to be provided ex-ante and in a clear, distinguishable manner, conforming to accessibility requirements. The AI Office - a new European Union institution - will encourage and facilitate the drawing up of codes of practice at Union level to facilitate effective implementation. Each of these obligations has exceptions for law enforcement purposes.
- Direct interaction
AI systems shall be designed and developed in such a way as to ensure persons concerned are informed that they are directly interacting with an AI system.
- Marked content
AI generated audio, image, video or text shall be marked in a machine-readable format and detectable as artificially generated or manipulated (see the sketch after this list). Excluded are AI systems that perform an assistive function for standard editing or do not substantially alter the input data or the semantics thereof.
- Emotion recognition & biometric categorization systems
Insofar as these systems are legal, persons exposed to them need to be properly informed about how they are affected by these systems.
- Full disclosure
Is required for artificially generated or manipulated image, audio or video content constituting deep fakes, as well as generated or altered text with the purpose of informing the public on matters of public interest, except where the content has undergone human review and a natural or legal person holds editorial responsibility. Where content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, it’s sufficient to disclose the existence of such content in ‘an appropriate manner that does not hamper the display or enjoyment of the work’.
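As an illustration of what a machine-readable marking could look like in practice, here is a minimal sketch that writes an 'AI-generated' disclosure record as a JSON sidecar next to the content. The AI Act does not prescribe this format; real deployments may rely on watermarking or provenance standards instead, and all field and system names below are hypothetical.

```python
# A minimal sketch of a machine-readable "AI-generated" marker, assuming a simple
# JSON sidecar format. The AI Act does not prescribe this format; providers may
# instead rely on watermarking or provenance standards. All names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone


def build_disclosure_record(content_bytes: bytes, system_name: str) -> dict:
    """Create a machine-readable record declaring a piece of content as AI generated."""
    return {
        "ai_generated": True,
        "generating_system": system_name,  # hypothetical system identifier
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }


article_text = b"Example of synthetically generated text."
record = build_disclosure_record(article_text, "ExampleTextGenerator-1.0")

# Store the record next to the content, e.g. as a .json sidecar file.
with open("article.ai-disclosure.json", "w") as sidecar:
    json.dump(record, sidecar, indent=2)
```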
High risk
Naturally, AI systems falling under the highest risk category are most heavily regulated. Most of the measures are reserved for the providers of AI systems. Deployers are responsible for executing human oversight measures. Importers and distributors are responsible for verifying the supplier complies with the AI Act. Risk classification measures stack; high risk AI systems also have to comply with the transparency measures detailed in the section 'Limited risk'.
Measures for providers of high risk AI systems are by far the most expansive. We've ordered them into 3 sections: Information Management, Quality Management, and Human Oversight. Some of the AI Act measures may overlap with other Union legislation. The Union Harmonisation Legislation and the Union Financial Services Law are often mentioned throughout the AI Act. You are encouraged to integrate measures and procedures from different legislative pieces into a single comprehensive protocol.
Information Management
Keep technical documentation and logs. Records may be required for the proper implementation of the risk management & post-market monitoring systems discussed below. Or, depending on the type of information, it is to be made available downstream to deployers, who in turn will share it with distributors and importers. Alternatively, the information may need to be made available to a Union authorized representative upon request.
- General description of AI system
This includes its intended purpose, requirements, a product overview and instructions for use.
- Detailed description of AI system
How the AI system was developed, what key design choices were made, what the system is designed to optimize, computational resources used, datasheets detailing which data was used in what way, validation & testing procedures and cybersecurity measures.
- Detailed information about the Quality Management System
Detailed descriptions of the model’s inputs, the risk management system, the post-market monitoring system, data management protocols, human oversight measures, as well as assessments of potential risk and test & validation reports; dated and signed.
- Copy of declaration of conformity
This document is part of the conformity assessment procedure which is to be executed prior to the product being placed on the market.
- Automated recording of events
Keep records of events during the lifetime of the AI system; these records are to be used in risk management and post-market monitoring and have to be shared with authorized representatives upon reasonable request (a minimal logging sketch follows this list).
- Metrics
Performance in accuracy, benchmarks, robustness & cybersecurity.
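To give a feel for the automated recording of events, here is a minimal sketch of an append-only event log, assuming a simple JSON Lines file; the file path, field names and event types are illustrative assumptions, not prescribed by the AI Act.

```python
# A minimal sketch of automated event recording, assuming an append-only
# JSON Lines audit log. Field names and event types are illustrative only.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_system_events.jsonl"


def log_event(event_type: str, details: dict) -> None:
    """Append one timestamped event record to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "override", "anomaly"
        "details": details,
    }
    with open(LOG_PATH, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")


# Example: record a single prediction so it can later feed the risk management
# and post-market monitoring systems.
log_event("inference", {"input_id": "case-001", "output": "approved", "confidence": 0.87})
```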
Quality Management
Providers are required to take these measures in order to anticipate and contain potential risks involved with their AI system. In essence, these measures force providers to adopt a methodical approach to risk mitigation. This includes documenting common sense reasoning & analyses, setting up protocols, stringent testing & evaluation procedures and scrutinizing data collected through monitoring practices.
- Data governance
Most parties already comply with many of these standard good practices. They include, among others: detect, prevent and mitigate data biases and data gaps and ensure datasets are sufficiently representative.
- Risk management system
A continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system. This helps companies to identify, analyse and evaluate foreseeable risks, using information generated through the post-market monitoring system, as well as the performance, robustness & cybersecurity metrics. It shall include appropriate measures to mitigate or nullify foreseeable risks.
- Post-market monitoring system
Actively and systematically collect, document and analyse relevant data on the performance of high-risk AI systems throughout their lifetime; this allows providers to evaluate the continuous compliance of AI systems. Include an analysis of the interaction with other AI systems (a minimal monitoring sketch follows this list).
- Performance, robustness & cybersecurity
Ensure appropriate & consistent performance in accuracy, robustness & cybersecurity. Mitigate feedback loops. Ensure technical redundancy. Protect against data- and model poisoning, adversarial examples or model evasion, confidentiality attacks or model flaws.
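As a small illustration of what post-market monitoring can boil down to in code, here is a sketch that compares live accuracy against the level declared in the technical documentation; the threshold, figures and names are illustrative assumptions rather than values from the AI Act.

```python
# A minimal sketch of one post-market monitoring check: compare live accuracy
# against the level declared in the technical documentation and raise an alert
# when it degrades. Threshold, figures and names are illustrative.
from statistics import mean

DECLARED_ACCURACY = 0.90       # accuracy reported in the technical documentation
TOLERATED_DEGRADATION = 0.05   # illustrative alerting threshold


def monitor_accuracy(outcomes: list[bool]) -> None:
    """Check live accuracy over recent labelled outcomes against the declared level."""
    live_accuracy = mean(outcomes)
    if live_accuracy < DECLARED_ACCURACY - TOLERATED_DEGRADATION:
        # In a real system this would feed the risk management process and,
        # where necessary, a report to the market surveillance authority.
        print(f"ALERT: live accuracy {live_accuracy:.2f} is below the declared level")
    else:
        print(f"OK: live accuracy {live_accuracy:.2f}")


# Example: 100 recent predictions, 82 of which turned out to be correct.
monitor_accuracy([True] * 82 + [False] * 18)
```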
Human Oversight
These measures ensure AI systems are designed and developed so that they can be effectively overseen by employees of the deployer - for instance, an ‘AI Compliance Officer’ - with the aim to prevent or minimize foreseeable risks to health, safety or fundamental rights. To comply with measures concerning human oversight, providers shall appropriately combine measures built into the AI system with measures to be implemented by the deployer, identified a priori by the provider.
- Enable deployers to properly understand system capacities
And monitor its operations: detect and address anomalies, dysfunctions and unexpected performance.
- Enable deployers to correctly interpret the AI system’s output
Use explainable AI tools to provide explanations annotating model outputs. This can be done through Xaiva, or feature importance techniques, which we wrote about in more detail here.
- Mitigate automation bias
Increase awareness and mitigate risks of automation bias in your AI system design choices.
- Enable disregard, override or reversal of AI system output
Any AI system’s output shall be fully adaptable and reversible by the AI compliance officer, before being actuated.
- Stop button
Deployers need an accessible method ‘that allows the system to come to a halt in a safe state’. A minimal sketch of such an oversight gate follows this list.
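To sketch how the override and stop-button measures could be wired into software, here is a minimal human-oversight gate around a generic model. The class, method names and dummy model are illustrative assumptions, not part of the AI Act or any specific product.

```python
# A minimal sketch of a human-oversight gate: the model proposes, an AI compliance
# officer confirms, overrides or halts. Class and method names are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Decision:
    model_output: str
    final_output: Optional[str] = None
    overridden: bool = False


class OversightGate:
    def __init__(self, model: Callable[[str], str]):
        self.model = model
        self.halted = False  # "stop button" state

    def stop(self) -> None:
        """Bring the system to a halt in a safe state."""
        self.halted = True

    def propose(self, case: str) -> Decision:
        """Run the model, but do not actuate anything yet."""
        if self.halted:
            raise RuntimeError("System halted by the AI compliance officer")
        return Decision(model_output=self.model(case))

    def approve(self, decision: Decision, override: Optional[str] = None) -> Decision:
        """The AI compliance officer confirms, overrides or reverses the output."""
        decision.final_output = override if override is not None else decision.model_output
        decision.overridden = override is not None
        return decision


# Example usage with a dummy model.
gate = OversightGate(model=lambda case: "reject")
decision = gate.propose("loan-application-42")
decision = gate.approve(decision, override="refer to human caseworker")  # officer overrides
gate.stop()  # and the officer can halt the system entirely
```

The key design choice illustrated here is that nothing is actuated until a human has confirmed, overridden or reversed the proposed output.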
To use AI models responsibly, human involvement is essential to check model decisions and verify that they behave as they should. The AI Act mandates deployers to adopt this human-in-the-loop approach. In this article we refer to these future governance roles as ‘AI Compliance Officers’. Providers of high-risk AI systems are responsible for facilitating human oversight. To learn what you, as a deployer, have to expect from your provider, read the ‘Human Oversight’ section under high-risk obligations for providers. These are the responsibilities set out for deployers.
- Implement human oversight measures
Identified, facilitated and instructed by the provider. Assign AI compliance officers with necessary competence, training, authority and support.
- Control over inputs
If able to exercise control over input data, make sure that data is relevant and sufficiently representative. Keep automatically generated logs for at least 6 months.
- Minimize risks to health, safety or fundamental rights
In accordance with its intended purpose or under conditions of reasonably foreseeable misuse.
- Monitor operation
Enable AI compliance officers to properly understand the relevant capacities and limitations in view of detecting and addressing anomalies, dysfunctions and unexpected performance.
- Inform parties
When suspecting risks to health, safety or fundamental rights of persons, inform the provider followed by the distributor and relevant market surveillance authority and suspend the use of the system.
- Inform stakeholders
That they are subject to the use of a high-risk AI system that makes decisions or assists in making decisions.
- Amend outputs
Decide not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system.
- Stop!
Press the ‘stop’ button, allowing the system to come to a halt in a safe state.
- Biometric identification
When using biometrics to confirm the claimant’s identity, at least 2 persons need to verify and confirm system output. This measure does not apply for law enforcement, migration, border control and asylum purposes.
Obligations for importers and distributors largely overlap. In essence, they are responsible for verifying conformity of the entire supply chain. They are able to do this due to transparency requirements set out by the AI Act. Importers and distributors are able to verify all the documentation provided to them by the provider, deployer or importer they are dealing with. Some of these measures have to be performed before putting a product on the market; others require continuous attention after the product is marketed. I’ll start with the measures that apply to both roles before naming the exceptions.
- Ex-ante
Instructions for use and other documentation are in proper order. The product bears a CE marking and there’s a declaration of conformity. Verify product name & trademark.
- Ex-post
When suspecting non-compliance, do not make the product available, or correct, withdraw or recall it. Immediately inform the provider or importer and competent authorities when the system is presenting a risk. Ensure storage or transport does not affect compliance.
Distributor
- Ex-ante
Ensure quality management system is in place and compliant.
Importer
- Ex-ante
Appoint an authorized representative. Ensure the provider completed the conformity assessment procedure. Be watchful for falsification. State registered trade name or trademark and contact address on the package or accompanying documentation.
- Ex-post
Ensure all necessary information to demonstrate conformity can be made available to competent authorities in a language which can easily be understood by them. Keep documentation for 10 years after the product has been placed on the market or put into service.
Our take
Human oversight is a crucial part of the operational safety of AI systems, and should be taken very seriously. That’s why we wrote an article on this subject specifically. There, we discuss how human oversight is best facilitated regarding ease of compliance, feeling secure, as well as making regulation work to your advantage. Learning crucial insights about your models will help you boost their performance.
The Xaiva human-oversight platform enables you to visually analyze any model and ensure AI models are working how they are intended to work, circumventing the aforementioned risks.
General purpose obligations
When classified as a general purpose AI system, your model has to comply with legislation introduced on the basis of its risk classification as well as the legislation discussed in this chapter. On top of that, being designated as 'systemic risk' will further expand the set of legislative measures your system is subjected to. These additional measures only concern the providers of general purpose AI systems.
General purpose AI
The generality of these models makes them harder to capture in rigorous frameworks. That’s why providers of these systems are watched more closely. All providers of general purpose AI systems are subjected to additional obligations. These involve:
- Model cards
Detailing all model-specific information listed in Annex XI. These include a comprehensive description of its architecture, training and evaluation procedures and results, key design choices made, the type and the provenance of data used, curation methodologies employed, computational resources used, energy consumption and internal and/or external adversarial testing methodologies.
Exception: Open source models are exempt from this obligation.
- Transparency of information to downstream providers
Enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations. Draw up, keep up to date and share the information listed in Annex XII with downstream providers.
Exception: Open source models are exempt from this obligation.
- Comply with copyright and related rights
Specifically related to data mining of publicly available content.
- Make public a detailed summary of training data
According to a template to be supplied by the AI Office.
- Codes of Practice
Will be drawn up by the AI Office. Providers may demonstrate compliance by adhering to these measures. Otherwise, providers shall demonstrate alternative adequate means of compliance for assessment by the Commission.
- Authorized representative
Providers established in third countries shall appoint by written mandate an authorised representative established in the Union. Serving as a point of contact, the representative shall ensure the provider is compliant with its obligations and cooperate with the AI Office and other competent authorities.
Exception: Open source models are exempt from this obligation.
Delegated acts
In light of evolving technological developments, the Commission is empowered to adopt delegated acts amending both Annex XI and Annex XII, in particular with regard to computational resource and energy consumption measurements and calculation methodologies.
Systemic risk
Providers of general purpose AI systems with systemic risk are subjected to additional stringent measures regarding testing, tracking and reporting.
- Standardized testing
Use standardised protocols and tools reflecting the state of the art, to identify and mitigate systemic risk. This includes adversarial testing.
- Assess and mitigate systemic risks
This concerns risks at Union level which may stem from development, placing on market or use of the system.
- Report incidents
Keep track of, document and report serious incidents and possible corrective measures.
- Cybersecurity
Ensure an adequate level of cybersecurity protection.
- Codes of Practice
Will be drawn up by the AI Office. Providers may demonstrate compliance by adhering to these measures. Otherwise, providers shall demonstrate alternative adequate means of compliance for assessment by the Commission.
Confidentiality
Any information or documentation shared with the Commission, market surveillance authorities and notified bodies, and any other natural or legal person, is protected by confidentiality obligations set out in Article 78.
Further study
We hope to leave you properly informed. If there are any lingering questions, please consider the following sources:
The official AI Act website offers some useful tools for understanding the AI Act. It comes with its own high level summary. It has a convenient way to explore the entire AI Act document. And, it has a compliance checker: a 10 minute questionnaire which will help you define your role and AI system and quickly identify the relevant parts of the Act.
Algorithm Audit is a leading European knowledge center on algorithm ethics and safety. It is a nonprofit organization performing independent audits of real algorithms or data-analysis tools. The public sharing of the problem statement and normative advice is what they call algoprudence.
The Deloitte article we referenced before employs a different lens to distill the AI Act. If you are curious about the institutional structures used to enable and enforce compliance with the Act, this will prove to be an invaluable read.
The NeN will play a role in adopting the AI Act into a set of norms and codes of practice for use in the Netherlands.