HANDBOOK ON THE EU AI ACT DEVELOPED BY
KAMRAN JAVED & WORAWITCHA PATTANAKIJ
TABLE OF CONTENTS
S. No. – Particulars – Page No.
1 – List of Abbreviations – 1-3
2 – Status of AI Systems/Models Already on the Market or in Service – 4
3 – AI Literacy – 5
Different Categories of AI Systems / Models
4 – Classification as Prohibited AI Systems – 6-7
5 – High Risk AI Systems – 8
6 – Compliance Requirements for High Risk AI Systems – 9-14
7 – General Purpose AI Models with Systemic Risks – 15
Actors
8 – Provider – 16
9 – Providers’ Obligations: High Risk AI Systems – 17
10 – Additional Obligations for Providers of High Risk AI Systems – 18-20
11 – Providers’ Obligations: GPAI Models – 21
12 – Providers’ Obligations: GPAI with Systemic Risks – 22
13 – Deployer – 23
14 – Deployers’ Obligations: High Risk AI Systems – 23
15 – Additional Obligations for Deployers of High Risk AI Systems – 24
16 – Transparency Obligations for Providers and Deployers of Limited Risk AI Systems – 25-26
17 – Authorised Representatives – 27
18 – Authorised Representatives’ Obligations: High Risk AI Systems – 27
19 – Authorised Representatives’ Obligations: GPAI – 28
20 – Importer – 29
21 – Importers’ Obligations: High Risk AI Systems – 29
22 – Distributor – 30
23 – Distributors’ Obligations: High Risk AI Systems – 30
24 – 3rd Party Suppliers – 31
25 – 3rd Party Suppliers’ Obligations (Art. 25(4)): High Risk AI Systems – 31
26 – Other Important Definitions in the Act – 32
27 – Operator – 32
28 – Downstream Provider – 32
29 – Other Actors – 33
30 – Conformity Assessment Body (CAB) – 33
31 – Notified Body – 33
32 – Notifying Authority – 33
33 – Market Surveillance Authority – 34
34 – Procedures Related to Notified Bodies, Notifying Authority & Conformity Assessment Bodies – 35-46
Miscellaneous
35 – Serious Incidents: High Risk AI Systems – 47
36 – AI System Falling under Annex III but Classified as Non-high Risk – 48
37 – EU Database for High Risk AI Systems in Annex III – 49
38 – Standards, Conformity Assessment, Certificates, and Registration – 50-61
39 – Measures in Support of Innovation – 62-75
40 – Codes of Practice: GPAI – 76
Annex
41 – Annex III: High-Risk AI Systems According to Scenario 2 – 77
42 – Annex IV: Technical Documentation of High-Risk AI Systems – 78
43 – Annex V: EU Declaration of Conformity – 79
44 – Annex IX: Information to Be Submitted Regarding Testing of High-Risk AI Systems in Real-World Conditions – 80
45 – Annex XI: Technical Documentation of GPAI – 81-82
46 – Annex XII: Transparency Information by GPAI Providers for Downstream Providers That Integrate GPAI in AI Systems – 83
LIST OF ABBREVIATIONS
& – And
Acc. – According
A/c – Account
Addl. – Additional
Admin. – Administrative
AI – Artificial Intelligence
Alt. – Alternative
AR – Authorised Representative
Art. – Article
Auth. – Authority
Auto – Automatic/Automatically
B/w – Between
CAB – Conformity Assessment Body
Descript. – Description
Dev. – Development
Doc – Document
Docx – Documentation
DSM Directive – Directive (EU) 2019/790 on Copyright in the Digital Single Market
Ed. – Education
Esp. – Especially
EU – European Union
Euro – European
1
Exam – Examination
FLOPs – Floating-point Operations
FR – Fundamental Rights
FW – Firmware
GPAI – General Purpose AI Models
HSFR – Health, Safety or Fundamental Rights
HW – Hardware
ID – Identification
Imp. – Important
Incl. – Including
Info – Information
Infra – Infrastructure
K – Thousand
LLMs – Large Language Models
Max. – Maximum
Mgmt. – Management
Min. – Minimum
MS – Member States
Natl. – National
No. – Number
Op – Operation
Org – Organisational
QMS – Quality Management System
Rep. – Representative
Req. – Requirements
2
RMS – Risk Management System
Sec. – Section
Specs – Specifications
SW – Software
Sys – System
Tech – Technical
Thru – Through
TM – Trademark
u/ – under
UI – User Interface
Union-wide Unique ID – Union-wide Unique Single Identification Number
w/ – with
w/o – without
Yrs – Years
3
Sys as component of large-scale IT sys
established by legal acts in Annex 10 &
put on market before 02.08.2027 –
comply by 31.12.2030
High risk sys put on market before
02.08.2026 – need not comply unless
significant changes in design; BUT if
public auth. to use high risk sys –
operators must comply by 02.08.2030
GPAI models put on market before
02.08.2025 – providers to comply by 02.08.2027
STATUS OF AI SYSTEMS/MODELS ALREADY ON
THE MARKET OR IN SERVICE (Art. 111)
4
To be complied with by providers and deployers of any kind of AI system
Staff & persons who operate &
use sys – provider & deployer
ensure adequate AI literacy
Take into a/c – tech knowledge,
experience, ed., training & context
Consider persons & groups on
whom to be used
AI LITERACY (Art. 4)
5
OR
1) Subliminal techniques
beyond person’s consciousness
2) Purposefully manipulative/
deceptive technique
3) Exploits vulnerabilities of age, disability or specific social/econ.
situation with object/effect of material distortion of their behaviour &
causing significant harm.
4) Evaluate/ classify persons/ groups on social
behaviour or known/ inferred/ predicted
personal/ personality characteristics & social
score leading to:
Objective/ Effect: Materially distorts behaviour by appreciably impairing
ability to make an informed decision, leading to a decision they would not
otherwise have taken & causing significant harm to self/ others.
Detrimental/unfavourable
treatment in social contexts
unrelated to context in which
data originally generated.
Detrimental/unfavourable
treatment = unjustified/
disproportionate to social
behaviour or its gravity.
5) Assesses/ predicts risk of persons committing crimes
solely by profiling/ assessing their personality traits.
Exception: Systems to support human assessment of a person’s
involvement in a crime based on objective & verifiable facts
directly linked to a crime.
6) Creates/ expands face recognition database by
untargeted scraping of facial images from
internet/CCTV.
7) Infers emotions in
workplace/educational institutes.
Exception: For medical/
safety reasons.
Classification as Prohibited AI Systems (Art. 5)
6
8) Biometric categorisation systems categorise persons
on biometric data to infer race, political opinions, trade
union membership, religious beliefs, sex life/orientation.
Exception: Label/filter lawfully acquired biometric datasets based
on data or its categorisation in law enforcement.
9) Real-time remote biometric ID systems in
publicly accessible spaces for law
enforcement.
Exceptions:
1. Target search for abduction, trafficking, sexual
exploitation victims, or missing persons.
2. Prevent specific, substantial or imminent life/ safety
threat, or genuine & present/ foreseeable terrorist attack
threat.
3. Identify suspected criminals for investigation,
prosecution or executing penalty for offences in Annex II
& with min. 4 years sentence.
Only to identify specific target, & take account of following:-
Nature of situation, i.e., seriousness, probability &
scale of harm if not deployed
Seriousness, probability & scale of impact on
rights & freedoms of all.
Necessary & Proportionate safeguards, esp. temporal, geographic
& personal limitations.
Must complete FR impact assessment & register it in EU database
Prior authorisation by judicial or independent admin. authority.
Excepon
Excepon
7
Classification (Art. 6)
Scenario 1: Safety component/ product covered by Annex I legislation
& undergoes 3rd party conformity assessment.
Scenario 2: Falls under Annex III.
Exceptions (Scenario 2): No significant risk to health, safety or FR
(“HSFR”) –
- Narrow procedural task
- Improve result of previous human activity
- Detect decision-making patterns/deviations
- Preparatory task
Remarks: Profiling of natural persons: always high risk.
Provider relying on an exception must: document its assessment
before selling/ using; fulfil registration obligations; give to natl. auth. on
request.
HIGH RISK AI SYSTEMS (Chapter III)
8
Risk Mgmt. System
(“RMS”) : Art. 9
Continuous iterative process run throughout its life.
Regular Review & Update
Used for intended purpose: Identify & analyse known & foreseeable risks to HSFR.
Adopt appropriate & targeted risk mgmt.
measures to address them.
Estimate & evaluate risks when used for intended purpose & u/ conditions of
foreseeable misuse.
Evaluate other risks based on data from post-market monitoring systems.
Consider likely adverse impact on minors & other vulnerable groups when implementing
RMS.
Compliance Requirements for High Risk AI Systems (Chapter III, Sec. 2)
9
Data Governance
(Art. 10)
Training, validation
& testing data sets
Subject to appropriate data governance & mgmt.
practices
Relevant, representative, error free & complete
Appropriate statistical properties
Take a/c of characteristics/elements particular to geographical,
contextual, behavioural & functional setting in which to be used.
Design choices
Data collection & origin
Original purpose, if personal data
Data prep processing ops
Assumptions formulation
Assess needed data sets’ availability, quantity &
suitability
Examine possible biases affecting HSFR
Apt measures to detect, prevent & mitigate possible biases
Identify & ways to address data gaps preventing compliance
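As a rough illustration of the data-suitability and bias-examination duties above, the following sketch (assuming a hypothetical pandas DataFrame with columns named subgroup and label; the metric and layout are illustrative, not prescribed by the Act) summarises representation per subgroup:

import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    # Summarise sample count, share of the data set and positive-label rate per
    # subgroup - a crude first check for the Art. 10 duties to assess data-set
    # suitability and examine possible biases; real audits need domain-specific metrics.
    report = df.groupby(group_col).agg(
        n_samples=(label_col, "size"),
        positive_rate=(label_col, "mean"),
    )
    report["share"] = report["n_samples"] / len(df)
    return report

data = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "C"],  # hypothetical protected attribute
    "label": [1, 0, 1, 1, 0, 0],                 # hypothetical outcome
})
print(subgroup_report(data, "subgroup", "label"))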
10
Technical Documentation (Art. 11)
Draw up before placing on market
Keep up-to-date
Contain elements in Annex IV
Record Keeping (Art. 12)
Technically allow auto log recording
Logging capabilities
Identify situations
presenting risks to
HSFR or
substantial
modification
Facilitate post-
market
monitoring
Enable monitoring
operations of
systems affecting
HSFR
Logging capabilities for Remote Biometric ID Systems
Record each use period
Reference database against which input data checked
Input data
Identify persons involved in verification of results
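A minimal sketch of what such automatic logging could look like for a remote biometric ID system, using only the Python standard library; the field names mirror the items above but are otherwise hypothetical:

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("rbi_audit")

def log_rbi_use(reference_db: str, input_ref: str, verifiers: list) -> None:
    # One structured record per use, covering the Art. 12(3) items listed above.
    record = {
        "use_start": datetime.now(timezone.utc).isoformat(),  # start of the use period
        "reference_database": reference_db,                    # DB the input data was checked against
        "input_data_ref": input_ref,                           # pointer to the input data
        "verified_by": verifiers,                              # persons verifying the result
    }
    audit_logger.info(json.dumps(record))

log_rbi_use("watchlist-2024-q3", "frame-000412", ["officer.a", "officer.b"])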
11
Transparency Obligations
Towards Deployers (Art. 13)
Operation sufficiently transparent for
interpreting output
Have instructions for use in appropriate format
with concise, clear, comprehensive & relevant
info
Provider’s identity & contact details
Characteristics, capabilities & limitations
Predetermined changes to system & its performance
Human oversight measures, esp. tech measures to facilitate outputs
interpretation
Required computational & hardware resources, expected lifetime, & necessary
maintenance measures
Descript. of mechanisms allowing to properly collect, store &
interpret logs
Intended purpose
Level of accuracy & foreseeable circumstances that may
impact it
Risks to HSFR when used for intended purpose & u/
conditions of foreseeable misuse.
Tech capabilities to give relevant info to explain output
Performance regarding specific persons/groups on which
intended to be used
Specs for input data or other relevant info regarding
training, validation & testing data sets used
Info to enable deployers to interpret output & use
appropriately
12
&/OR
Human Oversight Measures (Art. 14)
Designed & developed so
can be effectively overseen
by natural persons
Aim: prevent/ minimise
risks to HSFR when used
for intended purpose & u/
conditions of foreseeable
misuse
Measures: commensurate
w/ risks, autonomy level &
context of use
Identified & built into it before placing on market
Identified before placing on market & apt for
deployer to implement
Properly understand capacities & limits, & able to monitor its
operation
Remain aware of automation bias, esp. for systems that give info/
recommendations for decision by persons.
Correctly interpret output
If needed, decide not to use or disregard/ reverse output
Intervene/ interrupt in a way that it safely comes to a halt.
Remote Biometric ID Systems:
Action/ Decision taken on ID by
system be separately verified &
confirmed by 2 persons w/
necessary competence, training &
authority.
13
Accuracy, Robustness & Cybersecurity (Art. 15)
Designed & developed so has apt
accuracy, robustness &
cybersecurity levels & maintains it
throughout
Tech & org measures so resilient to
errors or inconsistencies that occur
in system or its environment, esp.
from interaction w/ persons or other
systems
Systems that continue to learn
eliminate/reduce biased
feedback loops risks, & are
addressed w/ apt mitigation
measures
Resilient to
attempts to exploit
vulnerabilities to
alter use, output or
performance
Tech solutions for
cybersecurity – apt
for relevant
circumstances & risks
Declare accuracy levels
in instructions for use
Can achieve robustness thru
tech redundancy solutions like
backup/ fail-safe plans.
Include measures
to prevent, detect,
respond, resolve &
control attacks –
data poisoning,
model poisoning,
model evasion,
confidentiality
attacks or model
flaws.
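The “backup or fail-safe plan” idea above can be illustrated with a toy fallback wrapper; the two stand-in models are hypothetical, and real redundancy needs risk-appropriate engineering rather than a bare exception handler:

def predict_with_fallback(primary, backup, features):
    # Return the primary model's output; fall back to a conservative backup path
    # if the primary component fails (a toy illustration of technical redundancy).
    try:
        return primary(features)
    except Exception:
        # In practice the failure would also be logged for post-market monitoring.
        return backup(features)

primary_model = lambda x: 1 / 0            # stand-in for a failing component
backup_model = lambda x: "manual-review"   # conservative fail-safe output

print(predict_with_fallback(primary_model, backup_model, {"age": 42}))  # manual-review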
14
Classification – four scenarios:
Scenario 1: High impact capabilities evaluated
thru apt tech tools & methodologies
Scenario 2: Decision of Commission that it has such capabilities. Criteria to
determine (Annex 13):-
Model’s no. of parameters
Data set quality/size
Amount of computation for training measured in FLOPs
or combination of other variables like estimate training cost,
training time or energy used for training
Input & output modalities, like:
o LLMs
o text-to-image
o multi-modality
o state of art threshold to determine each modality’s
capabilities
o specific type of inputs & outputs
benchmarks & evaluations of capabilities, like:
o no. of tasks w/o addl. training
o adaptable to learn new, distinct tasks
o autonomy level & scalability
o tools has access to
if high impact on internal market due to reach – presumed if
supplied to at least 10k business users
no. of registered end users
Scenario 3: Trained with more than
10^25 FLOPs
Scenario 4: Decision of Commission that it
poses concrete identifiable
risk at Union level.
Be based on Annex 13
criteria.
General Purpose AI Models with Systemic Risks (Art. 51)
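The 10^25 FLOPs presumption in Scenario 3 refers to cumulative training compute. A minimal sketch of checking a model against it, using the common ~6·N·D heuristic (roughly six floating-point operations per parameter per training token); the heuristic and the example figures are assumptions, not part of the Act:

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Rough cumulative training-compute estimate via the ~6*N*D heuristic.
    return 6.0 * n_parameters * n_training_tokens

flops = estimated_training_flops(70e9, 15e12)  # hypothetical 70B-parameter model, 15T tokens
print(f"~{flops:.2e} FLOPs ->",
      "presumed high-impact capabilities" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "below the compute threshold")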
15
Provider
Develops an AI system or a General-Purpose AI Model, or has it developed,
AND places it on the market or puts it into service,
under its own name or trademark, whether for payment or free of charge
16
Providers’ Obligations (Art. 16):
High Risk AI Systems
Ensure compliance requirements fulfilled
Indicate name, trade name/TM & address on system/
package/ docx
Have Quality Mgmt. System (“QMS”)
Keep documentation
Keep auto generated logs, if under
their control
Put thru conformity
assessment procedure
Draw up EU Declaration of Conformity
Affix CE
marking
Fulfil registration obligations
If needed, Corrective actions & inform
Demonstrate when requested
Compliance w/ Euro Accessibility Act
For min. 6 months
Appoint by written mandate & empower AR to carry out obligations
17
Post-Market Monitoring (Art. 72)
Establish & document monitoring sys
proportionate to nature of tech & risks
Collect, doc & analyse data on sys performance
given by deployer or collected thru others
Allow provider to evaluate continuous compliance
Analysis of interaction w/ other sys, if any
Exception: Sensitive operational data of
enforcement authorities
Based on post-market monitoring plan
Plan – part of tech docx
Duty to Market Surveillance Authority (Art. 74)
If necessary – full access to docx
& data sets used to develop sys
On request, access to source
code if both conditions met:
Access imp. to assess
conformity
Testing/auditing process & verifications
based on data & docx given exhausted &
insufficient
Additional Obligations for Providers of High Risk AI Systems
18
include
Quality Mgmt. System (“QMS”): Art. 17
Systematically Documented – written
policies, procedures & instructions
Regulatory compliance strategy
Techniques, procedures & systematic
actions for
Exam, test & validation procedures to be
carried out & its frequency
Tech specs to be applied
Systems & procedures for data mgmt.
RMS
Setting-up, implementation & maintenance of post-market
monitoring system
Procedure to report serious incident
Handling of communication w/ relevant authorities
Systems & procedures for record keeping
Resource mgmt.
Design, design
control &
verification
Development,
quality control
& assurance
Accountability framework setting out
responsibilities of mgmt. & other staff
19
inform
Documentation (Art. 18)
Technical documentation
Documentation on QMS
Documentation on approved changes
Decisions & docx issued by notified bodies
EU declaration of conformity
Corrective Actions & Duty to Inform (Art. 20)
Immediately take corrective
actions to bring in conformity,
if consider not in conformity
When risk to HSFR
& becomes aware –
investigate causes
Market surveillance authorities
Notified bodies
Inform – nature of non-
compliance & corrective
action taken
Cooperation with Authorities (Art. 21)
Upon reasoned request by a
competent authority
Provide all info & docx necessary to prove conformity w/
compliance requirements – simple language
Give access to auto generated log when under their control
Keep for 10 years after placing
system on the market
Or withdraw,
disable, recall
Distributors, deployers,
authorized rep. (AR), importers
Ensure corrective action
for all sys on market
within deadline by
market auth.
20
Apply to all General Purpose AI Models
(“GPAI”) – incl. w/ systemic risks
Draw up & keep
updated tech docx –
incl. training & testing
process & evaluation
results – & at least info
per Annex 11
Provide to AI
Office & natl.
auth. on request
Draw up, keep updated &
give to providers of AI sys
who integrate GPAI in sys
– info & docx.
Enable AI sys providers to
comply w/ their duties &
have good understanding
of its abilities & limits
Have at least info
per Annex 12
Put in place policy to
comply w/ EU’s copyright
& related rights law, esp.
to identify & comply w/
rights reserved per DSM
Directive
Draw up & make public
detailed summary of
content used for training
Template by AI Office
Cooperate w/
Commission &
national
authorities
Exception: free &
open source models
No exception for
models w/
systemic risks
Can rely on codes of practice until
harmonised standard published
Compliance w/ harmonised standards
conformity presumption for obligations covered
by standards
If don’t follow codes of practice or
harmonised standards – show alt.
adequate means of compliance
3rd country providers –
appoint authorised
representatives (AR) in EU
& empower to perform all
tasks in mandate
Providers’ Obligations (Art. 53): GPAI Models
21
Max 2 weeks after becoming aware it has
high impact capabilities – notify
Commission
Include in notification
– info imp. to show
req. met
Commission becomes
aware but not notified –
may designate itself.
May include substantiated
arguments – req. met but due
to specific characteristics no
systemic risks
Final decision –
Commission’s
If designated – can
request & Commission
may reassess.
Include in request
objective, detailed &
new reasons arisen after
designation.
If designation
maintained – can
request again after 6
months
In addition to obligations for all GPAI
Perform model evaluation
in acc. w/ state of art tools
& protocols – incl.
adversarial test to identify &
mitigate systemic risks
Assess & mitigate
Union level
systemic risks, incl.
sources
Track, doc & report
to AI Office & natl.
auth. info about
serious incidents &
corrective measures
to address
Adequate
cybersecurity
protection for
model & physical
infra
Can rely on codes of practice until
harmonised standard published
Compliance w/ harmonised standards –
conformity presumption for obligations covered
by standards
If don’t follow codes of practice or
harmonised standards – show alt.
adequate means of compliance
Providers’ Obligations (Artt. 52 & 55):
GPAI with Systemic Risks
22
Deployer
Using an AI system under its authority
Except for personal non-professional
activity
Human oversight to persons w/ necessary
competence, training, authority & support
If controls input data – ensure relevant &
representative enough for intended purpose
Considers if used acc. to instructions – risk to HSFR – suspend
use & inform provider/ distributor & market authorities
If serious incident identified – inform provider,
importer/ distributor & market authority
Keep auto generated logs, if under their control
Deployers that are employers & use sys at work – inform workers & their rep.
Cooperate w/ action by authorities
For min. 6 months
If decision based on output & legal effects or risk to HSFR of a
person – meaningfully explain to her on request role of sys in decision
Deployers’ Obligations (Art. 26):
High Risk AI Systems
23
Fundamental Rights (“FR”) Impact Assessment
(Art. 27)
Deployers = public bodies/private entities providing public services & deploy
sys in Annex 3, except critical infra – do FR impact assessment that include:
Descript. of processes in which to
be used & intended purpose
Period of time & frequency of use
Categories of persons & groups will affect
& risks of harm likely to impact them
Human oversight measures implemented
Measures will take if risks materialise, esp. internal
governance & complaint mechanisms arrangements
Notify market authority of results after assessment
Also includes:
deployers of sys
that do credit
scoring/ risk
assessment &
pricing for life &
health insurance
Additional Obligations for Deployers of High Risk AI Systems
24
Transparency Obligations for Providers and Deployers of Limited Risk AI systems (Art 50)
Information to be clearly provided to individuals at the
first interaction and meeting accessibility requirements
Not affecting requirements of high-risk AI systems (Chapter III) or
other transparency obligations under Union or national law
AI system with
human
interaction
Designed and developed to inform users
they are interacting with AI, unless it is
obvious
Provider
Exception: used for crime detection, prevention, investigation,
or prosecution — if authorised by law, with safeguards for
third-party rights, unless the system is public-facing for crime
reporting (usual transparency obligation applies)
AI systems, including general-purpose
AI systems, generating
synthetic audio, image, video or
text content
Labelling in a machine-readable format
and detectable as artificially generated or
manipulated
Markings must be effective, interoperable,
robust, and reliable — as far as technically
feasible, considering content type, cost,
and technical standards.
Exception: AI assisting with standard
editing, doesn’t substantially alter
input/meaning, or is legally used for
criminal detection/investigation
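One way a provider might implement the machine-readable marking duty above for synthetic images is to embed a provenance tag at generation time. A minimal sketch using Pillow's PNG text chunks; the key names are purely illustrative and do not follow any particular marking standard:

from PIL import Image, PngImagePlugin

def save_with_ai_marking(img, path, generator):
    # Embed a simple machine-readable "AI-generated" tag in the PNG metadata.
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")   # detectable marker
    meta.add_text("generator", generator)   # provenance hint
    img.save(path, pnginfo=meta)

# A blank image stands in for generated content.
save_with_ai_marking(Image.new("RGB", (64, 64)), "synthetic.png", "example-model-v1")
print(Image.open("synthetic.png").text)  # {'ai_generated': 'true', 'generator': 'example-model-v1'}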
25
Disclosing that the content is
artificially generated /
manipulated
AI Office
Commission
Deployer
AI system generating or
manipulating image, audio, or
video content constituting a deep
fake
An emotion recognition system or
a biometric categorisation system
AI system generating or
manipulating text which is
published with the purpose of
informing the public on matters of
public interest
Informing the natural person exposed to
the system
Personal data being processed in compliance with
- GDPR (EU 2016/679)
- Data protection regulations for EU institutions (EU 2018/1725) AND
- Law Enforcement data protection directive (EU 2016/680)
Exception: when permitted by law to
detect, prevent, or investigate criminal
offenses, as long as appropriate safeguards
are in place and comply with Union law
Disclosing that the content is
artificially generated /
manipulated
Exception: when authorised by law to detect,
prevent, or investigate criminal offenses OR for
artistic, creative, satirical, fictional, or similar works,
only requires disclosing in a way that doesn’t affect the
work’s display or enjoyment
Exception: when authorised by law to detect,
prevent, or investigate criminal offenses OR when the
content reviewed or published under editorial
responsibility of a natural or legal person
Promoting Union-level codes of practice for detecting and labelling AI-generated content
Approving the codes of practice via implementing acts under Art 56(6), OR, if inadequate, set common
rules using the procedure in Art 98(2)
26
A natural or legal person
Having received and accepted a written mandate from
To perform and carry out on its behalf the
obligations and procedures established by this
Regulation
Authorised
Representative
(“AR”)
Located or established in the Union
Provider
Perform tasks in mandate
Provide mandate’s copy to market
surveillance authority on request
Terminate mandate if it considers
provider acting contrary to obligations
Inform market surveillance authority &
notified body & its reasons
Verify EU declaration of conformity & tech docx drawn up
Be addressed in addition to/ instead of provider
on compliance issues
Conformity assessment procedure done by provider
Keep for 10 yrs for authorities
Provide authority on request w/ all info & docx
necessary to prove compliance
Cooperate w/ action by authorities, esp. to mitigate risks
Comply w/ registration obligations
Provider’s contact details
EU declaration of conformity
Tech docx
If certificate issued by notified
body
Authorised Representatives’ Obligations
(Art. 22): High Risk AI Systems
27
Provide mandate’s copy to AI
Office on request
Mandate – empower AR to
do these tasks
Tech docx drawn
up & all duties
fulfilled by provider
Keep contact details & tech
docx copy for AI Office &
national authorities for 10
yrs after GPAI placed on
market
Give all info & docx to
AI Office on request to
show compliance
Cooperate w/ AI Office &
authorities in actions they take,
incl. when GPAI integrated in
AI sys
Be addressed in
addition/instead of provider
by AI Office/authorities on
compliance issues
Terminate mandate if reason to consider
provider acting contrary to obligations &
inform AI Office about it & its reasons
Exception: free &
open source models
w/o systemic risks
Authorised Representatives’ Obligations (Art. 54): GPAI
28
Placing on the market an AI system branded by a
person or company based in a third country
Importer
Located or established in the Union
Ensure it conforms by verifying:-
If reason to consider – doesn’t
conform or falsified docx
Indicate name, trade name/TM &
address on system & packaging/docx
When u/ its responsibility – storage/
transport conditions don’t affect compliance
Keep for 10 yrs certificate by notified body,
instructions for use & EU declaration of conformity
Give these & other docx to authorities on
request in easy lang. to show it conforms
Cooperate w/ action by
authorities, esp. to mitigate risks
Don’t place on
market till
conforms
If risk to HSFR – inform
provider, AR & market
surveillance authorities
Conformity assessment done
Tech docx drawn up
Bears CE marking
Has EU Declaration of Conformity
Has instructions for use
AR appointed
Importers’ Obligations (Art. 23): High
Risk AI Systems
29
Making an AI system available on the Union market
Distributor
In the supply chain, other than the Provider or the Importer
Verify before putting on market
Bears CE marking
Has EU declaration of conformity
Has instructions for use
Provider &/or importer – Name, trade
name/TM & address indicated
Has QMS
Considers sys doesn’t conform –
not put on market till it does
If risk to HSFR – inform provider/ importer
When u/ its responsibility – storage/ transport
conditions don’t affect compliance
Give all info & docx to authorities on request to show it conforms
Cooperate w/ action by authorities, esp. to mitigate risks
Already on market – take corrective actions/
withdraw/ recall or ensure provider/ importer does so
Already on market – inform provider/ importer &
authorities
Distributors’ Obligations (Art. 24): High Risk AI Systems
30
Who supplies tools/services/ components/ processes & used in sys
Specify imp. info, capabilities, tech access,
etc. based on generally acknowledged state
of art – so provider can fulfil duties
Exception
Tools, etc. provided u/ free &
open source licence
GPAI – no exception
AI Office to develop voluntary
model contract terms for suppliers
& providers
3rd Party Suppliers’ Obligations (Art. 25(4)): High Risk AI Systems
31
Downstream Provider
Provider of an AI system, incl. a general-purpose AI system, that integrates a
General-Purpose AI Model
Regardless of whether the model is provided
- by themselves and vertically integrated or
- by another entity based on contractual relations
Operator
A general term referring to a Provider, Product Manufacturer, Deployer,
Authorised Representative, Importer or Distributor
OTHER IMPORTANT DEFINITIONS IN THE ACT
32
Performing third-party conformity assessment activities such as
Conformity
Assessment
Body (CAB)
Testing
Certification
Inspection
AND
Submitting an application for Notification (Art 29)
Notified Body
A CAB officially recognised under this Regulation and other relevant
Union harmonisation legislation
Third-country CAB can act as notified bodies if an
EU agreement exists and they meet Art 31 or
equivalent compliance
OTHER ACTORS
The national authority
Setting up and carrying out the necessary procedures for the assessment
of
Designation
Notification
Monitoring
Designated or established by each MS (at least one)
Notifying
Authority
Of CAB
Requirements
Being established, organised and operated without conflict of interest with CABs
Not providing any activities that CAB performs nor any consultancy services on a commercial or competitive basis
Complying with confidentiality requirements (Art 78)
33
Carrying out the activities and taking the
measures
Pursuant to Regulation (EU) 2019/1020
(on market surveillance and compliance of products)
Market
Surveillance
Authority
The national authority
34
Designation and Responsibilities of Notified Bodies
This section outlines the process for organizations to become trusted evaluators of high-risk AI systems under the Regulation, involving three key actors:
Conformity Assessment Body (CAB) — submits an application to the Notifying Authority (see Application of a Conformity Assessment Body for Notification).
Notifying Authority — reviews the application, officially designates the CAB as a Notified Body, and ensures it meets ongoing requirements (see Notification
Procedure and Requirements Related to Notified Body).
Notified Body — once designated, carries out assessments but must report any changes to its status (see Changes to Notification) and can be challenged if
competence is questioned (see Challenge to the Competence of Notified Bodies). Coordination between bodies ensures consistent evaluations (see Coordination of
Notified Bodies).
35
Procedures Related to Notified Bodies, Notifying Authority & Conformity Assessment Bodies
shall submit an application for notification
to the notifying authority of the Member State in which they are established.
accompanied by
- a description of the conformity assessment activities,
- the conformity assessment module or modules and
- the types of AI systems for which the conformity assessment body claims to be competent
- an accreditation certificate, where one exists, issued by a national accreditation body attesting
that the conformity assessment body fulfils the requirements laid down in Article 31.
o Without an accreditation certificate, shall provide all the documentary evidence necessary
for the verification, recognition and regular monitoring of its compliance with the
requirements laid down in Article 31.
- Any valid document related to existing designations of the applicant notified body under any
other Union harmonisation legislation
Application of a Conformity Assessment Body for Notification (Art 29)
Conformity
Assessment
Body (CAB)
Notifying Authority (of MS that CAB established)
Notified
Body
Submitting an application for
notification to
A description of the conformity assessment activities
The conformity assessment module or modules
The related types of AI systems
(If any) an accreditation certificate, issued by a national accreditation body, confirming compliance with Art 31
If without accreditation:
Documentary evidence proving
compliance with Art 31
(If any) any relevant document for other Union harmonisation legislation
AND
ACCOMPANIED BY
Keeping any documents and certificates to support their designation
Updating any documents to monitor and verify continuous compliance with Art 31
36
Notification Procedure (Art 30)
Notifying Authority
CAB (only with compliance with Art 31)
The Commission and other MSs
Notifying about CAB that
satisfied the requirements in
Art 31, with relevant
documents to
May object and enter into consultations with (but the
Commission shall decide whether the authorisation is
justified)
Relevant MSs
The CAB
AND
CAB with objection from the
Commission and other MSs cannot
carry out the activities of Notified Body
37
Requirements Related to Notified Body (Art 31)
Being established under the national law and having legal personality
Notified
Body
CAB, their management, and assessment staff
Not being involved in the design, development,
marketing, or use of high-risk AI systems
Avoiding any activity that could compromise
their independence such as offering consultancy
services
Being independent from the Provider, any economically interested
parties, and competitors of the high-risk AI system
But may use assessed high-risk AI systems if
needed for their own operations or personal use
Ensuring independence, objectivity, and impartiality through clear structures and
procedures
Protecting the confidentiality (Art 78), except when disclosure
required by law or to Notifying Authorities
Having documented procedures ensuring their personnel,
committees, subsidiaries, subcontractors and any associated
body or personnel of external bodies maintain confidentiality
Having the organisational, quality management, resources and process requirements that are
necessary to fulfil their tasks and suitable cybersecurity requirements.
38
Taking appropriate liability insurance for their conformity assessment activities, unless liability assumed by the MS
(national law)
Performing all tasks with the highest professional integrity and competence, either independently or through others under
their responsibility
Having procedures that consider the Provider’s size, sector, structure, and the AI system’s
complexity
Notified
Body
Having permanent availability of sufficient administrative, technical, legal, and scientific personnel with experience and
knowledge in relevant AI systems, data, and data computing, as well as the requirements of high-risk AI systems
Participating in coordination of Notified Bodies (Art 38)
Engaging directly or through representation in European standardization organizations, ensuring staying up to date on
relevant standards
AND
When a CAB meets the criteria of relevant harmonised standards published in the Official Journal of the EU => it is
presumed to comply with Art 31 (if the standards cover those requirements)
39
When having subsidiaries or subcontracting (Art 33)
Notified
Body
Only possible with the Provider's agreement
Operational obligations (Art 34)
Keeping the Provider's documents available and
submitting them upon request to the
Notifying Authority
Verifying the conformity of high-risk AI systems (outlined in Art 43)
Avoiding unnecessary burdens for Providers, taking into account the Provider's
size, sector, structure, and the complexity of the AI system
Esp., with micro- and small enterprises
(defined in Recommendation
2003/361/EC)
Ensuring they also meet requirements in Art 31
Informing the Notifying Authority and keeping relevant
qualification and work documents for five years for the Notifying
Authority
Taking full responsibility for their tasks
Making a public list of subsidiaries available
40
Changes to Notification (Art 36)
Notifying Authority
The Commission and other MSs
Informing of any changes to the
notification of a Notified Body to
Extensions of the scope of the
notification, shall follow
The procedure of application of a
conformity assessment body for
notification (Art 29)
Two kinds of changes
Notification procedure (Art 30)
AND
Other changes
Notified body’s cessation of its
conformity assessment activities
(Voluntary)
Notifying authority suspecting the
Notified Body no longer meets
Article 31 (Investigation triggered)
Consider the
following
details
41
Concerned
Provider
Notifying Authority
AND
Notified body’s cessation of its
conformity assessment activities
(Voluntary)
Deciding to cease
Notified Body shall
inform (as soon as
possible)
Planning to cease
Notified body shall
inform (1 year before
cessation)
When cessation
Another Notified Body confirms in writing it will
assume responsibility and completes a full
assessment within 9 months
Certificates = valid 9
months after cessation
if
Notifying Authority shall withdraw the designation
Notifying Authority suspecting the
Notified Body no longer meets Art
31 (Investigation triggered)
Notifying Authority informs
the Notified Body and allows a
response
If non-compliance
confirmed
The designation is restricted,
suspended, or withdrawn
The Notifying Authority shall
notify the Commission and other
MSs
Consider Post-
Suspension/Restriction/Wit
hdrawal Actions below
42
Post-Suspension/Restriction/Withdrawal Actions
Notified Body informs the Provider
within 10 days
Notifying Authority
Securing and storing related files of the Notified Body and
sharing with other Notifying Authorities and Market
Surveillance Authority (on request)
Assessing the impact on the certificates issued by the Notified
Body
Submitting a report on its findings within three months of
notifying the designation changes to
The Commission and
the other MSs
Requiring the Notified Body to suspend or withdraw any
certificates unduly issued and inform
Providing relevant certificate details to National Competent
Authorities for action against potential risks to HSFR
Certificate validity (details below)
43
Certificate
validity
Designation suspended or
restricted = valid if
Not applying to unduly issued certificates as they are already invalid
Designation withdrawn
= valid for 9 months if
Notifying Authority
confirming
No risk to HSFR AND setting a
remedy timeline
No certificates will be issued, amended, or re-issued during
suspension. AND the Notified Body can monitor and stay
responsible for existing certificates
OR
If the Notified Body cannot monitor: →
Provider must appoint another Body and
confirm to National Authorities within 3
months.
Related National
Competent Authority
confirming no risk to
HSFR
AND
Another Notified Body
takes over and
completes assessment
within 12 months
The National Competent Authority
may extend three more months but
in total no more than 12 months
44
Challenge to the Competence of Notified Bodies (Art 37)
The Commission
When in doubt, investigating the competence of a Notified Body
(against the requirements in Art 31)
(On request) the Notifying Authority giving relevant information about
notification and competence to the Commission
Maintaining confidentiality
(Art 78)
When non-compliance
found, informing
The MS and requesting it to take the necessary
corrective measures, including the suspension
or withdrawal of the notification if necessary
If MS fails to act, the Commission may,
suspend, restrict or withdraw the designation
45
Coordination of Notified Bodies (Art 38)
The Commission
Ensuring appropriate coordination and
cooperation among Notified Bodies
Regarding conformity assessment procedures
and putting in place sectoral groups of
Notified Bodies (which Notified Bodies shall
participate)
Providing for the exchange of knowledge and
best practices between Notifying Authorities
46
Types (Art. 3(49))
Death/ serious harm to
health
Serious & irreversible disruption of
mgmt./op of critical infra
FR protection duties
infringed
Serious harm to
property/environment
Reporting Requirements (Art. 73)
Providers – report to market
authorities of MS where
occurred
Report in max. 15 days after provider/
deployer become aware & likely link
b/w sys & incident established
Widespread infringement
or serious & irreversible
disruption of critical infra
mgmt./op – report after
max. 2 days
Death – report after
max. 10 days of link
b/w incident & sys
established
If timely reporting
imp. – can first submit
incomplete report
After report, provider –
investigate incident & sys,
incl. risk assessment &
corrective action
Cooperate w/ authorities &
notified body during
investigation
No investigation altering sys so
as to affect subsequent
evaluation of causes before
informing authorities
Serious Incidents: High Risk AI Systems
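A minimal sketch of the staggered Art. 73 deadlines above; the category keys are shorthand for the incident types on this page, and the day counts follow the text above:

from datetime import date, timedelta

# Maximum reporting periods after awareness / an established causal link (Art. 73).
REPORTING_DAYS = {
    "general": 15,
    "death": 10,
    "widespread_or_critical_infra": 2,
}

def reporting_deadline(awareness_date, incident_type):
    # Latest date to notify the market surveillance authority.
    return awareness_date + timedelta(days=REPORTING_DAYS[incident_type])

print(reporting_deadline(date(2026, 3, 1), "death"))  # 2026-03-11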
47
AI SYSTEMS FALLING UNDER ANNEX III BUT CLASSIFIED AS NON-HIGH RISK
Providers’ obligations if found as
high risk (Art. 80)
Take action to make
compliant w/ High Risk
Sys requirements
Otherwise – fines
Take corrective action for all
sys on market
If deliberately
misclassified – fines
48
Commission w/ MS to set up & maintain EU database
Info on sys in Annex 3
– incl. those classified
non-high risk
Provider/ AR – enter in it
data in Annex 8, Sec. A &
B
Public body deployers –
enter in it data in Annex 8,
Sec. C per Art. 49(3)-(4)
Have personal data
only necessary to
collect & process info
Include name & contact of
provider/deployer’s rep. & are
responsible to register sys
EU DATABASE FOR HIGH RISK AI SYSTEMS IN ANNEX III (Art. 71)
49
Standards, Conformity Assessment, Certificates, and Registration
The Union institutions make available harmonised standards (see Harmonised Standards and Standardisation Deliverables) or, where necessary, common
specifications (see Common Specifications), which can be used to presume compliance with this Regulation. Some specific requirements can also be presumed (see
Presumption of Conformity with Certain Requirements)
Based on this, high-risk AI systems undergo a conformity assessment to verify that they meet the essential requirements, which, where applicable, involves a Notified Body
(see Conformity Assessment). Once compliance is confirmed and if the assessment involved a Notified Body, that body issues a certificate as proof that the
system has passed the assessment (see Certificate). Notified bodies are also required to fulfil information obligations, meaning they must report their assessment
activities, issued certificates, and any significant findings to the relevant authorities to ensure oversight and transparency (see Information Obligation of Notified
Body). In certain cases, the conformity process may allow derogations (see Derogation from Conformity Assessment Procedure).
After certification, the related Operator must issue an EU declaration of conformity (see EU Declaration of Conformity), apply the CE marking (see CE Marking),
and register the AI system in the EU database before it is placed on the market (see Registration).
50
OR
Harmonised Standards and Standardisation Deliverables (Art 40)
Commission
The Board
Relevant
Stakeholders
including
Advisory Forum
Obligations for Providers of
GPAI, and obligations for
Providers of GPAI with
systemic risk
= Harmonised standards
Issuing standardization
requests to
European
standardisation
organisations
Requirements for high-risk
AI systems
Asking for deliverables
(such as reporting and
documentation processes to
improve AI systems’
resource performance) from
AND
COVERING
AND
With clarity and consistency with
Union harmonisation legislation
High-risk AI systems
General-purpose AI models
Presumed to comply with this
Regulation (to the extent that the
standards cover)
AND
51
Common Specifications (Art 41)
Commission
Advisory Forum
AND
Establishing the common specs when the harmonised standards
request is not accepted, or the standards are late, do not address
fundamental rights concerns, or do not meet the request, and no reference
was published in the Official Journal or expected soon
But when the standards are done and
complete, the common specs will be
repealed
= Common Specifications
High-risk AI systems
General-purpose AI models
Presumed to comply with this
Regulation (to the extent that the
standards cover)
If Provider does not
comply, shall justify
If a MS considers a common spec does not entirely meet the requirements, it shall
inform the Commission to assess
Presumption of Conformity with Certain Requirements (Art 42)
High-risk AI systems
Being trained and tested on data reflecting the specific
geographical, behavioural, contextual or functional setting within
which they are intended to be used
= Presumed to comply with data and data
governance requirements (Art 10(4))
Being certified or under a cyber security scheme (Regulation
(EU) 2019/881, the Cybersecurity Act) with the references published
in the Official Journal of the European Union
= Presumed to comply with the cyber security
requirements (Art 15) (in so far as it covers
requirements)
52
Conformity Assessment (Art 43)
High risk AI Systems listed in point 1 of Annex III (AI
systems related to biometrics)
High risk AI System listed in points 2 - 8 of Annex III
High risk AI System in the List of Union harmonisation
legislation (Section A, Annex I)
Each has its own
conformity procedure
A new conformity
assessment is needed if
substantial modification
(whether it is intended to be
further distributed or
continues to be used by the
current Deployer)
But changes from pre-
determined learning,
included in the technical
documentation (point 2(f) of
Annex IV) =/= substantial
modifications
53
If systems used by law enforcement,
immigration, asylum, or EU bodies — the
Market Surveillance Authority acts as the
Notified Body
High risk AI Systems listed in point 1 of Annex III (AI
systems related to biometrics)
Provider
If having applied harmonised standards (Art 40) OR common
specifications (Art 41), can choose to follow either
Conformity assessment procedure based
on internal control (Annex VI)
Conformity based on an assessment of the
quality management system and an
assessment of the technical documentation
(Annex VII)
OR
Involving a Notified Body
No harmonized standards or common specs exist, or the provider hasn’t fully applied them, or
standards were published with restrictions needing compliance
CHOOSE WHERE
High risk AI System listed in points 2 - 8 of Annex III
Provider
Conformity assessment procedure based
on internal control (Annex VI)
No Notified Body
54
Provider
Conformity procedure set by those laws
Specific Annex VII points in this Regulation (4.3
– 4.5, and the fifth paragraph of 4.6)
Requirements for high-risk AI systems
Involving Notified Body
High risk AI System in the list of Union harmonisation
legislation (Section A, Annex I)
Notified bodies under other Union laws can check AI system compliance with this
Regulation, if they are qualified under Art 31
Manufacturers can skip third-party checks only if all harmonised standards or common
specs are fully applied
55
Certificate (Art 44)
Issued by a Notified Body, showing conformity with Annex VII, must be written in a language
that can be easily understood by the relevant Authorities in the MS where the Notified Body is
established
Valid for the period they indicate, not exceeding
5 years for AI systems in Annex I
4 years for AI systems in Annex III
Provider
Can request to extend with a maximum of
5 years for AI systems in Annex I
4 years for AI systems in Annex III
Based on a re-assessment according to applicable
conformity assessment procedure
Any supplement to a certificate shall remain valid as long as
that certificate remains valid
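A minimal sketch of the Art. 44 maximum validity periods above; the annex labels are shorthand for the two cases, and leap-day edge cases are ignored:

from datetime import date

MAX_VALIDITY_YEARS = {"annex_i": 5, "annex_iii": 4}  # Art. 44(2) maxima

def certificate_expiry(issue_date, annex):
    # Latest possible expiry of a notified-body certificate (an extension after a
    # re-assessment simply reapplies the same maximum).
    years = MAX_VALIDITY_YEARS[annex]
    return issue_date.replace(year=issue_date.year + years)

print(certificate_expiry(date(2026, 8, 2), "annex_iii"))  # 2030-08-02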
56
Information Obligation of
Notified Bodies (Art 45)
Notified Body
Notifying Authority
SHALL INFORM
- Any Union technical documentation assessment certificates, supplements, and quality
management system approvals issued, refused, restricted, suspended, or withdrawn
under Annex VII
- Any changes affecting their notification scope or conditions;
- Any information requests from Market Surveillance Authorities;
- (on request), details of conformity assessments, including cross-border and
subcontracted activities.
Other Notified Body
SHALL INFORM
- Any refused, suspended, withdrawn, or, upon request, issued quality management
system approvals
- Technical documentation certificates, including supplements
Maintaining confidentiality ( Article 78)
57
Derogation from Conformity Assessment Procedure (Art 46)
Market
Surveillance
Authority
For exceptional reasons (public security, health, environment, or key industrial and
infrastructural assets), authorising the placing on the market or use of specific high-risk AI
systems (without completing conformity assessment in Art 43) for a limited period while
conformity assessment is completed
Only with high-risk AI system meeting the requirements for high-risk AI systems (Section 2,
chapter 3)
Law-enforcement
authorities or civil
protection authorities
In urgent cases (public security or specific, substantial and imminent threat to the life), law-
enforcement or civil protection may deploy high-risk AI system without prior authorisation
but must request it to Market Surveillance Authority during or immediately after use. If
refused, use must stop and results discarded
Commission
and other MSs
Of any of its authorisation both from the Market Surveillance Authority itself and from the
requests of law-enforcement authorities or civil protection authorities (exclude sensitive law
enforcement data)
SHALL INFORM
If no objection within 15 days, the authorisation =
justified
If objections or concerns => consultations with
relevant MSs follow. But the Commission will decide
and inform the MSs and operators. If unjustified, the
authorisation must be withdrawn.
58
EU Declaration of Conformity (Art 47)
EU Declaration of Conformity
Confirming compliance to high-risk AI system’s requirements
If the AI system subject to other Union laws requiring a declaration, a single EU
declaration must cover all applicable legislation, with relevant identifiers
Provider
Creating and keeping a written, machine-readable, or electronically signed EU
declaration of conformity for 10 years, available to National Competent Authorities. A
copy must be submitted upon request.
By issuing = assuming responsibility for compliance with high risk AI system
requirements and must keep it up-to-date
Being translated into a language understood by relevant National Competent Authority
Information from Annex V (which can be amended by the Commission as appropriate)
59
CE Marking (Art 48)
CE Marking
Following general principle in Art 30 of Regulation (EC) No 765/2008 on accreditation and market surveillance
For digital high-risk AI systems
Digital CE marking: easily accessed via the
interface OR easily accessible machine-readable
code OR other electronic means
Being affixed visibly, legibly, and indelibly
If not possible or not warranted => on the
packaging or accompanying documents
(Where applicable) CE marking + Notified Body
ID number, which must also appear in any
promotional material mentioning CE marking
compliance
Affixed by the Notified Body itself OR the
Provider
If other Union law applies, the CE marking
confirms compliance with both that law and this
Regulation
60
Registration (Art 49)
High-risk AI listed in Annex III, except critical
infrastructure AI system (point 2 of Annex III)
Non-high-risk (Article 6(3)) - self-assessed
Provider or authorised reps
Shall register themselves and their
system in the EU database (Art
71) before placing on market or
use
Deployers who are public
authorities, Union institutions,
bodies, offices or agencies or
persons acting on their behalf
Provider or authorised reps
Special cases
High-risk AI system in the areas of
law enforcement, migration, asylum,
border control (Annex III points 1, 6,
7)
Shall register in a secure non-public
section of the EU database (Art 71)
with the info of
Annex VIII (A: 1-5,7,10 /
B: 1-5,8-9 / C: 1-3)
Annex IX: 1,2,3,5.
Only the Commission and National
Authorities (Article 74(8)) can access
Critical infrastructure high risk AI
system (point 2 of Annex III)
Shall register at national level
AND
61
Measures in Support of Innovation
This section introduces AI regulatory sandboxes (see AI Regulatory Sandbox) — controlled environments where AI systems can be tested under
regulatory supervision, with clear rules on their setup and operation (see Detailed Arrangements for, and Functioning of, AI Regulatory Sandboxes).
These sandboxes also allow limited further processing of personal data for developing AI systems in the public interest (see Further Processing of
Personal Data for Developing Certain AI Systems in the Public Interest in the AI Regulatory Sandbox). Outside sandboxes, high-risk AI systems can
be tested in real-world conditions under strict oversight (see Testing of High-Risk AI Systems in Real World Conditions outside AI Regulatory
Sandboxes), and informed consent from participants is required (see Informed Consent to Participate in Testing in Real World Conditions outside AI
Regulatory Sandboxes).
The section also provides targeted support for providers and deployers, especially SMEs and start-ups, to ease compliance (see Measures for
Providers and Deployers, in Particular SMEs, including Start-ups), and allows specific derogations for certain operators in justified cases (see
Derogations for Specific Operators).
62
AI Regulatory Sandbox (Art 57)
Providing for a controlled environment for developing, training, testing and validating AI
systems for a limited time period before market launch or use
May include testing in real world with
supervised conditions
Pursuant to a specific AI sandbox plan agreed by
Provider or Prospective Provider AND Competent
Authority
Not limiting the supervisory or corrective powers of Competent Authorities at any level (including regional or local). Significant
risks to HSFR must be adequately mitigated. If mitigation is not possible, authorities may suspend testing or sandbox
participation and must inform the AI Office
Designed and implemented to facilitate cross-border cooperation between National
Competent Authorities
AI Office
Making publicly available a list of planned and existing sandboxes and keep it up to
date
63
Provider or Prospective Provider
Competent Authority
Remaining liable for any harm, but if they follow the plan and
act in good faith, no administrative fines under the
Regulation or other laws
SHALL PROVIDE
Guidance, supervision, and support to identify and mitigate risks, ensuring compliance with
this Regulation and other relevant laws
Regulatory expectation and how to comply with this Regulation
Written proof and an exit report on sandbox activities, which providers can use to show
compliance
Market
Surveillance
Authority
Notified
Body
AND
Taking these documents positively into
account to help speed up conformity
assessments
Written proof and an exit report on sandbox activities, which providers can use to show
compliance
The Commission and the Board can access if subject to confidentiality (Art 78) AND with
agreement of the Provider or Prospective Provider
Can be made public if both Provider or Prospective Provider AND Competent Authority
explicitly agree
Ensuring that data protection and other relevant authorities are involved in the AI
regulatory sandbox and supervise aspects within their remit
submitting annual and final reports on AI sandboxes to the AI Office and the Board, covering progress, best
practices, incidents, and recommendations, and publish them or their summaries online
The Commission
Considering the annual reports when relevant
Creating a single interface for AI sandboxes, enabling stakeholder interaction, enquiries, and non-binding
64
Detailed Arrangements for, and Functioning of, AI Regulatory Sandboxes (Art 58)
The Commission
Setting rules for AI sandbox setup, operation, and supervision, including eligibility, application, participation, and
terms for participants to avoid fragmentation
Helping providers use their learning outcomes to meet conformity rules and apply voluntary codes of conduct (Art 95)
Facilitating the involvement of other relevant actors within the AI ecosystem (public and private)
Participation is time-limited based on project complexity, with possible extensions.
Procedures being simple, clear, and SME-friendly, streamlined across the Union, and participation must be
mutually recognized with equal legal effect Union-wide
65
Provider or Prospective Provider
National Competent Authority
SUBMITTING APPLICATION TO
With transparent and fair eligibility and selection criteria for participation
May submit application in partnership with
Deployer
Other relevant third
parties
AND
Being informed of the decision within 3 months
Access to AI sandboxes is free for SMEs and start-ups, except for fair and reasonable exceptional costs
Esp., SMEs and start-ups, being guided to pre-deployment services, standardisation support, testing facilities, and
innovation hubs where relevant.
May authorize real-world testing in AI sandboxes, agreeing on terms and safeguards to HSFR, cooperating
with other authorities to ensure consistency across the Union
66
Further Processing of Personal Data for Developing Certain AI Systems in the Public Interest in the AI Regulatory Sandbox (Art 59)
Personal data collected for other reasons can be used in the AI sandbox to develop or test AI systems,
but only when ALL of the following conditions (1-10) are met:
1. AI systems aiming to safeguard substantial public interest in areas like public health, safety,
environment, energy, transport, infrastructure, or public services.
2. The data is necessary to comply with the requirements for high-risk AI systems, and those requirements cannot be
effectively fulfilled by using anonymised, synthetic or other non-personal data
3. Having effective monitoring mechanisms, including Data Protection Impact Assessments
(DPIA) as required by Art 35 GDPR and Art 39 of Regulation (EU) 2018/1725, to identify high
risks to data subjects' rights, and response mechanisms to mitigate or stop processing if
needed
4. Personal data in the sandbox being processed in a separate, secure environment controlled by
the Provider, with access limited to authorised persons
5. Providers may share original data only under EU data protection law; data created in the
sandbox must not be shared outside
6. The processing of personal data not affecting data subjects or limiting their rights under EU data
protection law
Not affecting laws
restricting personal
data processing to
specified purposes or
those allowing data
use for developing,
testing, or training AI,
in line with Union data
protection laws
67
Must be based on a specific EU or national law
If the processing of personal data is for the
prevention, investigation, detection or prosecution of
criminal offences or the execution of criminal
penalties
Subjecting to the same strict conditions (1-10 above)
7. Personal data being protected with proper safeguards and deleted after the sandbox ends or
when retention expires
8. Processing logs being kept for the sandbox’s duration, unless Union or national law states
otherwise
9. A full description of the AI system’s training, testing, validation, and results must be kept in the
technical documentation (Annex IV)
10. A short summary of the AI project being published on the website of the Competent
Authorities; not covering sensitive operational data in relation to the activities of law
enforcement, border control, immigration or asylum authorities
68
Testing of High-Risk AI Systems in Real World Conditions Outside AI Regulatory Sandboxes (Art 60)
Commission
Defining the real world test plan’s details through implementing acts
National
Market
Surveillance
Authority
To ensure a safe testing environment,
having power to
Require Provider or Prospective
Provider to provide info
Conduct unannounced remote or on-
site inspections
Check testing processes and related
high-risk AI systems
When a serious incident occurs during testing
Provider or
Prospective
Provider
Reporting to National Market Surveillance Authority
Taking immediate mitigation measures
If no mitigation: suspending or terminating testing
Establishing an AI system recall procedure if terminated
69
Provider or
Prospective
Provider
High-risk
AI systems
May test their systems in real-world conditions before market launch or use
Doing the test alone or with Deployers
The testing is without prejudice to
Union or national laws on testing
products (Annex I)
Art 5 Prohibitions
Ethical review requirements (Union or national
level)
Conducting the test only where ALL of the
following conditions (1-11) are met:
Being liable for any damage under Union and national
law
Notifying Market Surveillance Authority of suspension
or termination and final testing outcomes
1. Provider or
Prospective
Provider
Drawing up a real-world testing plan and
submitting it to
National Market Surveillance Authority
2. National Market
Surveillance Authority
Approving the plan
If no response within 30 days = approval assumed
Unless national law excludes
silent approval
70
3. Provider or
Prospective
Provider
If AI system is NOT in points 1, 6, 7 (law
enforcement, migration, asylum, border control)
or point 2 of Annex III
Register under Art 71(4) with
Union-wide unique ID
Info from Annex IX
AND
If AI system is in points 1, 6, or 7 (law
enforcement, migration, asylum, border control)
of Annex III
Register in secure non-public EU
database (Art 49(4)(d)) with
AND
Required info
If AI system is in point 2 of Annex III
Register under Art 49(5)
4. Provider or
Prospective
Provider
Having appointed a legal representative
who is established in the Union
Established in the Union
OR
5. Data collected and processed being transferred to third countries ONLY with
appropriate and applicable safeguards under Union law
6. Testing only as long as needed to
meet objectives (6 months max)
Extension possible: +6 months when the Provider or
Prospective Provider notifies the Market Surveillance
Authority in advance with an explanation
71
7. If organising with Deployer(s)
Deployer(s)
Being informed of all relevant
aspects of testing
Being given instructions for use of
AI system (Art 13)
Signing agreement with Provider or
Prospective Provider
Specifying roles and responsibilities
Ensuring compliance with testing
provisions and applicable laws
8. If subjects of the testing are in vulnerable groups = be appropriately protected
9. Subjects involved in testing
Giving informed consent (Art 61)
In case of law enforcement, if consent would prevent
testing: testing must not negatively affect subjects, and
personal data must be deleted after testing
Can withdraw any time with no negative effect and no
justification needed; withdrawal does not affect past activities
Can request immediate and permanent deletion of personal data
10. Provider or Prospective Provider,
as well as Deployer or Prospective
Deployer
Overseeing the testing with qualified persons
having proper training, capacity, and authority
11. The predictions, recommendations or decisions of the AI
system = can be effectively reversed and disregarded
72
Informed Consent to Participate in Testing in Real World Conditions Outside AI Regulatory Sandboxes (Art 61)
Subjects involved in testing
Before participation, giving freely given informed consent, having been duly informed with concise, clear, relevant, and
understandable information regarding:
Nature, objectives & possible inconvenience of testing
How to request reversal or disregarding of AI system outputs
Right to withdraw anytime, no justification, no detriment
Right to refuse participation
Rights & guarantees:
Conditions & expected duration of participation
Union-wide unique ID number of the testing (Art 60(4)(c))
Provider’s contact info
Consent to be dated,
documented, given to
subject (a copy)
73
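For illustration only: the consent items listed above lend themselves to a simple, dated record that is documented and handed to the subject as a copy. The Python sketch below assumes a hypothetical structure (names such as ConsentRecord and testing_unique_id are chosen for this example and are not terms from the Act); the Act prescribes the content of the consent, not any particular format.

from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical record of the informed-consent items listed above (Art 61).
# Field names are illustrative only; the Act does not prescribe a format.
@dataclass
class ConsentRecord:
    subject_id: str                     # internal reference to the subject
    testing_unique_id: str              # Union-wide unique ID of the testing (Art 60(4)(c))
    provider_contact: str               # provider's contact info
    nature_and_objectives: str          # nature, objectives & possible inconvenience of testing
    duration_and_conditions: str        # conditions & expected duration of participation
    informed_of_withdrawal_right: bool  # right to withdraw anytime, no justification, no detriment
    informed_of_reversal_right: bool    # how to request reversal/disregarding of AI outputs
    consent_given_on: str               # date the consent was given (ISO format)

    def copy_for_subject(self) -> str:
        # The dated, documented consent must be given to the subject as a copy.
        return json.dumps(asdict(self), indent=2)

record = ConsentRecord(
    subject_id="S-001",
    testing_unique_id="EU-TEST-0000",   # placeholder, not a real ID
    provider_contact="compliance@example.eu",
    nature_and_objectives="Real-world testing of a recruitment-screening AI system",
    duration_and_conditions="Up to 6 months, during normal use of the service",
    informed_of_withdrawal_right=True,
    informed_of_reversal_right=True,
    consent_given_on=date.today().isoformat(),
)
print(record.copy_for_subject())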
Measures for Providers and Deployers, in Particular SMEs, including Start-ups (Art 62)
MS
SMEs including start-ups,
Deployers, other innovators
SHALL PROVIDE
Priority access to AI regulatory sandboxes if fulfilling the eligibility conditions and selection criteria
Specific awareness-raising and training activities on the application of this Regulation
Open channels for advice & queries on Regulation and sandboxes
Facilitation of participation in the standardisation development process
Fees for conformity assessment (Art 43) being adjusted or reduced considering their specific
interests and needs (size, market size and other relevant indicators)
AI Office
Providing standardised templates for areas in this Regulation (on Board’s request)
Maintaining a single EU-wide user-friendly info platform for this Regulation
Organising communication campaigns on obligations
Promoting and evaluating best practices for AI in public procurement
74
Derogations for Specific Operators (Art 63)
Microenterprises (as
defined in
Recommendation
2003/361/EC)
Applying simplified quality management system (Art 17)
ONLY if no partner or linked enterprises exist
Commission
Issuing guidelines on simplified elements
Simplification must not reduce protection or legal
compliance for high-risk AI
Operators must still
meet all other
obligations, including
Arts. 9 to 15, 72, and 73
75
Codes of Practice (Art. 56): GPAI
AI Office & Board to develop, & include at
least all obligations for GPAI w/ & w/o
systemic risks
Means to ensure tech docx, &
info & docx for providers of AI
sys who integrate GPAI is kept
updated
Adequate detail in summary of content
used for training
Identification of type & nature
of Union level systemic risks,
incl. sources
Measures, procedures &
modalities to assess & manage
Union level systemic risks – be
proportionate to risks
76
Areas
1) Biometrics
Remote biometric ID sys
Exception: biometric verification sys solely
to authenticate/ confirm identity
Biometric categorisation per sensitive attributes
Emotion recognition
2) Critical Infra
Safety component in critical digital infra's mgmt. & op., road safety, water/gas/heating/electricity supply
3) Education
Determine access/admission
Evaluate learning outcomes
Assess level of ed. one will get
Monitor & detect prohibited acts in tests
4) Employment
Recruit/select/evaluate persons
Make decisions affecting job terms, promotion, termination, work allocation on
individual behaviour/traits, & monitor & evaluate performance & behaviour
5) Essential Services
Exception: Sys to detect financial fraud
6) Law Enforcement
7) Migration, Asylum &
Border Control Mgmt.
Exception: Verify travel docx.
8) Administration of Justice
& Democratic Processes
Exception: Persons not directly exposed to output
ANNEX III: HIGH RISK AI SYSTEMS ACCORDING TO SCENARIO 2
77
General
descript. of sys
Detailed descript.
of sys elements &
its dev. process
Detailed info of
monitoring,
functioning & control
Describe
performance
metric aptness
Detailed
descript.
of RMS
Changes by
provider through
its lifecycle
List – applied
harmonised
standards
EU declaration
of conformity
copy
Detailed descript. of sys
to check AI sys
performance after launch
If none applied – detailed
descript. of other ways
adopted to comply
Performance
abilities & limits
Foreseeable
unintended outcomes
& HSFR risk sources
Human oversight
measures
Input data specs
Version,
intended
purpose &
provider name
How it
interacts w/
HW/SW
not its part
SW/FW
versions &
update req.
Describe
all forms
in which
in market
Describe
HW on
which to
be run
If product
component
showing external
features – marking
& internal layout
UI – basic
descript.
Use instructions
Methods & steps
done to develop
Design specs
& choices
Descript. – sys
architecture
Data req.
Human oversight
measures needed
Descript.
predetermined
changes to sys
& performance
Validation &
testing
procedures used
Cybersecurity
measures
ANNEX IV: TECHNICAL DOCUMENTATION OF HIGH RISK AI SYSTEMS
78
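For illustration only: the Annex IV headings above can be tracked internally as a completeness checklist before a conformity assessment. The Python sketch below uses hypothetical field names (e.g. AnnexIVChecklist, risk_management_system) that merely mirror those headings; the Act prescribes the content of the technical documentation, not any particular format or tooling.

from dataclasses import dataclass

# Hypothetical checklist mirroring the Annex IV headings above.
# Field names are illustrative only; the Act does not prescribe them.
@dataclass
class AnnexIVChecklist:
    general_description: bool = False          # system, versions, intended purpose, HW, UI, use instructions
    development_process: bool = False          # methods & steps, design specs, architecture, data req.
    monitoring_and_control: bool = False       # abilities & limits, human oversight, input data specs
    performance_metrics: bool = False          # aptness of chosen performance metrics
    risk_management_system: bool = False       # detailed description of the RMS
    lifecycle_changes: bool = False            # changes by the provider through the lifecycle
    harmonised_standards_or_alternatives: bool = False
    eu_declaration_of_conformity_copy: bool = False
    post_market_monitoring_description: bool = False  # system to check performance after launch

    def missing_items(self) -> list:
        # Return the Annex IV headings that are not yet documented.
        return [name for name, done in self.__dict__.items() if not done]

# Example: flag what is still missing before a conformity assessment.
doc = AnnexIVChecklist(general_description=True, development_process=True)
print(doc.missing_items())

In practice such a checklist would sit alongside the actual documentation files; it only signals which Annex IV sections still need to be drafted.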
Info to include in it
Sys name & type
& addl. info to
trace & identify
Provider’s name &
address or its AR’s
Statement –
declaration
provider’s sole
responsibility
Statement –
sys
complies
w/ Act
If processes personal
data – statement
complies w/ EU
Data Protection
Framework
Harmonised
standards used
Name & ID of
notified body,
conformity
assessment details
& certificate ID
Declaration’s
date, place &
signatory
ANNEX V: EU DECLARATION OF CONFORMITY
79
Union-wide unique single
ID of the testing
ID details of provider &
deployer involved in testing
Sys descript., ID info
& intended purpose
Testing plan summary
– main characteristics
Testing suspension/
termination info
ANNEX IX: INFORMATION TO BE SUBMITTED REGARDING TESTING OF
HIGH RISK AI SYSTEMS IN REAL WORLD CONDITIONS
80
Info to include in it by providers
General Descript. of GPAI
Tasks intended to do &
type of sys in which can
be integrated
Acceptable use policies
Release date &
distribution methods
Architecture
Modality & input &
output format
Detailed descript. of these elements &
dev. process info
Licence
Tech means to integrate it in AI sys
Model design specs & training process
Data used to train, test & validate
Training details, like computational
resources
Energy consumption
ANNEX XI: TECHNICAL DOCUMENTATION OF GPAI
81
Additional info by providers of
GPAI w/ systemic risks
Evaluation strategies – detailed
descript., esp. criteria, metrics
& methodology of ID of limits
Detailed descript. – internal/
external adversarial testing
measures
Detailed descript. – sys
architecture
82
Model’s general descript.
Tasks intended to do &
type of sys in which can
be integrated
Acceptable use policies
Release date &
distribution methods
Architecture
Modality & input &
output format
Licence
How it interacts w/
HW/SW not its part
SW versions
Describe model elements & dev. process
Tech means to integrate
Modality, input & output format &
max. size
Data used to train, test & validate
ANNEX XII: TRANSPARENCY INFORMATION BY GPAI PROVIDERS FOR
DOWNSTREAM PROVIDERS THAT INTEGRATE GPAI IN AI SYSTEMS
83