HANDBOOK ON THE EU AI ACT DEVELOPED BY
KAMRAN JAVED & WORAWITCHA PATTANAKIJ
TABLE OF CONTENTS
S. No. | Particulars | Page No.
1 | List of Abbreviations | 1-3
2 | Status of AI Systems/Models Already on the Market or in Service | 4
3 | AI Literacy | 5
Different Categories of AI Systems / Models
4 | Classification as Prohibited AI Systems | 6-7
5 | High Risk AI Systems | 8
6 | Compliance Requirements for High Risk AI Systems | 9-14
7 | General Purpose AI Models with Systemic Risks | 15
Actors
8 | Provider | 16
9 | Providers' Obligations: High Risk AI Systems | 17
10 | Additional Obligations for Providers of High Risk AI Systems | 18-20
11 | Providers' Obligations: GPAI Models | 21
12 | Providers' Obligations: GPAI with Systemic Risks | 22
13 | Deployer | 23
14 | Deployers' Obligations: High Risk AI Systems | 23
15 | Additional Obligations for Deployers of High Risk AI Systems | 24
16 | Transparency Obligations for Providers and Deployers of Limited Risk AI Systems | 25-26
17 | Authorised Representatives | 27
18 | Authorised Representatives' Obligations: High Risk AI Systems | 27
19 | Authorised Representatives' Obligations: GPAI | 28
20 | Importer | 29
21 | Importers' Obligations: High Risk AI Systems | 29
22 | Distributor | 30
23 | Distributors' Obligations: High Risk AI Systems | 30
24 | 3rd Party Suppliers | 31
25 | 3rd Party Suppliers' Obligations (Art. 25(4)): High Risk AI Systems | 31
26 | Other Important Definitions in the Act | 32
27 | Operator | 32
28 | Downstream Provider | 32
29 | Other Actors | 33
30 | Conformity Assessment Body (CAB) | 33
31 | Notified Body | 33
32 | Notifying Authority | 33
33 | Market Surveillance Authority | 34
34 | Procedures Related to Notified Bodies, Notifying Authority & Conformity Assessment Bodies | 35-46
Miscellaneous
35 | Serious Incidents: High Risk AI Systems | 47
36 | AI System Falling under Annex III but Classified as Non-high Risk | 48
37 | EU Database for High Risk AI Systems in Annex III | 49
38 | Standards, Conformity Assessment, Certificates, and Registration | 50-61
39 | Measures in Support of Innovation | 62-75
40 | Codes of Practice: GPAI | 76
Annex
41 | Annex III: High-Risk AI Systems According to Scenario 2 | 77
42 | Annex IV: Technical Documentation of High-Risk AI Systems | 78
43 | Annex V: EU Declaration of Conformity | 79
44 | Annex IX: Information to Be Submitted Regarding Testing of High-Risk AI Systems in Real-World Conditions | 80
45 | Annex XI: Technical Documentation of GPAI | 81-82
46 | Annex XII: Transparency Information by GPAI Providers for Downstream Providers That Integrate GPAI in AI Systems | 83
LIST OF ABBREVIATIONS
& – And
Acc. – According
A/c – Account
Addl. – Additional
Admin. – Administrative
AI – Artificial Intelligence
Alt. – Alternative
AR – Authorised Representative
Art. – Article
Auth. – Authority
Auto – Automatic/Automatically
B/w – Between
CAB – Conformity Assessment Body
Descript. – Description
Dev. – Development
Doc – Document
Docx – Documentation
DSM Directive – Directive (EU) 2019/790 on Copyright in the Digital Single Market
Ed. – Education
Esp. – Especially
EU – European Union
Euro – European
Exam – Examination
FLOPs – Floating-point Operations
FR – Fundamental Rights
FW – Firmware
GPAI – General Purpose AI Models
HSFR – Health, Safety or Fundamental Rights
HW – Hardware
ID – Identification
Imp. – Important
Incl. – Including
Info – Information
Infra – Infrastructure
K – Thousand
LLMs – Large Language Models
Max. – Maximum
Mgmt. – Management
Min. – Minimum
MS – Member States
Natl. – National
No. – Number
Op – Operation
Org – Organisational
QMS – Quality Management System
Rep. – Representative
Req. – Requirements
RMS – Risk Management System
Sec. – Section
Specs – Specifications
SW – Software
Sys – System
Tech – Technical
Thru – Through
TM – Trademark
u/ – under
UI – User Interface
Union-wide Unique ID – Union-wide Unique Single Identification Number
w/ – with
w/o – without
Yrs – Years
STATUS OF AI SYSTEMS/MODELS ALREADY ON THE MARKET OR IN SERVICE (Art. 111)
- Sys that are components of large-scale IT sys established by legal acts in Annex X & put on market before 02.08.2027 – comply by 31.12.2030.
- High risk sys put on market before 02.08.2026 – need not comply unless significant changes in design, BUT if the high risk sys is to be used by public auth. – operators must comply by 02.08.2030.
- GPAI models put on market before 02.08.2025 – providers to comply by 02.08.2027.
AI LITERACY (Art. 4)
To be complied with by providers and deployers of any kind of AI system.
- Providers & deployers ensure adequate AI literacy of staff & other persons who operate & use the sys on their behalf.
- Take into a/c their tech knowledge, experience, ed., training & the context in which the sys is to be used.
- Consider the persons & groups on whom the sys is to be used.
Classification as Prohibited AI Systems (Art. 5)
1) Subliminal techniques beyond a person's consciousness, OR
2) Purposefully manipulative/deceptive techniques.
Objective/Effect (for 1 & 2): materially distorts behaviour by appreciably impairing the ability to make an informed decision, leading to a decision the person would not otherwise take & causing significant harm to self/others.
3) Exploits vulnerabilities of age, disability or a specific social/econ. situation, with the object/effect of materially distorting behaviour & causing significant harm.
4) Evaluates/classifies persons/groups on social behaviour or known/inferred/predicted personal/personality characteristics, & the social score leads to:
- detrimental/unfavourable treatment in social contexts unrelated to the context in which the data was originally generated; or
- detrimental/unfavourable treatment that is unjustified/disproportionate to the social behaviour or its gravity.
5) Assesses/predicts the risk of persons committing crimes solely by profiling/assessing their personality traits.
Exception: systems to support human assessment of a person's involvement in a crime based on objective & verifiable facts directly linked to a crime.
6) Creates/expands face recognition databases by untargeted scraping of facial images from the internet/CCTV.
7) Infers emotions in the workplace/educational institutes.
Exception: for medical/safety reasons.
8) Biometric categorisation systems that categorise persons on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life/orientation.
Exception: labelling/filtering of lawfully acquired biometric datasets based on biometric data, or categorisation of biometric data in law enforcement.
9) Real-time remote biometric ID systems in publicly accessible spaces for law enforcement.
Exceptions:
1. Targeted search for abduction, trafficking or sexual exploitation victims, or missing persons.
2. Prevent a specific, substantial or imminent threat to life/safety, or a genuine & present/foreseeable threat of terrorist attack.
3. Identify suspected criminals for investigation, prosecution or executing a penalty for offences in Annex II with a max. sentence of at least 4 yrs.
Conditions: only to identify the specific target, & take account of the following:
- Nature of the situation, i.e., seriousness, probability & scale of harm if not deployed.
- Seriousness, probability & scale of impact on the rights & freedoms of all persons concerned.
- Necessary & proportionate safeguards, esp. temporal, geographic & personal limitations.
- Must complete FR impact assessment & register the sys in the EU database.
- Prior authorisation by a judicial or independent admin. authority.
HIGH RISK AI SYSTEMS (Chapter III)
Classification (Art. 6)
Scenario 1: Safety component of / product covered by Annex I legislation & required to undergo 3rd party conformity assessment.
Scenario 2: Falls under Annex III.
Exceptions (Scenario 2) – no significant risk to health, safety or FR ("HSFR") where the sys only performs:
- a narrow procedural task
- improvement of the result of a previous human activity
- detection of decision-making patterns/deviations
- a preparatory task
Remarks: profiling of natural persons – always high risk.
Provider relying on an exception must:
- document its assessment before selling/using
- fulfil registration obligations
- give the docx to natl. auth. on request
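The classification logic above can be read as a small decision procedure. Below is a minimal, illustrative Python sketch of that logic; the class and field names (e.g. is_annex_iii, performs_profiling) are assumptions made for the example, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Scenario 1 inputs
    is_safety_component_of_annex_i_product: bool
    requires_third_party_conformity_assessment: bool
    # Scenario 2 inputs
    is_annex_iii: bool
    performs_profiling: bool              # profiling of natural persons -> always high risk
    only_narrow_procedural_task: bool
    only_improves_prior_human_activity: bool
    only_detects_decision_patterns: bool
    only_preparatory_task: bool

def is_high_risk(p: AISystemProfile) -> bool:
    """Rough rendering of the Art. 6 decision logic summarised above."""
    # Scenario 1: safety component / product under Annex I legislation
    # that must undergo 3rd party conformity assessment.
    if p.is_safety_component_of_annex_i_product and p.requires_third_party_conformity_assessment:
        return True
    # Scenario 2: Annex III use case.
    if p.is_annex_iii:
        if p.performs_profiling:
            return True  # profiling of natural persons is always high risk
        exception_applies = (
            p.only_narrow_procedural_task
            or p.only_improves_prior_human_activity
            or p.only_detects_decision_patterns
            or p.only_preparatory_task
        )
        # Even if an exception applies, the provider must still document the
        # assessment, register the sys and give the docx on request.
        return not exception_applies
    return False

example = AISystemProfile(
    is_safety_component_of_annex_i_product=False,
    requires_third_party_conformity_assessment=False,
    is_annex_iii=True,
    performs_profiling=False,
    only_narrow_procedural_task=True,
    only_improves_prior_human_activity=False,
    only_detects_decision_patterns=False,
    only_preparatory_task=False,
)
print(is_high_risk(example))  # False: Annex III, but only a narrow procedural task
```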
Compliance Requirements for High Risk AI Systems (Chapter III, Sec. 2)
Risk Mgmt. System ("RMS"): Art. 9
- Continuous iterative process run throughout the sys' life; regular review & update.
- Identify & analyse known & foreseeable risks to HSFR when used for the intended purpose.
- Estimate & evaluate risks when used for the intended purpose & u/ conditions of foreseeable misuse.
- Evaluate other risks based on data from post-market monitoring systems.
- Adopt appropriate & targeted risk mgmt. measures to address them.
- Consider likely adverse impact on minors & other vulnerable groups when implementing the RMS.
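As a rough illustration of the iterative nature of the RMS, the sketch below runs one pass of the Art. 9 cycle (identify, evaluate, adopt measures, fold in post-market findings); the scoring scheme, threshold and names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int      # illustrative 1-5 scale
    likelihood: int    # illustrative 1-5 scale
    measures: list[str] = field(default_factory=list)

def rms_iteration(known_risks: list[Risk], post_market_findings: list[Risk]) -> list[Risk]:
    """One pass of the iterative cycle: collect identified risks and post-market
    findings, estimate/evaluate them, and adopt targeted measures where needed."""
    register: list[Risk] = []
    for risk in known_risks + post_market_findings:   # identification + post-market data
        score = risk.severity * risk.likelihood       # estimation & evaluation
        if score >= 6:                                # tolerance threshold (illustrative)
            risk.measures.append("targeted mitigation measure")
        register.append(risk)
    return register

# Hypothetical usage; in practice this cycle is re-run throughout the lifecycle.
register = rms_iteration(
    known_risks=[Risk("misclassification harming vulnerable users", severity=4, likelihood=2)],
    post_market_findings=[Risk("performance drift reported by deployer", severity=3, likelihood=3)],
)
print([(r.description, r.measures) for r in register])
```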
Data Governance (Art. 10)
Training, validation & testing data sets:
- Subject to appropriate data governance & mgmt. practices.
- Relevant, representative, error free & complete.
- Have appropriate statistical properties.
- Take a/c of characteristics/elements particular to the geographical, contextual, behavioural & functional setting in which the sys is to be used.
Data governance & mgmt. practices cover:
- Design choices
- Data collection & origin; original purpose, if personal data
- Data preparation processing ops
- Formulation of assumptions
- Assessment of the availability, quantity & suitability of needed data sets
- Examination of possible biases affecting HSFR
- Apt measures to detect, prevent & mitigate possible biases
- Identification of data gaps preventing compliance & ways to address them
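As an illustration of the kind of bias examination Art. 10 calls for, the sketch below computes simple per-group representation and outcome-rate statistics over a training set; the column names, data and any threshold applied to the output are assumptions for the example, not requirements from the Act.

```python
from collections import Counter

def representation_report(records: list[dict], group_key: str, label_key: str) -> dict:
    """Very simple data-governance check: per-group share of the training data
    and per-group positive-label rate, to surface possible biases."""
    total = len(records)
    groups = Counter(r[group_key] for r in records)
    report = {}
    for group, count in groups.items():
        positives = sum(1 for r in records if r[group_key] == group and r[label_key] == 1)
        report[group] = {
            "share_of_dataset": count / total,
            "positive_rate": positives / count,
        }
    return report

# Hypothetical training records.
training_data = [
    {"region": "north", "label": 1},
    {"region": "north", "label": 0},
    {"region": "south", "label": 0},
]
print(representation_report(training_data, group_key="region", label_key="label"))
```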
Technical Documentation (Art. 11)
- Draw up before placing on the market.
- Keep up-to-date.
- Contain the elements in Annex IV.
Record Keeping (Art. 12)
- Technically allow auto log recording.
- Logging capabilities to: identify situations presenting risks to HSFR or substantial modification; facilitate post-market monitoring; enable monitoring of operations of systems affecting HSFR.
Logging capabilities for Remote Biometric ID Systems:
- Record each use period.
- Reference database against which input data was checked.
- Input data.
- Identify persons involved in verification of results.
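A minimal sketch of what an automatically recorded log entry for a remote biometric ID system might capture, following the four elements listed above; the field names and format are assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class BiometricIDLogEntry:
    use_period_start: datetime      # start & end of each period of use
    use_period_end: datetime
    reference_database: str         # DB against which input data was checked
    input_data_reference: str       # pointer to the input data (not the raw biometrics)
    verifying_persons: list[str]    # persons involved in verifying the results

entry = BiometricIDLogEntry(
    use_period_start=datetime(2025, 3, 1, 9, 0),
    use_period_end=datetime(2025, 3, 1, 9, 45),
    reference_database="watchlist-db-v7",          # hypothetical identifiers
    input_data_reference="frame-batch-2025-03-01-0900",
    verifying_persons=["officer-A", "officer-B"],
)
print(json.dumps(asdict(entry), default=str, indent=2))
```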
Transparency Obligations Towards Deployers (Art. 13)
- Operation sufficiently transparent for interpreting output.
- Have instructions for use in an appropriate format with concise, clear, comprehensive & relevant info, incl.:
  - Provider's identity & contact details
  - Characteristics, capabilities & limitations:
    - intended purpose
    - level of accuracy & foreseeable circumstances that may impact it
    - risks to HSFR when used for the intended purpose & u/ conditions of foreseeable misuse
    - tech capabilities to give relevant info to explain output
    - performance regarding specific persons/groups on which intended to be used
    - specs for input data or other relevant info regarding training, validation & testing data sets used
    - info to enable deployers to interpret output & use it appropriately
  - Predetermined changes to the sys & its performance
  - Human oversight measures, esp. tech measures to facilitate interpretation of outputs
  - Required computational & hardware resources, expected lifetime, & necessary maintenance measures
  - Descript. of mechanisms allowing deployers to properly collect, store & interpret logs
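One way to keep the Art. 13 instructions-for-use content in a structured, machine-readable form is sketched below; the keys simply mirror the items listed above and are not an official schema, and every value is fictional.

```python
# Illustrative, entirely hypothetical instructions-for-use metadata record.
instructions_for_use = {
    "provider": {"name": "ExampleAI GmbH", "contact": "compliance@example.eu"},
    "intended_purpose": "Triage of customer-support tickets",
    "accuracy": {
        "level": "0.91 macro-F1 on internal test set",
        "circumstances_affecting_accuracy": ["out-of-domain language", "very short tickets"],
    },
    "risks_hsfr": ["mis-prioritisation of urgent health-related requests"],
    "output_explanation_capabilities": "per-class confidence scores exposed via API",
    "performance_on_groups": {"non-native speakers": "lower recall observed"},
    "input_data_specs": "UTF-8 text, max 10,000 characters",
    "predetermined_changes": ["quarterly retraining on new ticket data"],
    "human_oversight_measures": ["review queue for low-confidence outputs"],
    "resources_and_lifetime": {"hardware": "1 vCPU / 2 GB RAM per instance", "expected_lifetime": "3 yrs"},
    "maintenance": "monthly drift check",
    "log_collection_mechanisms": "JSON logs exported daily to deployer-controlled storage",
}
print(sorted(instructions_for_use))  # quick check that the expected items are present
```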
Human Oversight Measures (Art. 14)
- Designed & developed so the sys can be effectively overseen by natural persons.
- Aim: prevent/minimise risks to HSFR when used for the intended purpose & u/ conditions of foreseeable misuse.
- Measures: commensurate w/ risks, autonomy level & context of use; either identified & built into the sys before placing on market, &/or identified before placing on market & apt for the deployer to implement.
- Enable the persons overseeing to:
  - properly understand capacities & limits, & monitor its operation
  - remain aware of automation bias, esp. for systems that give info/recommendations for decisions by persons
  - correctly interpret output
  - if needed, decide not to use it or disregard/reverse output
  - intervene/interrupt in a way that the sys safely comes to a halt
- Remote Biometric ID Systems: action/decision taken on an ID by the sys to be separately verified & confirmed by 2 persons w/ the necessary competence, training & authority.
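The two-person verification rule for remote biometric ID can be pictured as a simple gate before any action is taken on a match; the sketch below is illustrative only, with invented names.

```python
def confirmed_by_two_persons(confirmations: dict[str, bool]) -> bool:
    """Return True only if at least two distinct, competent reviewers have
    separately confirmed the identification before any action is taken."""
    confirming_reviewers = {person for person, ok in confirmations.items() if ok}
    return len(confirming_reviewers) >= 2

# Hypothetical usage: an action on the match proceeds only after two confirmations.
confirmations = {"officer-A": True, "officer-B": False}
if confirmed_by_two_persons(confirmations):
    print("Proceed with action on the identification")
else:
    print("Hold: separate confirmation by a second person still required")
```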
Accuracy, Robustness & Cybersecurity (Art. 15)
- Designed & developed so the sys has apt accuracy, robustness & cybersecurity levels & maintains them throughout its lifecycle.
- Declare accuracy levels in the instructions for use.
- Tech & org measures so the sys is resilient to errors or inconsistencies that occur in the sys or its environment, esp. from interaction w/ persons or other systems.
- Robustness can be achieved thru tech redundancy solutions like backup/fail-safe plans.
- Systems that continue to learn: eliminate/reduce risks of biased feedback loops & address them w/ apt mitigation measures.
- Resilient to attempts to exploit vulnerabilities to alter use, output or performance.
- Tech solutions for cybersecurity – apt for the relevant circumstances & risks; incl. measures to prevent, detect, respond to, resolve & control attacks such as data poisoning, model poisoning, model evasion, confidentiality attacks or model flaws.
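As a toy illustration of the "backup/fail-safe" idea mentioned above, the sketch below wraps a primary model call with a fallback path so the system degrades safely on errors; the function and model names are invented for the example.

```python
from typing import Callable, Optional

def predict_with_failsafe(primary: Callable[[str], str],
                          backup: Callable[[str], str],
                          x: str) -> Optional[str]:
    """Technical-redundancy sketch: try the primary model, fall back to a
    backup, and return None (i.e. defer to a human) if both fail."""
    for model in (primary, backup):
        try:
            return model(x)
        except Exception:
            continue  # a real system would also log the error here
    return None  # fail safe: no automated output, escalate to human review

def primary_model(text: str) -> str:
    raise RuntimeError("primary unavailable")   # simulate a failure

def backup_model(text: str) -> str:
    return "low-risk default decision"

print(predict_with_failsafe(primary_model, backup_model, "some input"))
```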
General Purpose AI Models with Systemic Risks (Art. 51)
Classification
Scenario 1: High impact capabilities evaluated thru apt tech tools & methodologies.
Scenario 2: Decision of the Commission that the model has such capabilities. Criteria to determine (Annex XIII):
- Model's no. of parameters
- Data set quality/size
- Amount of computation used for training, measured in FLOPs or a combination of other variables like estimated training cost, training time or energy used for training
- Input & output modalities, like:
  - LLMs
  - text-to-image
  - multi-modality
  - state-of-the-art thresholds to determine each modality's capabilities
  - specific type of inputs & outputs
- Benchmarks & evaluations of capabilities, like:
  - no. of tasks w/o addl. training
  - adaptability to learn new, distinct tasks
  - autonomy level & scalability
  - tools it has access to
- High impact on the internal market due to reach – presumed if supplied to at least 10K registered business users in the Union
- No. of registered end users
Scenario 3: Trained with more than 10^25 FLOPs.
Scenario 4: Decision of the Commission that the model poses a concrete identifiable risk at Union level, based on the Annex XIII criteria.
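The 10^25 FLOPs presumption is a simple numeric threshold on cumulative training compute. Below is a small illustrative check; the rule of thumb "training FLOPs ≈ 6 × parameters × training tokens" is a common engineering approximation for dense transformers, not part of the Act, and the model sizes are hypothetical.

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # presumption threshold for high-impact capabilities

def approx_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate: ~6 floating-point operations per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def presumed_high_impact(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical model A: 70 billion parameters, 15 trillion training tokens.
flops_a = approx_training_flops(70e9, 15e12)
print(f"Model A: {flops_a:.2e} FLOPs, presumed high impact: {presumed_high_impact(flops_a)}")

# Hypothetical model B: 400 billion parameters, 20 trillion training tokens.
flops_b = approx_training_flops(400e9, 20e12)
print(f"Model B: {flops_b:.2e} FLOPs, presumed high impact: {presumed_high_impact(flops_b)}")
```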
Provider
- Develops an AI System or a General-Purpose AI Model, or has one developed, AND
- Places it on the market OR puts it into service under its own name or trademark,
- Whether for payment or free of charge.
Providers' Obligations (Art. 16): High Risk AI Systems
- Ensure compliance requirements are fulfilled.
- Indicate name, trade name/TM & address on the sys/package/docx.
- Have a Quality Mgmt. System ("QMS").
- Keep documentation.
- Keep auto generated logs, if under their control – for min. 6 months.
- Put the sys thru the conformity assessment procedure.
- Draw up the EU Declaration of Conformity.
- Affix CE marking.
- Fulfil registration obligations.
- If needed, take corrective actions & inform.
- Demonstrate conformity when requested.
- Ensure compliance w/ the Euro Accessibility Act.
- Appoint by written mandate & empower an AR to carry out obligations.
Additional Obligations for Providers of High Risk AI Systems
Post-Market Monitoring (Art. 72)
- Establish & document a monitoring sys proportionate to the nature of the tech & the risks.
- Collect, doc & analyse data on the sys' performance given by deployers or collected thru other sources.
- Allow the provider to evaluate continuous compliance.
- Include analysis of interaction w/ other sys, if any.
- Exception: sensitive operational data of law enforcement authorities.
- Based on a post-market monitoring plan; the plan is part of the tech docx.
Duty to Market Surveillance Authority (Art. 74)
- If necessary – full access to the docx & data sets used to develop the sys.
- On request, access to source code if both conditions met:
  - access imp. to assess conformity
  - testing/auditing processes & verifications based on the data & docx given have been exhausted & proved insufficient
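A minimal sketch of collecting and analysing deployer-supplied performance data for post-market monitoring; the report fields, metric and alert threshold are invented for the example.

```python
from statistics import mean

class PostMarketMonitor:
    """Toy post-market monitoring store: collect performance reports from
    deployers and flag when observed accuracy drifts below a declared level."""

    def __init__(self, declared_accuracy: float):
        self.declared_accuracy = declared_accuracy
        self.reports: list[dict] = []

    def collect(self, report: dict) -> None:
        # e.g. {"deployer": "...", "period": "2025-Q1", "observed_accuracy": 0.88}
        self.reports.append(report)

    def continuous_compliance_check(self) -> bool:
        if not self.reports:
            return True
        observed = mean(r["observed_accuracy"] for r in self.reports)
        return observed >= self.declared_accuracy

monitor = PostMarketMonitor(declared_accuracy=0.90)
monitor.collect({"deployer": "hospital-x", "period": "2025-Q1", "observed_accuracy": 0.87})
print("Still within declared accuracy:", monitor.continuous_compliance_check())
```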
Quality Mgmt. System ("QMS"): Art. 17
Systematically documented – written policies, procedures & instructions – incl.:
- Regulatory compliance strategy
- Techniques, procedures & systematic actions for: design, design control & verification; development, quality control & assurance
- Exam, test & validation procedures to be carried out & their frequency
- Tech specs to be applied
- Systems & procedures for data mgmt.
- RMS
- Setting-up, implementation & maintenance of the post-market monitoring system
- Procedure to report serious incidents
- Handling of communication w/ relevant authorities
- Systems & procedures for record keeping
- Resource mgmt.
- Accountability framework setting out responsibilities of mgmt. & other staff
Documentation (Art. 18)
- Keep for 10 years after placing the sys on the market:
  - technical documentation
  - documentation on the QMS
  - documentation on approved changes
  - decisions & docx issued by notified bodies
  - EU declaration of conformity
Corrective Actions & Duty to Inform (Art. 20)
- If the provider considers the sys not in conformity – immediately take corrective actions to bring it into conformity, or withdraw, disable or recall it.
- Inform distributors, deployers, authorised rep. (AR) & importers – nature of the non-compliance & corrective action taken.
- When the sys presents a risk to HSFR & the provider becomes aware – investigate causes & inform market surveillance authorities & notified bodies.
- Ensure corrective action is taken for all sys on the market within the deadline set by the market surveillance auth.
Cooperation with Authorities (Art. 21)
- Upon reasoned request by a competent authority:
  - provide all info & docx necessary to prove conformity w/ the compliance requirements – in simple language
  - give access to auto generated logs when under their control
Providers' Obligations (Art. 53): GPAI Models
Apply to all General Purpose AI Models ("GPAI") – incl. those w/ systemic risks.
- Draw up & keep updated tech docx – incl. training & testing process & evaluation results – with at least the info per Annex XI; provide to the AI Office & natl. auth. on request.
- Draw up, keep updated & give info & docx to providers of AI sys who integrate the GPAI in their sys – to enable them to comply w/ their duties & have a good understanding of its abilities & limits; with at least the info per Annex XII.
- Put in place a policy to comply w/ EU copyright & related rights law, esp. to identify & comply w/ rights reservations per the DSM Directive.
- Draw up & make public a detailed summary of the content used for training, using the template by the AI Office.
- Cooperate w/ the Commission & national authorities.
- 3rd country providers – appoint an authorised representative (AR) in the EU & empower it to perform all tasks in the mandate.
Exception: free & open source models (no exception for models w/ systemic risks).
- Can rely on codes of practice until a harmonised standard is published.
- Compliance w/ harmonised standards – conformity presumption for obligations covered by the standards.
- If they don't follow codes of practice or harmonised standards – show alt. adequate means of compliance.
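As a rough illustration, the Art. 53 paperwork can be tracked as a structured package covering the four main items above; the keys below paraphrase that summary rather than the literal Annex XI/XII headings, and all values are hypothetical.

```python
# Illustrative GPAI provider documentation package (all entries fictional).
gpai_provider_package = {
    "technical_documentation": {              # Annex XI-style docs, kept up to date
        "training_and_testing_process": "internal report TR-2025-01",
        "evaluation_results": ["benchmark suite v3 results"],
        "available_to": ["AI Office", "national competent authorities (on request)"],
    },
    "downstream_provider_information": {      # Annex XII-style info for integrating AI sys providers
        "capabilities_and_limitations": "summary shared with downstream providers",
        "kept_up_to_date": True,
    },
    "copyright_policy": {
        "identifies_rights_reservations": True,   # per the DSM Directive
        "policy_document": "copyright-policy-v2.pdf",
    },
    "training_content_summary": {
        "public": True,
        "template": "AI Office template",
    },
}
missing = [k for k, v in gpai_provider_package.items() if not v]
print("Missing package items:", missing or "none")
```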
Providers' Obligations (Arts. 52 & 55): GPAI with Systemic Risks
Notification (Art. 52):
- Max. 2 weeks after the provider becomes aware that the model has high impact capabilities – notify the Commission; include in the notification the info imp. to show the requirement is met.
- If the Commission becomes aware but is not notified – it may designate the model itself.
- The provider may include substantiated arguments that, although the requirement is met, due to its specific characteristics the model poses no systemic risks; the final decision is the Commission's.
- If designated – the provider can request reassessment & the Commission may reassess; include in the request objective, detailed & new reasons arisen after designation. If the designation is maintained – can request again after 6 months.
Obligations (Art. 55) – in addition to the obligations for all GPAI:
- Perform model evaluation in acc. w/ state-of-the-art tools & protocols – incl. adversarial testing to identify & mitigate systemic risks.
- Assess & mitigate Union-level systemic risks, incl. their sources.
- Track, doc & report to the AI Office & natl. auth. info about serious incidents & the corrective measures to address them.
- Ensure adequate cybersecurity protection for the model & its physical infra.
- Can rely on codes of practice until a harmonised standard is published.
- Compliance w/ harmonised standards – conformity presumption for obligations covered by the standards.
- If they don't follow codes of practice or harmonised standards – show alt. adequate means of compliance.