AI Regulation

EU AI Act

Our AI Compliance Advisors are playing a pivotal role in shaping AI regulation in Europe, particularly the development of the EU AI Act, which is on track to become the world’s first comprehensive AI law. This groundbreaking legislation, which aims to regulate the use of AI within the EU, includes key amendments such as a ban on AI in biometric surveillance and mandatory disclosure for content generated by AI systems such as ChatGPT. With these rules anticipated to become law by the end of 2023, our advisors are helping to steer a major regulatory advancement.

Overview of Topic

The EU Artificial Intelligence Act (AI Act), proposed by the European Commission in April 2021, is the first of its kind: a horizontal regulation for AI across the EU, focusing on the use of AI systems and the risks associated with them. Following the model of the GDPR, these regulatory approaches are likely to have influence beyond the EU and may even become global standards. Key developments in 2023 include the European Parliament’s approval of amendments on June 14, expanding the scope of the AI Act, and the adoption of the Parliament’s negotiating position. The next phase involves discussions with EU member states to finalize the law’s implementation.

Some of the main areas include:

Conformity Assessments: 

Before deployment, high-risk AI systems must undergo assessments to ensure they meet the Act’s requirements. Some can be self-assessed by providers, while others require verification by third parties.

Risk management:
  • Data governance (ensuring high-quality datasets without biases)
  • Documentation (providing proof of compliance)
  • Transparency (ensuring users know they’re interacting with an AI system)
  • Human oversight (to minimize erroneous outputs)
  • Robustness, accuracy, and cybersecurity.
Bans on Certain AI Practices: 

The Act prohibits certain AI practices that might harm people’s rights. Examples include systems that manipulate human behavior, exploit vulnerabilities of specific individuals or groups, or utilize social scoring by governments.

EU Database for Stand-Alone High-Risk AI:

  • The Act proposes a database to register these AI systems to maintain transparency.
Governance and Implementation: 

A European Artificial Intelligence Board would be established to ensure consistent application across member states.

Fines for Non-compliance:

Companies violating the regulations might face hefty fines, similar to the penalties under the General Data Protection Regulation (GDPR).

Significance in Today's Landscape

The legislative procedure is expected to conclude by the end of 2023, with a grace period of two to possibly three years before enforcement begins. During this grace period, the European Commission aims to foster early implementation through industry collaboration. Some provisions of the AI Act, particularly those concerning high-risk systems, have been agreed upon, but many crucial elements, such as definitions, remain unsettled. The Act is aimed primarily at consumer protection rather than product safety. There is a notable divergence between the EU and US approaches to AI regulation, and the EU AI Act relies heavily on standards and implementing acts for the classification of AI systems, the implementation of requirements, and their assessment.

All You Need to Know: EU AI Act

The AI Act classifies AI according to its risk:
  • Prohibited AI Systems: Systems posing unacceptable risks are banned, such as social scoring and manipulative AI. 
  • High Risk AI Systems: The majority of the AI Act is devoted to regulating high-risk AI systems. 
  • Limited Risk AI Systems: These systems are subject to lighter transparency obligations. Developers and deployers must ensure that users know they are interacting with AI, as with chatbots and deepfakes. 
  • Minimal Risk AI Systems: These systems are unregulated, including most AI applications currently on the EU single market like AI-driven video games and spam filters. However, developments in generative AI are shifting this landscape. 
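
As a rough illustration of this four-tier scheme, the sketch below maps some hypothetical example systems to the tiers described above. The `RiskTier` enum and the example systems are illustrative assumptions, not terms from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative enum mirroring the AI Act's four-tier risk pyramid."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: most of the Act's obligations apply"
    LIMITED = "limited risk: transparency duties only"
    MINIMAL = "minimal risk: largely unregulated"

# Hypothetical example systems mapped to tiers, per the list above.
EXAMPLES = {
    "government social scoring system": RiskTier.PROHIBITED,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```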


Who needs to act? 

The AIA identifies a wide range of roles that organisations can take within its context, including providers, authorised representatives, importers, distributors and users. Not all of these roles carry the same obligations under the AIA. It is also important to note that the AIA has extra-territorial reach: it applies to any firm operating an AI system within the EU as well as to firms located outside the EU. To ensure that AI systems used within the EU market conform to the AIA, the main onus rests with (a) providers who place an AI system on the EU market or put an AI system into service for use in the EU market; (b) users located within the EU market; and (c) providers or users of AI systems located outside the EU whose system is used (or has an output that is used) on the EU market. A simplified sketch of this territorial scope follows below. 
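
A minimal sketch of that scoping logic, assuming simplified boolean inputs; the parameter names are illustrative and real scoping requires case-by-case legal analysis:

```python
def aia_in_scope(role: str, located_in_eu: bool,
                 placed_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Simplified sketch of the AIA's extra-territorial reach (not legal advice)."""
    if role == "provider":
        # (a) providers placing an AI system on the EU market or putting one
        # into service there, wherever located, and (c) third-country
        # providers whose system's output is used on the EU market.
        return placed_on_eu_market or output_used_in_eu
    if role == "user":
        # (b) users located within the EU market, and (c) third-country
        # users whose system's output is used on the EU market.
        return located_in_eu or output_used_in_eu
    return False

# A US provider selling into the EU is in scope despite sitting outside the EU.
print(aia_in_scope("provider", located_in_eu=False,
                   placed_on_eu_market=True, output_used_in_eu=False))  # True
```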


Providers: The primary responsibilities are borne by providers (developers) of high-risk AI systems: 
  • These include entities that aim to introduce or operate high-risk AI systems within the EU, irrespective of whether they are located in the EU or in a third country. 
  • Additionally, providers from third countries are accountable when the output from the high-risk AI systems is utilized within the EU. 
Examples of Providers (Developers) 
  • A company in the US develops an AI system for medical diagnosis that is considered high-risk due to its potential impact on health outcomes. If they intend to market this system in the EU, they must comply with stringent EU regulations, including conducting comprehensive risk assessments and ensuring the system’s reliability and safety. 
  • A tech startup in India creates an AI-driven recruitment tool that analyzes applicant data to predict job suitability. Since this tool could potentially be used by companies in the EU, the Indian startup, as a provider, would need to ensure the system meets EU standards for transparency, non-discrimination, and data protection, even though they are based outside the EU. 


Note: General purpose AI (GPAI): 
  • All GPAI model providers must provide technical documentation, instructions for use, comply with the Copyright Directive, and publish a summary about the content used for training.  
  • Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.  
  • All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, adversarial testing, track and report serious incidents and ensure cybersecurity protections.
Deployers: Deployers are natural or legal persons that deploy an AI system in a professional capacity; affected end-users are not considered deployers: 
  • Deployers of high-risk AI systems are subject to certain responsibilities, although these are less extensive than those assigned to providers (developers). 
  • These responsibilities apply to deployers operating within the EU and to third-country users if the output of the AI system is employed in the EU. 
Examples of Deployers 
  • A German hospital uses an AI system developed in Canada for patient monitoring and treatment recommendations. As a deployer, the hospital must ensure that the system is used in compliance with EU guidelines, which might include verifying that the system’s output is reliable and that it adheres to EU data privacy laws. 
  • A French university deploys an AI system for monitoring student engagements and performance in online courses. The university, as the deployer, must make students aware that they are interacting with an AI system, and ensure that the AI system’s use respects the students’ privacy and adheres to EU educational standards. 

These examples show how both providers and deployers have specific obligations under EU AI regulations, aiming to ensure that AI systems are used safely and ethically within the EU, regardless of where they are developed or operated from.


Prohibited AI systems (Chapter II, Art. 5) 

AI systems:  

  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm. 
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm. 
  • social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.  
  • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.  
  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage. 
  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons. 
  • biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.   
  • ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:  
    • targeted searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;  
    • preventing specific, substantial and imminent threat to life or physical safety, or foreseeable terrorist attack; or  
    • identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime). 
      • Using AI-enabled real-time RBI is only allowed when not using the tool would cause harm, particularly regarding the seriousness, probability and scale of such harm, and must account for affected persons’ rights and freedoms. 
      • Before deployment, police must complete a fundamental rights impact assessment and register the system in the EU database, though, in duly justified cases of urgency, deployment can commence without registration, provided that it is registered later without undue delay.  
      • Before deployment, they also must obtain authorisation from a judicial authority or independent administrative authority, though, in duly justified cases of urgency, deployment can commence without authorisation, provided that authorisation is requested within 24 hours. If authorisation is rejected, deployment must cease immediately, deleting all data, results, and outputs. 
High risk AI systems (Chapter III)
Classification rules for high-risk AI systems (Art. 6)  

High risk AI systems are those:  

  • used as a safety component or a product covered by EU laws in Annex I AND required to undergo a third-party conformity assessment under those Annex I laws; OR 
  • those under Annex III use cases (below), except if: 
    • the AI system performs a narrow procedural task; 
    • improves the result of a previously completed human activity; 
    • detects decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or 
    • performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III. 
  • The Commission can add or modify the above conditions through delegated acts if there is concrete evidence that an AI system falling under Annex III does not pose a significant risk to health, safety and fundamental rights. They can also delete any of the conditions if there is concrete evidence that this is needed to protect people.  
  • AI systems are always considered high-risk if they profile individuals, i.e., automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement. 
  • Providers that believe their AI system, although it falls under Annex III, is not high-risk must document such an assessment before placing it on the market or putting it into service. 
  • 18 months after entry into force, the Commission will provide guidance on determining whether an AI system is high risk, with a list of practical examples of high-risk and non-high-risk use cases. 
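
Read as a decision procedure, the Art. 6 classification rules above can be sketched roughly as follows. The boolean flags are assumptions standing in for determinations that in practice require case-by-case legal analysis:

```python
def is_high_risk(
    is_annex_i_safety_component: bool,
    needs_third_party_conformity_assessment: bool,
    in_annex_iii_use_case: bool,
    narrow_procedural_task: bool = False,
    improves_prior_human_activity: bool = False,
    detects_patterns_without_replacing_human_review: bool = False,
    preparatory_task_only: bool = False,
    profiles_individuals: bool = False,
) -> bool:
    """Simplified sketch of the Art. 6 classification rules (not legal advice)."""
    # Route 1: safety component or product covered by Annex I laws AND
    # subject to third-party conformity assessment under those laws.
    if is_annex_i_safety_component and needs_third_party_conformity_assessment:
        return True
    # Route 2: an Annex III use case, unless a carve-out applies.
    if in_annex_iii_use_case:
        # Profiling of individuals is always high-risk, carve-outs notwithstanding.
        if profiles_individuals:
            return True
        carve_outs = (
            narrow_procedural_task
            or improves_prior_human_activity
            or detects_patterns_without_replacing_human_review
            or preparatory_task_only
        )
        return not carve_outs
    return False

# An Annex III use case that only performs a narrow procedural task -> not high-risk.
print(is_high_risk(False, False, True, narrow_procedural_task=True))  # False
```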
Requirements for providers of high-risk AI systems (Art. 8-17) 

High risk AI providers must: 

  • Establish a risk management system throughout the high risk AI system’s lifecycle;  
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose. 
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance. 
  • Design their high risk AI system for record-keeping to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system’s lifecycle. 
  • Provide instructions for use to downstream deployers to enable the latter’s compliance. 
  • Design their high risk AI system to allow deployers to implement human oversight. 
  • Design their high risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity. 
  • Establish a quality management system to ensure compliance. 
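
As one possible shape of the record-keeping obligation above, here is a minimal event-logging sketch; the event types and field names are illustrative assumptions, not terms defined by the Act:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_audit")

def log_event(event_type: str, details: dict) -> None:
    """Append a timestamped, structured record of a system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "substantial_modification"
        "details": details,
    }
    logger.info(json.dumps(record))

log_event("inference", {"model_version": "1.4.2", "confidence": 0.91})
log_event("substantial_modification", {"model_version": "2.0.0"})
```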
Annex III use cases by category:

Non-banned biometrics:
  • Remote biometric identification systems, excluding biometric verification that confirms a person is who they claim to be.
  • Biometric categorisation systems inferring sensitive or protected attributes or characteristics.
  • Emotion recognition systems.

Critical infrastructure:
  • Safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Education and vocational training:
  • AI systems determining access, admission, or assignment to educational and vocational training institutions at all levels.
  • Evaluating learning outcomes, including those used to steer the student’s learning process.
  • Assessing the appropriate level of education for an individual.
  • Monitoring and detecting prohibited student behaviour during tests.

Employment, workers management, and access to self-employment:
  • AI systems used for recruitment or selection, particularly targeted job ads, analysing and filtering applications, and evaluating candidates.
  • Promotion and termination of contracts, allocating tasks based on personality traits or characteristics and behaviour, and monitoring and evaluating performance.

Access to and enjoyment of essential public and private services:
  • AI systems used by public authorities for assessing eligibility for benefits and services, including their allocation, reduction, revocation, or recovery.
  • Evaluating creditworthiness, except when detecting financial fraud.
  • Evaluating and classifying emergency calls, including dispatch prioritisation for police, firefighters, medical aid, and urgent patient triage services.
  • Risk assessments and pricing in health and life insurance.

Law enforcement:
  • AI systems used to assess an individual's risk of becoming a crime victim.
  • Polygraphs.
  • Evaluating evidence reliability during criminal investigations or prosecutions.
  • Assessing an individual’s risk of offending or re-offending not solely based on profiling, or assessing personality traits or past criminal behaviour.
  • Profiling during criminal detection, investigation, or prosecution.

Migration, asylum, and border control management:
  • Polygraphs.
  • Assessments of irregular migration or health risks.
  • Examination of applications for asylum, visas, and residence permits, and associated complaints related to eligibility.
  • Detecting, recognising, or identifying individuals, except verification of travel documents.

Administration of justice and democratic processes:
  • AI systems used in researching and interpreting facts and applying the law to concrete facts, or used in alternative dispute resolution.
  • Influencing the outcomes of elections and referenda or voting behaviour, excluding outputs that do not directly interact with people, such as tools used to organise, optimise, and structure political campaigns.
General purpose AI (GPAI) (Chapter V) 

GPAI model means an AI model, including one trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models used before release on the market for research, development and prototyping activities. 

GPAI system means an AI system based on a general purpose AI model that has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems.  

GPAI systems may be used as high risk AI systems or integrated into them. GPAI system providers should cooperate with such high risk AI system providers to enable the latter’s compliance.  

All providers of GPAI models must (Art. 53):  
  • Draw up technical documentation, including training and testing process and evaluation results. 
  • Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI systems, so that they understand the model’s capabilities and limitations and are able to comply.  
  • Establish a policy to respect the Copyright Directive.  
  • Publish a sufficiently detailed summary about the content used for training the GPAI model. 
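
One way to picture these four Art. 53 duties is as a structured record the provider maintains; the class and field names below are illustrative assumptions, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Illustrative container for the four Art. 53 obligations listed above."""
    technical_documentation: str   # training/testing process and evaluation results
    downstream_provider_info: str  # capabilities and limitations, for integrators
    copyright_policy: str          # policy to respect the Copyright Directive
    training_content_summary: str  # public summary of content used in training
```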

Free and open licence GPAI models – whose parameters, including weights, model architecture and model usage, are publicly available, allowing access, usage, modification and distribution of the model – only have to comply with the latter two obligations above, unless the free and open licence GPAI model presents a systemic risk. 

GPAI models are considered systemic when the cumulative amount of compute used for their training exceeds 10^25 floating point operations (FLOPs) (Art. 51). Providers must notify the Commission within two weeks if their model meets this criterion (Art. 52). The provider may present arguments that, despite meeting the criterion, its model does not present systemic risks. The Commission may decide on its own, or via a qualified alert from the scientific panel of independent experts, that a model has high-impact capabilities, rendering it systemic.  
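
To get a feel for the 10^25 FLOPs threshold, one can apply the common back-of-the-envelope estimate that transformer training compute is roughly 6 × parameters × training tokens. This heuristic is an assumption for illustration only, not a method defined in the Act:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51 cumulative-compute presumption

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer heuristic: compute ~ 6 * parameters * training tokens.
    An illustrative approximation, not a calculation prescribed by the Act."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 100B parameters trained on 10T tokens.
flops = estimated_training_flops(n_params=1e11, n_tokens=1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")          # 6.00e+24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```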

 

In addition to the four obligations above, providers of GPAI models with systemic risk must also (Art. 55): 

  • Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.  
  • Assess and mitigate possible systemic risks, including their sources.  
  • Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.  
  • Ensure an adequate level of cybersecurity protection. 

 

All GPAI model providers may demonstrate compliance with their obligations by voluntarily adhering to codes of practice until European harmonised standards are published; compliance with those standards will lead to a presumption of conformity (Art. 56). Providers that do not adhere to codes of practice must demonstrate alternative adequate means of compliance for Commission approval. 

 

Codes of practice (Art. 56) 
  • Will account for international approaches.  
  • Will cover, but are not necessarily limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges of addressing risks as they may emerge and materialise throughout the value chain.  
  • The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.  
Governance (Chapter VI) 
  • The AI Office will be established, sitting within the Commission, to monitor the effective implementation and compliance of GPAI model providers (Art. 64).  
  • Downstream providers can lodge a complaint with the AI Office regarding an upstream provider’s infringement (Art. 89). 
  • The AI Office may conduct evaluations of a GPAI model to (Art. 92):  
    • assess compliance where the information gathered under its powers to request information is insufficient; or 
    • investigate systemic risks, particularly following a qualified report from the scientific panel of independent experts (Art. 90).  
What are the penalties for non-compliance? 

The penalties outlined in the AIA for non-compliance closely resemble those in the GDPR, with the goal of making them effective, proportionate, and dissuasive. The sanctions are categorized into three levels: 

  • Severe non-compliance involving prohibited AI practices or violations of data and governance requirements for high-risk AI systems can result in fines of up to €30 million or 6% of global annual turnover (whichever is greater). 
  • General non-compliance with other AIA requirements may lead to penalties of up to €20 million or 4% of global annual turnover (whichever is higher). 
  • Providing false, incomplete, or misleading information to notified bodies or authorities can incur fines of up to €10 million or 2% of global annual turnover (whichever is greater). 

Enforcement is handled by national authorities, and individuals harmed by an AI system may have legal recourse, particularly in cases involving privacy violations or discrimination. 
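
Because each tier caps the fine at the greater of a fixed amount and a share of global annual turnover, maximum exposure is simple to compute. The sketch below uses the figures from the list above; the tier labels are illustrative:

```python
# (fixed cap in EUR, share of global annual turnover) per tier, per the list above.
FINE_TIERS = {
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_requirements": (20_000_000, 0.04),
    "misleading_information_to_authorities": (10_000_000, 0.02),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Maximum fine: the greater of the fixed cap and the turnover share."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Hypothetical firm with EUR 2bn global annual turnover.
print(f"EUR {max_fine('prohibited_practices_or_data_governance', 2e9):,.0f}")
# -> EUR 120,000,000 (6% of turnover exceeds the EUR 30m fixed cap)
```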


Who does it impact?

The EU AI Act applies to both regulated and unregulated firms, including:

  • Asset Managers
  • Banks
  • Supervisors
  • Commodity Houses
  • Fintechs

How Can We Help?

In response to the AI Act, the European Union’s proposed regulation for the safe and ethical development and use of artificial intelligence (AI), organizations can undertake a range of activities to ensure compliance and the ethical application of AI. Working with senior AI and Compliance advisors at the forefront of the AI supervisory dialogue, we can support the activities below.

These can be summarised in the following steps:

1

Compliance Assessment and Advisory

Our Compliance Experts can help you understand the AI Act, identify whether an AI system falls under the high-risk category, and determine specific compliance requirements.

2

Risk Management and Mitigation Strategies

This involves assessing risks associated with AI systems and developing strategies to mitigate these risks, especially for high-risk AI applications where strict regulatory adherence is mandatory.

3

Ethical AI Frameworks Development

Our Compliance SMEs will set up or review your ethical AI frameworks and guidelines in line with the AI Act’s requirements, focusing on fairness, accountability, transparency, and data governance.

4

Technical and Operational Support

Our Technology Compliance SMEs ensure that AI systems are designed, developed, and deployed in compliance with the AI Act, which may include updating or modifying existing systems.

5

Training and Capacity Building

Our AI Compliance SMEs will help design and roll out training programs for employees on the legal and ethical aspects of AI under the AI Act, fostering organization-wide understanding and best practices in AI usage.

6

Data Governance and Privacy Compliance

Our Compliance Experts ensure alignment with the AI Act and other relevant regulations such as the GDPR, focusing on data privacy, protection, and management.

7

Monitoring and Reporting Mechanisms

T3 Compliance SMEs establish continuous monitoring and reporting processes, as mandated by the AI Act, especially for high-risk AI systems.

8

Strategic Planning for AI Initiatives

Our Technical Compliance Consultants plan AI projects to comply with the AI Act while fulfilling business goals.

9

Stakeholder Engagement and Communication

Our Compliance Experts actively engage with stakeholders, including regulatory bodies, customers, and partners, to discuss AI utilization and compliance.

10

Impact Assessment and Auditing

T3 Compliance SMEs conduct regular impact assessments and audits of AI systems to ensure ongoing compliance and identify areas for improvement.

11

Policy Advocacy and Regulatory Insights

Our Technical Compliance Consultants stay updated on the changing regulatory landscape and engage in policy discussions pertaining to AI. 

Want to hire an AI Regulation Expert? 

Book a call with our experts