AI Regulation

EU AI Act

Our AI Compliance Advisors are playing a pivotal role in shaping AI regulation in Europe, particularly the development of the EU AI Act, which is on track to become the world's first comprehensive AI law. This groundbreaking legislation, which aims to regulate the use of AI within the EU, includes key amendments such as a ban on AI-based biometric surveillance and mandatory disclosure of AI-generated content by systems such as ChatGPT. With these rules expected to be enacted into law by the end of 2023, our advisors are helping steer this major regulatory advancement.

DOWNLOAD THE EU AI ACT GUIDELINE

Get your free copy of the EU AI Act Guideline

Overview of Topic

The EU Artificial Intelligence Act (AI Act) was proposed by the European Commission in April 2021 and is the first horizontal regulation of AI across the EU, focusing on how AI systems are used and the risks they pose. As with the GDPR, this regulatory approach is likely to have influence beyond the EU and may even set global standards. Key developments in 2023 include the European Parliament's approval of amendments on June 14, which expanded the scope of the AI Act, and the adoption of the Parliament's negotiating position. The next phase involves negotiations with EU member states to finalize the law and its implementation.

Some of the main areas include:

Conformity Assessments: 

Before deployment, high-risk AI systems must undergo conformity assessments to ensure they meet the Act's requirements. Some can be self-assessed by providers, while others require verification by third parties.

Risk Management:

High-risk AI systems must meet requirements across several areas (illustrated in the sketch below):

• Data governance (ensuring high-quality datasets without biases)
• Documentation (providing proof of compliance)
• Transparency (ensuring users know they are interacting with an AI system)
• Human oversight (to minimize erroneous outputs)
• Robustness, accuracy, and cybersecurity
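To make these requirement areas more concrete, below is a minimal, purely illustrative Python sketch of the kind of internal pre-deployment checklist an organization might keep for a high-risk system. The class, field names, and example system name are our own assumptions for illustration and are not terms prescribed by the AI Act.

from dataclasses import dataclass, field
from typing import List

@dataclass
class HighRiskSystemChecklist:
    # Hypothetical checklist mirroring the requirement areas listed above;
    # field names are illustrative assumptions, not terms defined by the AI Act.
    system_name: str
    data_governance_reviewed: bool = False      # datasets checked for quality and bias
    technical_documentation_done: bool = False  # proof of compliance compiled
    transparency_notice_shown: bool = False     # users told they interact with an AI system
    human_oversight_defined: bool = False       # escalation path to catch erroneous outputs
    robustness_tests_passed: bool = False       # accuracy and cybersecurity testing
    open_items: List[str] = field(default_factory=list)

    def ready_for_conformity_assessment(self) -> bool:
        """Return True only when every requirement area has been addressed."""
        checks = {
            "data governance": self.data_governance_reviewed,
            "documentation": self.technical_documentation_done,
            "transparency": self.transparency_notice_shown,
            "human oversight": self.human_oversight_defined,
            "robustness, accuracy and cybersecurity": self.robustness_tests_passed,
        }
        self.open_items = [area for area, done in checks.items() if not done]
        return not self.open_items

if __name__ == "__main__":
    checklist = HighRiskSystemChecklist(system_name="example-credit-scoring-model")
    checklist.data_governance_reviewed = True
    print(checklist.ready_for_conformity_assessment())  # False: four areas still open
    print(checklist.open_items)

In practice, each item would be backed by evidence gathered during the conformity assessment described above; the sketch only shows how the requirement areas map onto a simple internal tracking structure.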

Bans on Certain AI Practices: 

The Act prohibits certain AI practices that could harm people's rights. Examples include systems that manipulate human behavior, exploit the vulnerabilities of specific individuals or groups, or enable social scoring by governments.

EU Database for Stand-Alone High-Risk AI:

The Act proposes an EU-wide database in which stand-alone high-risk AI systems must be registered to maintain transparency.

Governance and Implementation: 

A European Artificial Intelligence Board would be established to ensure consistent application across member states.

Fines for Non-compliance:

Companies violating the regulations might face hefty fines, similar to the penalties under the General Data Protection Regulation (GDPR).

Significance in Today's Landscape

The legislative procedure is expected to conclude by the end of 2023, with a grace period of two to three years before enforcement begins. During this grace period, the European Commission aims to foster early implementation through industry collaboration. Some provisions of the AI Act, particularly those concerning high-risk systems, have been agreed upon, but many crucial elements, such as definitions, remain unsettled. The Act is framed primarily as consumer protection rather than product safety legislation. There is a notable divergence between the EU and US approaches to AI regulation, and the EU AI Act relies heavily on standards and implementing acts for the classification of AI systems, the implementation of requirements, and assessments.

WHO DOES IT IMPACT?

The EU AI Act applies to both regulated and unregulated firms, including:

Asset Managers
Banks
Supervisors
Commodity Houses
Fintechs

How Can We Help?

The AI Act is a proposed European Union regulation for the safe and ethical development and use of artificial intelligence (AI). In response, organizations can undertake a range of activities to ensure compliance and the ethical application of AI. Working with senior AI and Compliance advisors who are at the forefront of the AI supervisory dialogue, we can support the activities below.

These can be summarised in the following steps:

1

Compliance Assessment and Advisory

Our Compliance Experts can help you understand the AI Act, identify whether an AI system falls under the high-risk category, and determine specific compliance requirements.

2

Risk Management and Mitigation Strategies

This involves assessing the risks associated with AI systems and developing strategies to mitigate them, especially for high-risk AI applications where strict regulatory adherence is mandatory.

3

Ethical AI Frameworks Development

Our Compliance SMEs will set up or review your ethical AI frameworks and guidelines in line with the AI Act's requirements, focusing on fairness, accountability, transparency, and data governance.

4

Technical and Operational Support

Our Technology Compliance SMEs ensure that AI systems are designed, developed, and deployed in compliance with the AI Act, which may include updating or modifying existing systems.

5

Training and Capacity Building

Our AI Compliance SMEs will help design and roll out training programs for employees on the legal and ethical aspects of AI under the AI Act, fostering organization-wide understanding and best practices in AI usage.

6

Data Governance and Privacy Compliance

Our Compliance Experts ensure alignment with the AI Act and other relevant regulations such as the GDPR, focusing on data privacy, protection, and management.

7

Monitoring and Reporting Mechanisms

T3 Compliance SMEs establish continuous monitoring and reporting processes, as mandated by the AI Act, especially for high-risk AI systems.

8

Strategic Planning for AI Initiatives

Our Technical Compliance Consultants plan AI projects to comply with the AI Act while fulfilling business goals.

9

Stakeholder Engagement and Communication

Our Compliance Experts actively engage with stakeholders, including regulatory bodies, customers, and partners, to discuss AI utilization and compliance.

10

Impact Assessment and Auditing

T3 Compliance SMEs conduct regular impact assessments and audits of AI systems to ensure ongoing compliance and identify areas for improvement.

11

Policy Advocacy and Regulatory Insights

Our Technical Compliance Consultants stay updated on the changing regulatory landscape and engage in policy discussions pertaining to AI. 

Want to hire an AI Regulation Expert?

Book a call with our experts