Strategic Recommendations for 2025 

Audit AI Systems:

Periodically review the implementation of new requirements, especially for high-risk and generative AI systems.

Strengthen Governance:

Establish a robust AI governance framework, including clear accountability and human oversight.

Enhance AI Literacy:

Implement training programs so employees can meet mandatory AI literacy requirements, such as those under the EU AI Act.

Monitor Global Trends:

Keep up to date with regional regulatory developments, particularly in fast-moving regions such as Asia-Pacific and Africa.

2025 will be a turning point for artificial intelligence legislation, as jurisdictions move towards more rigorous regulatory frameworks that will shape global adoption of the technology.

Significance in Today's Landscape

As AI becomes more pervasive, global regulatory strides are increasing. The United States, Singapore, China and the European Union, for example, are all taking steps to establish context-specific regulatory mechanisms. Each region is looking to address its own AI regulatory concerns around data privacy, ethical AI frameworks and accountability, against the backdrop of regional societal norms and economic priorities:

  • European Union: The EU is moving forward with the AI Act, a framework that regulates AI systems according to their risk levels. This far-reaching regulation could become one of the world’s toughest AI laws, guaranteeing consumer protection, transparency and accountability.
  • United States: The US is pursuing a sectoral regulatory approach, with a number of proposals around privacy protection and fair competition, and has emphasized – most recently in executive orders and legislative proposals – AI safety, bias mitigation, and the retention of tech leadership.
  • China: Chinese regulation focuses on national security and social stability, with governance and control policies designed to prevent abuse. China has proposed broad restrictions on generative AI, particularly around content generation and social media, stressing the need to align AI development with government policy.
  • Singapore: Widely acclaimed for its forward-looking digital policy framework, Singapore is moving toward “ethical by design”, encouraging companies to adopt principles of transparency and accountability through its Model AI Governance Framework, balancing the promotion of AI innovation with the protection of ethical AI use.

Main AI Regulation by Jurisdiction

European Union: Preparing for the EU AI Act, focusing on prohibitions (Article 5) and mandatory AI literacy (Article 4).
  • Compliance deadline for Articles 4 and 5: February 2025.
  • Advisory programs like Simmons & Simmons’ AI Literacy Programme launched to aid businesses.
  • Prohibited practices remain complex; organizations are urged to seek guidance.
  Source: EU AI Act

Australia: Published AI and ESG guidance emphasizing responsible AI’s role in advancing ESG goals.
  • AI Impact Navigator assesses AI’s ESG performance (rating from ‘poor’ to ‘excellent’).
  • Consumer law consultation open until 12 November 2024.
  • Privacy Act guidance for generative AI training and product use.
  Source: Australian Government and Office of the Australian Information Commissioner Guidance

United States: The Department of Labor issued workplace AI guidelines focusing on transparency and ethical use.
  • Workplace AI guidelines promote worker rights and retraining for displaced employees.
  • National security memo emphasizes protecting AI systems and global leadership in AI.
  • Agencies required to report progress on implementation.
  Source: US Department of Labor Guidelines (October 2024), White House National Security Memo

Poland: Opened consultations to align national legislation with the EU AI Act.
  • General consultation ends 15 November 2024.
  • Article 5 consultation on prohibited practices open until 31 December 2024.
  • Focus on innovation while implementing transparency measures.
  Source: Polish Ministry of Digital Affairs Announcement, October 2024

Hong Kong: Released a dual-track AI policy for financial markets.
  • Promotes responsible AI in financial services.
  • Institutions must ensure safeguards for high-risk AI applications.
  • Collaboration with academia to build AI adoption capacity.
  Source: Hong Kong Government Policy Statement, October 2024

Japan: Issued AI safety and red teaming guidance with a focus on human-centric principles.
  • AI safety principles focus on eliminating bias and ensuring data security.
  • Red teaming protocols include training on misinformation and attacks.
  • Generative AI consultation open until 22 November 2024.
  Source: Japanese AI Safety Institute Guidance and Japan Fair Trade Commission Consultation

United Kingdom: Launched the Regulatory Innovation Office (RIO) to support tech innovation.
  • Aims to position the UK as a top destination for tech investment.
  • Reduces regulatory hurdles for faster tech deployment.
  • Commitment to innovation-driven economic growth.
  Source: UK Department for Science, Innovation, and Technology Announcement, October 2024

G7 Nations: Held a summit to address competition concerns related to AI development.
  • Emphasized risks to IP and privacy in AI.
  • Guiding principles for fair competition and AI risk mitigation.
  Source: G7 Competition Summit Statement, October 2024

Who Does It Impact?

Any firm that relies on AI models in its decision-making, including:

  • Asset Managers
  • Banks
  • Supervisors
  • Commodity Houses
  • Fintechs

How Can We Help?

Working with senior AI and compliance advisors at the forefront of the AI supervisory dialogue, we can support the following activities:

1. Training & Culture Change

  • Design training programs for organization-wide needs 
  • Deliver introductory AI training for the C-suite, senior managers and the wider company, as well as in-depth AI risk management training
  • Deliver training on AI literacy expectations under the EU AI Act
  • Write / co-design AI Principles and policies 
  • Design processes for internal and/or external stakeholder consultation 

2. Maturity Assessments

  • Create fairness and impact testing frameworks that can be integrated into the AI development pipeline (see the sketch after this list).
  • Map and report existing responsible AI and AI governance efforts against our proprietary FIPA maturity curve, and identify next steps to mature along it
  • Advise on 3P considerations and decision-planning 
  • Mitigation Plans: Develop tailored mitigation strategies for each risk, prioritizing high-risk applications.   
  • Conduct audits, assessments and conformity assessments (scope to be confirmed)
  • Model validation 
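
To make the fairness and impact testing point concrete, below is a minimal sketch of a check that could run as a gate in an AI development pipeline. The metric (a demographic parity ratio), the 0.8 threshold, the group labels and the toy data are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a fairness gate for an AI development pipeline.
# The demographic parity ratio, the 0.8 threshold and the toy data are
# illustrative assumptions, not a prescribed standard.
from collections import defaultdict


def demographic_parity_ratio(groups, predictions):
    """Ratio of the lowest to the highest positive-prediction rate across groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)


def fairness_gate(groups, predictions, threshold=0.8):
    """Fail the pipeline run if disparity across groups breaches the threshold."""
    ratio = demographic_parity_ratio(groups, predictions)
    if ratio < threshold:
        raise ValueError(f"Fairness gate failed: parity ratio {ratio:.2f} < {threshold}")
    return ratio


if __name__ == "__main__":
    # Toy example: model approvals broken down by a hypothetical applicant attribute.
    groups = ["A", "A", "A", "B", "B", "B"]
    approved = [1, 1, 0, 1, 1, 0]
    print(f"Parity ratio: {fairness_gate(groups, approved):.2f}")
```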

3. Governance Optimisation

  • Set up AI review boards, escalation paths, incident response plans, and decision-making processes 
  • Design documentation requirements for high-risk AI systems 
  • Provide transparency report templates for the C-suite, boards and publication

4. Risk Management

  • Design AI risk management framework (from scratch) 
  • Adapt existing risk management frameworks to account for AI 
  • Build evaluation frameworks for identifying and prioritizing AI impact (see the sketch after this list)
  • Design and conduct scenario planning and horizon scanning exercises to get ahead of emerging AI risks, likely expanding existing operational resilience efforts
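
As a sketch of what an evaluation framework for identifying and prioritizing AI impact might look like, the example below scores use cases on likelihood and impact and ranks them so the highest-risk applications are reviewed first. The 1-5 scales, the high-risk cut-off and the example portfolio are assumptions for illustration only.

```python
# Minimal sketch of an AI impact evaluation framework: score each use case on
# likelihood and impact (1-5 scales assumed) and rank by the product so the
# highest-risk applications are reviewed first. Scales, cut-off and the example
# portfolio are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact


def prioritize(use_cases, high_risk_threshold=15):
    """Return use cases sorted by risk score, flagging those above the threshold."""
    ranked = sorted(use_cases, key=lambda u: u.risk_score, reverse=True)
    return [(u, u.risk_score >= high_risk_threshold) for u in ranked]


if __name__ == "__main__":
    portfolio = [
        AIUseCase("Credit scoring model", likelihood=4, impact=5),
        AIUseCase("Internal document search", likelihood=3, impact=2),
        AIUseCase("Customer-facing chatbot", likelihood=4, impact=3),
    ]
    for use_case, is_high_risk in prioritize(portfolio):
        flag = "HIGH RISK" if is_high_risk else "standard"
        print(f"{use_case.name}: score {use_case.risk_score} ({flag})")
```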

5. Specialized Expertise: Large & Advanced Companies

  • Review and provide guidance on Transparency Report 
  • Conduct exercises to clarify trade-offs for improved decision-making (e.g. privacy vs. fairness vs. accuracy vs. transparency vs. opportunity costs)
  • Validate/challenge existing AI governance and Responsible AI efforts 
    • Conduct assessment against proprietary FIPA maturity curve. Produce a report for management on gaps and requirements to advance maturity and achieve best-in-class AI governance 
  • Design risk management processes and structures for emerging, unproven and early stage technologies 
  • Advise on when scaled, risk-based protocols are appropriate versus when bespoke risk management approaches are needed

6. Establish a Global AI Compliance Framework

  • Centralized Governance: Create a centralized governance structure to oversee AI compliance globally, led by a Chief AI Officer or Compliance Lead.  
  • Local Adaptation: Appoint regional compliance officers to adapt the global framework to local regulatory requirements (e.g., EU AI Act, US NIST Framework, Australian Privacy Act). 
  • Algorithmic Accountability: Document AI system design, data sources, and decision-making processes to demonstrate compliance (see the sketch after this list).
  • Uniform Standards: Implement baseline AI ethical and governance principles (aligned with global standards such as OECD AI Principles and ISO standards) to ensure consistency across jurisdictions.   
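
To illustrate how the algorithmic accountability and local adaptation points might be operationalized, the sketch below defines a hypothetical entry in a central AI system register, documenting design, data sources, decision-making process and applicable regimes so regional compliance officers can adapt it locally. All field names and example values are assumptions, not a mandated schema.

```python
# Minimal sketch of a central AI system register entry supporting algorithmic
# accountability. Field names and example values are illustrative assumptions,
# not a mandated schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AISystemRecord:
    system_name: str
    owner: str
    purpose: str                       # what decisions the system supports
    data_sources: list[str]            # provenance of training/input data
    decision_process: str              # how outputs feed into human decisions
    risk_level: str                    # e.g. "high" under a risk-tiered regime
    applicable_regimes: list[str] = field(default_factory=list)
    human_oversight: str = "human-in-the-loop"

    def to_json(self) -> str:
        """Serialize the record for audit trails and transparency reporting."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = AISystemRecord(
        system_name="credit-decisioning-v2",
        owner="Retail Credit Risk",
        purpose="Recommend approve/decline outcomes for consumer credit applications",
        data_sources=["internal loan book", "licensed bureau data"],
        decision_process="Model score is reviewed by an underwriter before any decline",
        risk_level="high",
        applicable_regimes=["EU AI Act", "US NIST Framework", "Australian Privacy Act"],
    )
    print(record.to_json())
```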

Want to hire an AI Regulation Expert?

Book a call with our experts.