Global AI Legislative and Policy Updates: August 2024 Round-Up


Introduction

Artificial intelligence (AI) continues to drive significant advances across sectors, and as the technology evolves, governments worldwide are moving to establish robust legislative and regulatory frameworks that harness AI’s potential while managing the associated risks. This article provides an overview of the latest developments in AI legislation, regulation, and policy from around the globe, highlighting key updates from the UK, EU, Japan, UAE, and the United States as of August 2024.

1. The UK’s AI Opportunities Action Plan

On July 26, 2024, the UK Government published the “AI Opportunities Action Plan: Terms of Reference,” marking a significant step toward developing a comprehensive AI Bill. This action plan outlines the government’s strategy to leverage AI’s potential for economic growth and societal advancement. The plan emphasizes several critical objectives, including fostering a globally competitive AI sector, enhancing public services through AI integration, and improving citizen interactions with government systems. Additionally, it focuses on creating robust foundations for AI adoption by enhancing data infrastructure, procurement processes, and regulatory frameworks.

The UK’s approach involves collaboration with leading AI experts from both business and academia to shape a future-proof roadmap for AI regulation. This initiative reflects the government’s commitment to positioning the UK as a leader in the global AI landscape. The action plan’s development will play a crucial role in defining AI’s future trajectory in the UK, balancing innovation with ethical considerations and public trust.

2. The EU’s Technical Standards Delay for the AI Act

The European Union (EU) has been a pioneer in establishing comprehensive AI regulations, notably with the AI Act, which aims to create a legal framework to manage AI’s risks and benefits. However, the EU recently announced a delay in the release of the technical standards essential for the Act’s implementation. These standards are critical for businesses to achieve compliance, especially for high-risk AI systems. Initially scheduled for release in early 2025, the standards are now expected to be ready by the end of 2025, just six months before the AI Act’s primary provisions come into effect in August 2026.

The delay has sparked concerns within the AI sector about meeting compliance deadlines, given the tight timeline. Industry stakeholders worry that the lack of finalized standards could hinder their ability to adapt to new regulatory requirements. Despite these concerns, the chair of the group developing the standards has reassured that the standards will be completed on time and will be of high quality, ensuring they provide the necessary guidance for compliance.

3. Japan’s Shift Towards AI Legislation

Japan is also making significant strides in AI regulation, transitioning from a “soft-law” approach to more formal legislative measures. The Japanese government, which initially favored guidelines that relied largely on industry self-regulation, is now considering specific legislation to address the evolving needs of its AI sector. This shift follows meetings with a newly formed panel of experts, including academics and AI industry leaders, who highlighted the limitations of the current guidelines and identified several “blind spots” in Japan’s AI regulatory framework.

The proposed legislation aims to provide a flexible regulatory environment that can adapt to rapid technological changes while aligning with international standards, such as those set by the EU AI Act. Japan’s move towards formal AI legislation reflects its intent to balance fostering innovation with ensuring responsible AI use, safeguarding public interests, and enhancing international cooperation on AI governance.

4. The UAE’s AI Charter: A New Framework for AI Development

On July 30, 2024, the UAE introduced its “Charter for the Development and Use of Artificial Intelligence,” a significant component of the UAE Strategy for AI. The charter outlines a framework for safe, fair, and inclusive AI development, aiming to position the UAE as a global hub for AI innovation. Key principles in the charter include promoting societal benefits, enhancing human-machine relationships, ensuring equitable technological access, and maintaining compliance with relevant laws.

This charter underscores the UAE’s commitment to fostering a safe and privacy-centric environment for AI applications, emphasizing the importance of trust and ethical considerations in AI development. By setting these standards, the UAE aims to enhance its digital landscape, support sustainable development, and strengthen global AI partnerships.

5. The UK Department for Science, Innovation and Technology’s AI Cybersecurity Consultation

The UK continues to demonstrate its proactive approach to AI regulation with the recent launch of a consultation on AI cybersecurity by the Department for Science, Innovation and Technology (DSIT). The initiative seeks to establish a voluntary code of practice for AI cybersecurity, emphasizing robust security measures as a foundation for AI safety. The consultation aims to gather stakeholder input to inform the development of a global standard for AI cybersecurity practices, reflecting the UK’s strategy to advance its digital infrastructure securely.

The consultation forms part of a broader effort to address the complexities of AI and connected technologies, reinforcing the need for comprehensive policies that safeguard against potential risks while promoting innovation.

6. The US Department of Commerce’s New AI Guidance

In alignment with President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the US Department of Commerce, through the National Institute of Standards and Technology (NIST), released new guidance on July 26, 2024, aimed at helping AI developers evaluate and mitigate risks associated with AI technologies. The releases include final and draft documents that provide a framework for managing AI risks, along with a new software package for testing AI systems against adversarial attacks.

Additionally, the US Patent and Trademark Office has updated its guidance on patent eligibility for AI innovations, supporting continued innovation while managing the unique risks posed by generative AI. These efforts demonstrate the US government’s commitment to fostering a safe and innovative AI ecosystem, balancing regulatory oversight with the need for technological advancement.

Conclusion

As AI continues to evolve, the global landscape for AI legislation and regulation is becoming increasingly complex and dynamic. The updates from the UK, EU, Japan, UAE, and the US highlight the diverse approaches countries are taking to balance innovation with ethical considerations, public safety, and international cooperation. Staying informed about these developments is crucial for businesses, developers, and policymakers to navigate the challenges and opportunities presented by AI effectively.


Some sections of this article were crafted using artificial intelligence technology