AI Legislative Developments: August 2024 Round-Up


Introduction
In the ever-evolving landscape of artificial intelligence (AI) regulation, it can be challenging to stay abreast of the numerous legislative, regulatory, and policy developments across the globe. This article delves into the major AI-related developments reported in August 2024, providing a concise overview of important updates from the United States, South Africa, UNESCO, Hong Kong, the Philippines, and China. As countries refine and enact their AI regulations, these policies shape the ethical, economic, and technological contours of the global AI ecosystem.

California AI Bill Moves Closer to Becoming Law

On 15 August 2024, California’s proposed “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB-1047) moved a step closer to becoming law. This bill, which applies to developers of large AI models requiring significant computational resources, aims to regulate the safety and security of frontier AI models. The key requirements for compliance include:

  • Shutdown Capability: Developers must have the ability to promptly and fully shut down AI models if needed.
  • Security Protocols and Audits: Companies need to establish safety protocols and undergo annual independent audits to assess their compliance with the act.
  • Certificate of Compliance: Developers must submit an annual certificate attesting to their compliance with the act.

The amended version of SB-1047 notably removed the proposed new regulatory body for oversight and eliminated criminal penalties for non-compliance, opting instead for civil penalties enforced by the Attorney General. The amended bill replaces the standard of “reasonable assurance” for developers with “reasonable care.” Despite facing opposition from industry stakeholders, the bill reflects California’s commitment to regulating emerging AI technologies in a manner that prioritizes safety while attempting to support innovation.

South Africa’s National AI Policy Framework

On 14 August, South Africa introduced a comprehensive national AI policy framework intended to guide the responsible use of AI for economic growth and societal advancement. The framework rests on twelve strategic pillars, each targeting different aspects of AI development and deployment. These pillars include:

  1. Talent Development and Capacity Building: By integrating AI into education and establishing specialized training programs, South Africa aims to cultivate a skilled AI workforce.
  2. Research and Innovation: The framework focuses on establishing research centers and supporting startups through public-private partnerships to drive technological advancements.
  3. Public Sector Implementation: To enhance government efficiency, AI will be integrated into administrative processes, guided by ethical principles to ensure responsible deployment.
  4. Ethical AI and Privacy Protection: The development of AI ethics guidelines and robust privacy measures ensures that AI aligns with human rights and mitigates risks like bias and privacy infringements.

South Africa’s framework represents an ambitious national agenda, aiming to leverage AI in ways that are both economically beneficial and ethically sound.

UNESCO’s Guidelines for AI in Courts and Tribunals

UNESCO launched a public consultation on 2 August for its draft guidelines concerning the use of AI in courts and tribunals. This consultation came in response to findings that, while most judicial operators are aware of AI technologies, few have implemented formal guidelines on their use. The proposed guidelines aim to ensure that AI systems in judicial settings uphold justice, human rights, and the rule of law.

The use of AI in legal settings holds great promise for enhancing efficiency and accessibility, but it also raises questions about bias, transparency, and fairness. UNESCO’s efforts are geared towards balancing these considerations by establishing a framework that aligns AI usage with core legal principles, thereby fostering greater trust in AI-enhanced judicial processes.

Hong Kong’s Guiding Principles for Generative AI in Consumer Protection

The Hong Kong Monetary Authority (HKMA) issued new guiding principles on 19 August, aimed at ensuring consumer protection in the use of generative AI by financial institutions. These principles cover four major areas:

  1. Governance and Accountability: Financial institutions must establish clear lines of responsibility, with senior management accountable for AI use.
  2. Fairness: Generative AI applications must deliver fair and unbiased outcomes, and consumers must be given the option to opt out and request human intervention where desired.
  3. Transparency and Disclosure: Institutions are required to disclose the presence of generative AI systems, including their limitations and intended purposes.
  4. Data Privacy and Protection: AI implementations must adhere to local data privacy laws, including the Personal Data (Privacy) Ordinance, to safeguard consumer information.

Hong Kong’s proactive approach aims to foster public confidence in AI tools, particularly in high-stakes areas like financial services. By focusing on transparency, fairness, and privacy, these guidelines seek to ensure that AI technologies are deployed in ways that are ethical and consumer-friendly.

Philippines Proposes Deepfake Accountability and Transparency Act

In an effort to combat misinformation and uphold digital ethics, the Philippines proposed the Deepfake Accountability and Transparency Act in July 2024. This bill aims to regulate the creation and distribution of deepfake content—technologically altered media designed to misrepresent individuals or events.

The proposed law mandates explicit disclosures for any deepfake content, detailing the type of alteration and clearly indicating AI involvement. Depending on the nature of the media (visual, audio, or both), these disclosures may need to be spoken, written, or both, with penalties for non-compliance ranging from fines to escalating sanctions for repeat violations.

This legislative move aims to curb the malicious use of deepfake technology, emphasizing the importance of transparency and accountability in AI-driven media creation. By imposing clear requirements for content labeling, the bill seeks to empower consumers to better understand and assess the media they encounter online.

China Registers Over 190 Generative AI Models

China continues to make rapid advances in the AI domain, with over 190 generative AI models registered as of 12 August. The Cyberspace Administration of China (CAC) has reported substantial adoption, with more than 600 million users registered across these platforms.

China’s AI strategy includes the promotion of domestic technology through the development of independently controlled computing chips and algorithm frameworks. By fostering self-reliance, China aims to secure a leading position in AI technology globally. Furthermore, the CAC plans to improve safety standards related to AI classification, testing, and emergency response, underscoring the nation’s focus on regulatory stability in its AI initiatives.

With these measures, China aims not only to encourage widespread AI use across sectors such as healthcare and education but also to ensure that the expansion of AI capabilities is matched by corresponding safety and regulatory oversight. This focus on balance highlights China’s attempt to build robust AI governance frameworks to mitigate risks while promoting technological growth.

 

Conclusion

This month’s round-up of AI legislative updates underscores the growing complexity of AI governance across the world. From ethical guidelines in the judiciary to stringent consumer protections and proactive measures against misinformation, governments and international bodies are actively trying to regulate the evolving AI landscape. These efforts reflect a shared commitment to ensuring that AI technology serves humanity’s best interests while safeguarding fundamental rights and public safety.


Some sections of this article were crafted using artificial intelligence technology