How AI is used by regulators around the world

AI is having a significant influence on regulatory supervision, with applications across the financial sector and beyond.

1. Risk and Compliance Management:

AI plays a transformative role in risk and compliance management, with applications such as identity verification, fraud detection, and regulatory reporting. Its potential in these areas has given rise to the term ‘Regtech’ – the use of technology to improve regulatory compliance. Regulators are increasingly embracing these tools as they seek to leverage AI’s capabilities in areas such as mapping obligations and conducting risk management.
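To make the fraud-detection use case more concrete, here is a minimal sketch of the kind of anomaly screen a Regtech tool might run over transaction data. It assumes scikit-learn and a handful of illustrative features; the column names, figures, and contamination rate are assumptions for illustration, not any regulator’s actual model.

```python
# Hypothetical Regtech-style anomaly screen over transaction data.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Illustrative transaction features (amount, time of day, a merchant risk score).
transactions = pd.DataFrame({
    "amount": [25.0, 40.0, 12.5, 9800.0, 33.0, 18.0],
    "hour": [14, 9, 11, 3, 16, 10],
    "merchant_risk_score": [0.1, 0.2, 0.1, 0.9, 0.2, 0.1],
})

# IsolationForest isolates outliers; `contamination` is the assumed share of anomalies.
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(transactions)

# predict() returns -1 for anomalies and 1 for inliers.
transactions["flagged"] = model.predict(transactions) == -1
print(transactions[transactions["flagged"]])
```

In practice a flag like this would only prioritise cases for human review, not decide them.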

2. Supervisory Technology (Suptech):

Supervisory authorities are also developing ‘Suptech’ – technology that supports their own oversight work. For instance, they use AI to assess risks in financial institutions, including credit, liquidity, and governance risks.
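As a rough illustration of how a Suptech tool might screen reported data, the sketch below computes a simplified liquidity coverage ratio (high-quality liquid assets divided by projected 30-day net cash outflows) and flags institutions below the 100% floor. The figures are invented and the calculation is far simpler than any real supervisory model.

```python
# Illustrative Suptech screen: a simplified liquidity coverage ratio (LCR) check.
import pandas as pd

# Invented figures from hypothetical regulatory returns (in millions).
reports = pd.DataFrame({
    "institution": ["Bank A", "Bank B", "Bank C"],
    "hqla": [120.0, 95.0, 210.0],               # high-quality liquid assets
    "net_outflows_30d": [100.0, 110.0, 150.0],  # projected 30-day net cash outflows
})

# LCR = HQLA / net cash outflows; the Basel III standard sets a 100% floor.
reports["lcr"] = reports["hqla"] / reports["net_outflows_30d"]
reports["below_floor"] = reports["lcr"] < 1.0

print(reports[["institution", "lcr", "below_floor"]])
```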

3. Loan Applications and Fraud Detection:

AI adoption in the financial sector has accelerated in areas such as loan applications and fraud detection, driven in part by the COVID-19 pandemic. This has delivered efficiency gains, cost savings, improved forecasting, and better risk management.

4. Legal Landscape:

AI is also profoundly affecting the legal landscape, prompting discussions about the constraints that existing legal instruments place on AI development and use. For example, legal considerations are shaping AI applications in recruitment and in contact tracing for COVID-19. Navigating these constraints requires a deep understanding of the applicable rules and the integration of legal expertise into system design decisions.

5. Governance and Transparency:

As AI’s influence grows, companies are urged to consider its impact on their operations, focusing in particular on unfair outcomes, decision scope, operational complexity, and governance capabilities. Proposed regulations aim to ensure trust and safety by demanding higher standards of transparency and explainability. There are also calls for rules governing how AI algorithms evolve, since continuous learning raises concerns about acceptance.
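To illustrate what an explainability check could look like in practice, the following hypothetical sketch uses permutation importance to show which inputs drive a toy credit-risk model; the synthetic data and feature names are assumptions made purely for illustration.

```python
# Hypothetical explainability check: which features drive a toy credit model?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Synthetic, illustrative features: income, debt-to-income ratio, prior defaults.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 3, n),
])
# Toy label: risk driven mainly by the debt ratio and prior defaults.
y = ((X[:, 1] > 0.6) | (X[:, 2] >= 2)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt the score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "prior_defaults"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Reporting results like these is one simple way a firm could document which factors drive its automated decisions.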

Despite its many benefits, the adoption of AI in regulatory supervision also brings challenges such as data standardisation, resource gaps, privacy risks, cyber threats, explainability, and bias. With continued regulatory collaboration, minimum standards, technical expertise, and ongoing monitoring, these challenges can be managed. It is crucial that organisations build robust AI systems to earn public trust and support financial stability. Ultimately, prudential oversight must evolve alongside the technology to mitigate its risks.