Who Owns AI Risk? Why the SMCR Regime Is Under Pressure to Catch Up

The financial services industry is undergoing a significant transformation with the rise of artificial intelligence (AI). As firms incorporate AI into their operations, they face the challenge of making it work within the existing regulatory framework of the Senior Managers and Certification Regime (SMCR). Accountability is central to the SMCR, especially for senior managers, who must act with proper conduct and ensure regulatory compliance. Managing the deployment of AI, which brings both opportunities and risks, is therefore key as firms strive to meet SMCR requirements.

This article is designed to help financial services firms clarify senior managers’ responsibilities for AI-related activities and risks. Integrating these state-of-the-art tools while remaining compliant is pivotal at this juncture. By getting this right, firms can realize the full potential of AI while preserving the standards of integrity and scrutiny the SMCR demands.

The SMCR is a fundamental framework aimed at strengthening individual accountability in regulated firms. It is designed to ensure that individuals at all levels understand what they are responsible and accountable for, encouraging a culture of openness and accountability. Its three main elements are the Senior Managers Regime, the Certification Regime, and the Conduct Rules.

Senior Managers Regime

This identifies senior managers within the firm and clearly defines their responsibilities, so that each senior manager is accountable for the areas of the business for which they are responsible.

Certification Regime

Staff who are not senior managers but who hold significant harm functions, as defined by the relevant firm, must be certified by their firm as meeting, and continuing to meet, the Financial Conduct Authority’s (FCA) and/or the Prudential Regulation Authority’s (PRA) standards of fitness and propriety.

Conduct Rules

These are high-level standards of individual behavior that apply to almost all staff working in regulated firms.

The SMCR applies in full to both solo-regulated and dual-regulated firms, underlining its importance in protecting the integrity of the market. Solo-regulated firms are regulated solely by the FCA, while dual-regulated firms are regulated by both the PRA and the FCA. This matters because it ensures that all firms operate consistently under a single set of rules, maintaining the integrity of the industry.

The Transformative Power of AI and New Risks

AI is transforming many sectors, and financial services is no exception. Machine learning algorithms, for example, analyze transaction data to detect unusual activity and prevent fraud in real time. By rapidly processing vast volumes of market data and finding the optimal price at which to execute, AI is a key driver behind algorithmic trading. Customer service has been transformed by AI-powered chatbots, which provide efficient service around the clock.
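The fraud-detection idea above can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in (a z-score outlier check) for the trained machine learning models real firms deploy; the payment history and threshold are invented for illustration:

```python
from statistics import mean, stdev

def flag_unusual_transactions(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the
    mean -- a crude, illustrative stand-in for a trained ML model."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    # Guard against sigma == 0 (all amounts identical).
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# A run of ordinary card payments with one outlier.
history = [20.0, 35.5, 18.2, 42.0, 27.3, 31.1, 24.8, 5000.0]
print(flag_unusual_transactions(history))  # -> [5000.0]
```

Production systems learn from labeled historical data rather than relying on a single statistical rule, but the shape of the task is the same: score each transaction against what "normal" looks like and flag the outliers in real time.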

But this transformation brings new and significant risks. A key concern is bias in AI models; if unchecked, it can lead to unfair outcomes. Transparency about how AI systems operate and make decisions is critical. Data security also matters: with extensive data sets in use, breaches could compromise the integrity of customer and business data. Operational resilience is essential to ensure AI systems do not fail in a changing environment and to avoid financial loss from AI errors. Ethical conduct is paramount, requiring a system of accountability so that AI is used responsibly, which underlines the need for strong governance over AI and its data. Regulators are focusing increasingly on these areas, compelling firms to address transparency, accountability, and ethical AI use. By prioritizing these areas, the financial services industry can maximize the benefits of AI while effectively managing its risks.

Assigning AI Accountability in the Context of the SMCR

The growing use of AI in the operations of financial firms is closely linked to the SMCR. This regulatory structure places accountability on those holding Senior Management Functions (SMFs) connected with cutting-edge technologies such as AI. Under the SMCR, SMF holders are accountable for overseeing both established activities and next-generation systems, carrying full responsibility for compliance and the effective management of AI-related risks.

A fundamental building block in allocating accountability for AI is mapping AI-related risks against Statements of Responsibilities, setting out which senior manager has oversight of which AI functions and their associated risks. This process helps ensure that all potential AI-specific risks are identified and controlled. It not only reinforces the compliance structure but also enhances organizational clarity and accountability.
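One way to picture the mapping step is as a simple register that pairs each identified AI risk with an accountable senior manager, plus a check that nothing falls through the gaps. Every risk label, role title, and SMF code below is invented for illustration:

```python
# Hypothetical register: each identified AI risk is mapped to the senior
# manager whose Statement of Responsibilities covers it.
AI_RISK_REGISTER = {
    "model bias":          {"owner": "Chief Risk Officer",       "smf": "SMF4"},
    "data security":       {"owner": "Chief Operations Officer", "smf": "SMF24"},
    "algorithmic trading": {"owner": "Head of Trading",          "smf": "SMF3"},
}

# The firm's full inventory of AI-specific risks (also illustrative).
KNOWN_AI_RISKS = {"model bias", "data security",
                  "algorithmic trading", "chatbot mis-selling"}

def unassigned_risks(register, known_risks):
    """Return AI risks that no senior manager currently owns."""
    return sorted(known_risks - set(register))

print(unassigned_risks(AI_RISK_REGISTER, KNOWN_AI_RISKS))
# -> ['chatbot mis-selling']
```

In practice this lives in governance documentation rather than code, but the discipline is the same: enumerate the risks, name an owner for each, and surface any risk with no accountable senior manager.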

Established oversight arrangements are also imperative if AI systems are to be developed and used responsibly. These must define senior managers’ responsibilities and support integrated AI governance. A well-articulated line of oversight helps firms manage the lifecycle of AI projects, from creation to implementation, thereby minimizing risks and maintaining regulatory compliance.

Senior managers must also meet the SMCR’s expectations around ‘reasonable steps’: they must take steps appropriate to manage and mitigate AI-specific risks. It is vital that senior managers are equipped with the tools and knowledge to meet these expectations. Proactively managing AI operations through forward planning and regular review supports accountability and preserves the operational integrity of the firm.

In summary, with AI acting as a disruptor within the financial sector, continued diligence over senior management accountability under the SMCR will be essential to overseeing AI-driven change. By situating AI governance within existing regulatory structures, firms can introduce AI into their operations while mitigating its risks.

Application of SMCR Responsibilities to AI Operations

With the growing integration of AI in financial services, the SMCR provides a strong governance and risk management toolkit. Introduced by the UK Financial Conduct Authority and the Prudential Regulation Authority, the SMCR serves as an essential tool for ensuring that AI operations meet ethical and regulatory requirements.

Senior Managers Regime: Governance and Risk Management

Central to the SMCR is the Senior Managers Regime (SMR), which requires senior managers to put in place the governance structures and risk management protocols that AI operations need. Senior managers are now tasked with overseeing AI systems, ensuring they are consistent with the institution’s strategy and compliant with regulatory standards. This includes direct accountability for AI governance arrangements, promoting transparency and effectiveness in AI risk assessments.

Certification Regime: Fitness and Propriety

The Certification Regime extends this governance model beyond senior managers, ensuring that anyone engaged in the development and deployment of AI is fit and proper. For AI operations, this means all relevant individuals must have the requisite skills and training to manage complex AI systems. The regime mandates annual assessments, so that competency keeps pace with advances in AI and service integrity is maintained.
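The annual-reassessment requirement above can be sketched as a simple date check over certified staff. The role names, dates, and one-year period below are hypothetical, intended only to show the shape of the tracking task:

```python
from datetime import date, timedelta

# Assumed annual reassessment cycle for certified AI roles.
CERTIFICATION_PERIOD = timedelta(days=365)

def due_for_reassessment(staff, today):
    """Return the names of certified staff whose last fitness-and-
    propriety assessment is more than a year old."""
    return [name for name, last_assessed in staff.items()
            if today - last_assessed > CERTIFICATION_PERIOD]

# Invented records for an AI model-development team.
team = {
    "model_developer_a": date(2024, 1, 10),
    "model_validator_b": date(2023, 3, 2),
}
print(due_for_reassessment(team, today=date(2024, 6, 1)))
# -> ['model_validator_b']
```

Real certification tracking sits inside HR and compliance systems, but the underlying control is exactly this: a recurring check that no certified role drifts past its reassessment date unnoticed.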

Conduct Rules: Ethical AI Decision Making

The SMCR Conduct Rules apply directly to AI decision-making, creating a clear ethical framework for those working with AI. The rules require employees to act with integrity and with due skill, care, and diligence. In the AI context, this means exercising ethical judgment over how AI is deployed and ensuring that AI outcomes do not disadvantage customers.

Assessment of Fitness and Propriety

Assessments of fitness and propriety for AI roles must continue in accordance with FCA and PRA expectations. Fundamental qualities such as honesty, integrity, and reputation remain essential to ensuring that those taking on AI responsibilities are competent. Such assessments foster an environment of accountability and dependability around AI operations, helping managerial teams consistently meet regulatory expectations.

In summary, applying SMCR responsibilities to AI operations provides a structured, compliant approach to AI governance, instilling confidence in growing AI ventures across financial services. The emphasis on senior managers’ and certification obligations highlights a firm’s commitment to ethical AI practices and to its regulatory obligations to stakeholders.

Navigating AI compliance is one of the most significant challenges facing firms as they integrate AI into their business activities. Existing regulation struggles to keep pace with technological development, making it harder for firms to remain compliant. This shifting landscape can create uncertainty about where accountability lies and what good practice looks like. Data governance adds further complexity: with vast amounts of data involved in AI systems, maintaining data integrity and alignment with data protection laws is crucial. Moreover, there is a marked scarcity of expertise in AI compliance, which may limit firms’ ability to manage these challenges successfully; specific expertise is needed to navigate compliance requirements accurately.

To overcome these challenges, firms can follow a number of best practices. Developing comprehensive AI ethics frameworks can set out principles for ethical AI use, ensuring that AI initiatives conform to wider ethical norms and any relevant legal obligations. Robust testing and clear documentation processes can improve accountability and help firms to evidence regulatory compliance and the desired outcomes of their AI systems. Regular training is essential to keep staff informed about both technological developments and regulatory change, to ensure ongoing compliance and mitigate risk.

Proactive identification and mitigation of risk is key to managing regulatory compliance in AI effectively. By regularly scanning for potential pitfalls and addressing them early, firms can reduce their legal exposure and protect their reputation. This approach not only ensures compliance with the law as it stands today but also builds trust with stakeholders, supporting the sustainable, compliant use of innovative AI solutions.

In the fast-moving world of AI, the FCA and PRA have been at the forefront of defining AI accountability, stressing the importance of transparency in AI deployment and calling for firms to monitor their AI systems closely to meet SMCR requirements. Recent guidance from both regulators requires firms to demonstrate robust AI risk management processes that satisfy supervisory expectations. This includes maintaining detailed records and transparent processes throughout the AI life cycle that evidence responsible AI use.

The regulatory environment around AI is still developing. However, the FCA and PRA have been clear about what is and is not acceptable: innovation is welcomed in financial services, but it cannot come at the cost of harm to, or disorder in, the market. Firms need robust governance frameworks to ensure AI accountability and must be able to evidence appropriate governance over AI implementations under the SMCR. It is therefore imperative that firms strengthen their compliance strategies to meet these requirements. As regulators continue to develop their AI policy frameworks, businesses will need to understand and embed those requirements early to stay compliant and, in doing so, build trust in their AI strategies.

In summary, the SMCR is a key component for establishing AI accountability in regulated firms. Applied to AI, the principles of the SMCR hold senior managers to a high standard of responsibility, obliging firms to implement and oversee AI with transparency and ethics at the forefront. As AI advances, firms should periodically revisit and improve their governance structures to meet regulatory standards, staying one step ahead and reinforcing their systems to embed continuing responsibility and accountability in AI deployment.
