Following the launch of a discussion paper (DP5/22 or DP22/4) in October 2022 on issues surrounding the use of Artificial Intelligence (AI) and Machine Learning (ML) in supervised firms' operations, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) have now published a corresponding feedback statement (FS2/23). In it, the regulators briefly describe the responses they received to their discussion paper, in which they posed a number of questions aimed at collecting information on
– firms' views on the potential benefits and risks in connection with the use of AI in financial services;
– the application of the current regulatory framework of both the PRA and the FCA to AI;
– the need to adopt further regulation, e.g. to ensure confidence in the use of AI or to protect investors from data misuse; and
– steps the regulators may take to further advance the use of AI and explore its benefits while maintaining a secure and sound financial system.
The key objective of the discussion paper was to enable the FCA and the PRA to determine whether the current regulatory framework is suited to encouraging and fostering the use and development of Artificial Intelligence while also being sufficient to address any potential risks arising from it.
The key issues raised by respondents are briefly noted below. For more detailed information, please consult the original feedback statement linked above.
#### The main points emphasized by those responding to DP5/22 were as follows:
The majority of respondents believed that creating a specific regulatory definition of AI for the financial services sector in the UK would not be beneficial for ensuring the safe and responsible use of AI. Their reasons included concerns that such a definition might quickly become obsolete due to the rapid pace of technological advancement, and that it could be either overly broad, potentially covering non-AI systems, or overly narrow, failing to encompass all relevant use cases. They also noted that a sector-specific definition might inadvertently encourage regulatory arbitrage and could conflict with the regulatory authorities' technology-neutral stance. Instead, in view of AI capabilities evolving rapidly, akin to other advancing technologies, regulators could adapt by creating and maintaining dynamic regulatory guidance that is periodically updated to reflect best practices and examples.
Most respondents also considered ongoing industry engagement to be of great importance. Initiatives like the AI Public Private Forum have proven valuable and could serve as models for continued collaboration between the public and private sectors. Respondents further noted that the current regulatory landscape for AI is complex and fragmented; data regulation in particular is fragmented and varies among regulators. As a consequence, respondents recommended greater regulatory harmonization to address data-related risks, especially those concerning fairness, bias, and the handling of protected characteristics.
In the same context, many respondents referred to legal and regulatory advancements in other regions, such as the proposed EU AI Act, and stressed the advantages of international regulatory alignment, especially for multinational companies. They pointed out that the development of effective and adaptable cooperation mechanisms for sharing information and lessons learned across jurisdictions could reduce obstacles and promote beneficial innovation in AI and ML.
In response to the question about potential benefits and risks that supervisory authorities should focus on, a majority of respondents emphasized the importance of prioritizing consumer protection. They acknowledged that AI can bring benefits to consumers, such as improved outcomes, personalized advice, cost reductions, and better pricing. However, they also noted that AI poses risks, including bias, discrimination, lack of transparency and explainability, and the potential exploitation of vulnerable consumers or those with protected characteristics.
Finally, the majority of respondents did not find the idea of creating a new Prescribed Responsibility (PR) for AI to be allocated to a Senior Management Function (SMF) helpful for improving AI governance. They cited various reasons, such as the multitude of potential AI applications within a firm or the belief that relevant responsibilities were already covered by existing PRs or could be addressed in the "statements of responsibilities" for current SMFs. Additionally, many respondents argued that firms should have local owners to maintain accountability for AI and that adding this PR could place an excessive burden on the Chief Operations Officer, who was seen as the most likely SMF to be assigned the PR for AI.