
The FCA’s AI regulation and its impacts on industry

17 April 2026


Under the Senior Managers &amp; Certification Regime (SM&CR), senior managers are required to demonstrate that they understand and control the risks within their area of responsibility. In practice, this means they are duty-bound not only to attain a requisite level of understanding of the AI systems they oversee, but also to ensure that controls are in place to mitigate potential risks — or face being held accountable for any consumer harm that may arise.

Stakeholders have argued that holding senior managers accountable for AI systems they do not fully understand is unduly burdensome. However, the FCA contends that it is reasonable to expect a senior manager to comprehend a system their firm knowingly deploys, and that a lack of understanding is no defence.

Whether there is a genuine conflict depends on two factors. The first is whether AI systems are accepted as inherently opaque. Certainly, there is ambiguity in many AI models’ internal decision-making processes, compounded by the computational complexity of the systems themselves. This does not mean, however, that a level of pragmatic transparency cannot be achieved — one that allows individuals to understand enough to make decisions about the practical, ethical, and safe use of a system. The second is the level of knowledge a senior manager is assumed to have: is it enough for a senior manager to understand the general premise of a tool and its features in order to make decisions about its use, without a deeper awareness of its more complex workings? This question will hopefully be answered by the publication of comprehensive, practical guidance from the FCA, as recommended by the Treasury Committee.

Comprehensive guidance from the FCA that complements the existing SM&CR rules could help iron out these perceived discrepancies by further explaining the scope of accountability and the level of assurance expected from senior managers in respect of harm caused through the use of AI. The longer the FCA takes to publish such guidance, the greater the harm: the Committee’s report makes clear that various stakeholders are currently uncertain about their regulatory obligations, and this uncertainty could have a chilling effect on the adoption of AI models by financial institutions.
