Take away points from the FCA's speech on AI in financial crime

Updated: Aug 18, 2023

A speech last week by Rob Gruppetta, Head of the Financial Crime Department at the FCA, provided some interesting insights and a sensible approach to the use of technology as a tool. A draft of the speech, available on the FCA's website, discusses at a high level how the regulator has started to use advanced analytical techniques to learn from large data sets, without over-zealously committing to any one methodology to achieve its outcomes.

Reading through the draft speech, I began to think about, and draw parallels with, the key tenets I have adopted in the past when taking on compliance system design projects, which could be summarised as:

1. Own the problem

2. Build on top of a robust, data-rich infrastructure

3. Evolve your project

4. Build systems around the way the firm does business

5. Build with expansion in mind

I will expand upon the points above and draw on text from the FCA's speech which seems to provide useful teachings.

1. Own the problem

By owning the problem, I mean taking a view on what you are trying to solve, why, and how, as well as looking at the edges of the solution: how does this sit within your system landscape? Dive into the underlying regulatory guidance. If you are looking to automate position disclosure, take ownership of knowing the thresholds and the manual submission processes. Want to carry out trade surveillance? Make it a point to dig out the regulator's definitions of terms and how it captures transgressions in its handbook (e.g. as the FCA does with front running). I have found that once you own the problem and have done a good deal of pencil and paper work, some kind of pseudo code naturally flows on from there. You already have the how in the making.
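To make the pencil-and-paper point concrete, here is a minimal sketch of the kind of pseudo code that flows from owning a position disclosure problem. The 3% initial threshold and 1% steps are illustrative placeholders, not a statement of any regime's actual rules:

```python
def crossed_thresholds(prev_pct, new_pct, initial=3.0, step=1.0):
    """Return the disclosure thresholds crossed (in either direction)
    when a holding moves from prev_pct to new_pct of voting rights.

    Threshold values here are hypothetical; check the actual regime
    (e.g. the FCA Handbook) before relying on any of this.
    """
    lo, hi = sorted((prev_pct, new_pct))
    crossed = []
    t = initial
    while t <= hi:
        if lo < t <= hi:  # threshold sits inside the move
            crossed.append(t)
        t += step
    return crossed

# A move from 2.5% to 5.4% crosses the 3%, 4% and 5% thresholds,
# and the same thresholds are crossed on the way back down.
print(crossed_thresholds(2.5, 5.4))
```

Even a toy like this forces you to confront the real questions: which thresholds apply, whether downward crossings also trigger disclosure, and what the filing process looks like once a crossing is detected.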

When it comes to tackling financial crime, the buck in many ways stops with the FCA - i.e. this is something it has to take ownership of. That it chooses to remain open to new technologies as tools for identifying what looks bad emphasises its ownership of the actual problem. The tools are just that - tools, not the end owner of the problem.

"Of course, we’re mindful that making predictions based solely on machine learning algorithms can be misleading, so we take great care to ensure that these are overlaid with appropriate financial crime and sector expertise. We only ever use them as the first step in a rigorous, multi-layered risk assessment process to help us target the riskiest firms. Simply put, the algorithms improve, rather than replace, supervisory judgment. The results so far look promising: year on year, we have improved risk targeting in our AML supervision work by over 65%"

2. Build on top of a robust, data-rich infrastructure

This could be built out in parallel with defining and scoping the problem. If a firm is committed to a long-term in-house build, I would recommend having your own server/cloud space to capture a history of compliance data points used, or to be used, in your analysis. If you have no restrictions on space, you might want to adopt an "if you can record it, then store it" approach. You may find that recording findings from today's monitoring allows you, months down the line, to overlay an algorithm that learns to recognise and predict a bad situation.
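As a sketch of what "if you can record it, then store it" might look like in practice, the snippet below appends every monitoring finding to a timestamped, append-only table so later analysis has a history to learn from. The table and field names are hypothetical, not a reference schema:

```python
import json
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: one append-only table of timestamped findings.
# The payload is stored as raw JSON so nothing is thrown away.

def open_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS findings ("
        " recorded_at TEXT NOT NULL,"   # UTC timestamp of the record
        " source TEXT NOT NULL,"        # which monitoring check produced it
        " payload TEXT NOT NULL)"       # the finding itself, as JSON
    )
    return conn

def record_finding(conn, source, payload):
    conn.execute(
        "INSERT INTO findings VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source, json.dumps(payload)),
    )
    conn.commit()
```

The design choice is deliberate: rows are only ever appended, never updated, so years of accumulated findings remain available as training history if you later decide to layer a learning algorithm on top.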

In his speech, Rob reveals how a data gap existed and was plugged with the financial crime data return:

"To address this data gap, we introduced a financial crime data return in 2016 to get an industry-wide view on key financial crime risks that firms face. We use the data to help us target our supervisory resources on firms that are exposed to high inherent risk. For example, the amount of cross-border business a firm conducts with risky countries or the proportion of wealthy, politically-exposed clients it holds in its client base.

The returns are filed annually, which will allow us to chart risks and trends over time."

3. Evolve your project

While a big bang approach will get the bulk of the work out of the way in one hit, there will at best be bug fixes, and more likely long-term scope creep. There are a couple of ways to minimise this creep. One is to start with the end (or a point far enough away to be a few releases out) in mind. For example, you might use a layout similar to that in my last post as your end goal. Phase one might deliver only a single surveillance outcome or data view, and this evolutionary approach will allow you to refine your system build over time to better suit your needs and outcomes. The end vision can remain malleable as your business priorities change, but it gives you the ability to see where the build is headed and to evolve your design.
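One way to support this evolutionary approach is a pluggable registry of surveillance checks, so each release can register new tests without reworking the runner. This is a minimal sketch under my own assumptions; the check name and trade fields are hypothetical:

```python
# A registry of surveillance checks. Phase one ships with a single
# check; later releases add more via the same decorator, and the
# runner below never needs to change.

CHECKS = {}

def check(name):
    """Decorator that registers a surveillance check under a name."""
    def register(fn):
        CHECKS[name] = fn
        return fn
    return register

@check("large_notional")  # phase-one delivery: one check, one outcome
def large_notional(trade, limit=1_000_000):
    return trade["notional"] > limit

def run_surveillance(trade):
    """Return the names of every registered check the trade triggers."""
    return [name for name, fn in CHECKS.items() if fn(trade)]
```

When the goal posts move, a new transgression pattern becomes one more registered function rather than a rework of the system's core.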

The FCA is all too aware of the need to respond to changing requirements:

"financial crime doesn’t lend itself easily to statistical analysis – the rules of the game aren’t fixed, the goal posts keep moving"

Approaching your surveillance in a similar fashion will help you keep in mind that fixed tests will only ever deliver a limited output. More meaningful outputs will usually come from understanding that transgressions come in various flavours.

4. Build systems around the way the firm does business

This ties back to some of the points above, chiefly that systems are tools to help you do business, and should be designed to support the principles of your business execution and conflict management.

On the assumption that the firm has been executing business long enough to know its optimal processes, it should be an exercise in documenting and developing that process. A few years back I was working with a control room team which ended up creating workarounds and endless tactical patches (Excel logs) to supplement a vendor product. It wasn't long before they realised the benefit of a system built around how they receive and log deal data and manage wall crossings.

Ultimately the (regulatory) principles of a process reign supreme, and a successful system delivery will support the way the firm carries out its activities.

In the case of the FCA, where principles around the implementation of AI are yet to be formalised, it remains prudent for it to step cautiously while not forgetting the principles of its inquiry.

"So innovation should not be embraced without scrutiny, and the current artificial intelligence (AI) boom is no exception, as Professor Michael Jordan, world-renowned AI researcher and Professor of Statistics and Computer Science at the University of California warned earlier this year (link is external). He said that, just as we built buildings and bridges before there was civil engineering, we’re currently building large, complex AI decision-making systems in the absence of an engineering discipline with sound design principles. And just as early buildings and bridges sometimes fell to the ground unexpectedly and with tragic consequences, many early AI systems are exposing serious flaws in how we’re thinking about AI. While the building blocks of AI have started to emerge, sound principles for putting them together haven’t been developed yet. So as a regulator, a degree of scepticism about innovations like AI is rational"

5. Build with expansion in mind

Regulatory change is inevitable, and so is the need to respond to it. If you were designing or building your surveillance systems pre-LIBOR-fixing and already had a model of interrogative tools sitting on top of a data warehouse, then integrating benchmark tests would have been a relatively easy addition. The surveillance space evolves at a varying pace, accelerated by public announcements of failings and fines. Creating a robust data infrastructure will allow firms to be more nimble when operating reactively, or ideally to operate in a more entrepreneurial way, seeking out new patterns and trends by routinely testing business integrity.
