AI, Regulation and Investor Protection: CFA Institute responds to the FCA’s Mills Review

04 March 2026

As artificial intelligence becomes increasingly embedded in financial services, regulators and market participants are grappling with how to balance innovation with investor protection. In this context, CFA Institute has submitted its response to the UK Financial Conduct Authority's (FCA) Mills Review, a forward-looking initiative examining how AI could transform retail financial services in the coming decade.

The review, launched by the FCA in early 2026, seeks to assess how emerging technologies — including generative and increasingly autonomous forms of AI — may affect consumers, firms and market structures by 2030 and beyond. The regulator is particularly interested in understanding how AI could reshape competition, influence consumer behaviour and challenge existing regulatory frameworks in retail finance. 

In its response, CFA Institute supports the FCA’s technology-neutral and outcomes-based regulatory approach. Rather than creating a separate regulatory regime for AI, the Institute argues that existing conduct and governance frameworks can remain effective if they are clearly articulated and adapted to the evolving role of AI systems within financial services. 

A central theme of the submission is the need for regulatory clarity as AI systems move along a spectrum from assistive tools to increasingly autonomous decision-making agents. According to CFA Institute, supervisory expectations should reflect the level of discretion granted to AI systems, with governance requirements becoming progressively stricter as the technology assumes a greater role in consumer-facing decisions. 

The Institute also emphasizes that regulatory oversight should focus on the function performed by AI systems rather than the technology itself. In practice, this means that regulation should be determined by the impact AI has on investor outcomes and market behaviour, not by how the technology is labelled or designed. 

Another key issue concerns accountability. While many financial firms have designated individuals responsible for AI oversight, the Institute warns that formal accountability must be matched by genuine understanding of how these systems operate. Without sufficient technical and operational knowledge among senior managers, governance frameworks risk becoming purely procedural rather than effective safeguards for investors. 

The response also highlights the importance of international coordination. Because AI infrastructure and data ecosystems often operate across borders, CFA Institute encourages ongoing dialogue among global regulators — including bodies such as IOSCO and the Financial Stability Board — to ensure consistent standards and avoid regulatory fragmentation. 

Finally, the Institute underscores the growing importance of “hybrid” professional skills within financial institutions. As AI becomes part of the core infrastructure of financial markets, professionals will increasingly need to combine technical literacy with investment expertise, ethical judgement and fiduciary responsibility. Building this combination of capabilities within firms and regulatory bodies will be critical to ensuring that technological innovation ultimately strengthens, rather than undermines, investor trust. 

CFA Institute's submission reflects a broader debate taking place across the financial industry. AI promises significant efficiency gains and new forms of financial services, but it also raises complex questions about governance, transparency and responsibility. As regulators refine their approach, contributions from professional bodies and market participants will play an important role in shaping a framework that supports both innovation and investor protection in an increasingly technology-driven financial system.