04 February 2026
A CFA Institute perspective on why financial professionals must rethink their AI strategies to preserve independence, avoid overreliance on Big Tech, and navigate the shifting terrain of agentic AI.
The rapid development and adoption of large language models (LLMs) have ignited profound transformations across the financial sector. Yet as the dust settles on the first wave of experimentation and excitement, a strategic question looms: how can firms and professionals in finance adapt their AI approaches without ceding control to a handful of dominant players? In his recent Enterprising Investor article, Dan Philps, PhD, CFA, reflects on this post-LLM reality through the lens of insights shared by Yann LeCun - Meta’s Chief AI Scientist - during a UK Parliament hearing. The key argument: the real risk in today’s AI is not raw model size, but the growing concentration of control over AI infrastructure, user interfaces, and data pipelines.
LeCun’s critique of today’s ecosystem is sharp and clear. While LLMs have captured attention for their conversational fluency and content generation capabilities, they remain fragile, statistically driven tools. True machine intelligence - characterized by reasoning, memory, and agency - remains far from reality. In the meantime, the power to influence decisions is shifting not to the models themselves, but to the entities that deploy them at scale.
This dynamic has major implications for the investment world. As AI assistants become embedded in research workflows, risk assessment, portfolio construction, and client communications, there is a growing threat of dependency on black-box systems and proprietary platforms. The more centralized the infrastructure, the greater the danger of regulatory capture, information asymmetries, and misaligned incentives - particularly when AI tools are used without sufficient transparency or accountability.
Philps highlights LeCun’s call for an open and federated future: one in which AI models are trained across distributed networks, data remains decentralized, and institutions retain sovereignty over their own tools and data flows. This vision stands in stark contrast to the current trend, where access to high-performing AI systems is often gated by a few technology giants.
For financial professionals, this is more than a technical debate. It is a strategic imperative. Relying on third-party AI tools without questioning their architecture, governance, and alignment can lead to unintended consequences, from poor investment decisions to systemic market risks. Institutions must demand explainability, traceability, and interoperability in the AI tools they adopt.
For CFA Society Italy members, the message is timely and urgent. In an era where agentic AI is on the horizon - systems that may soon operate with greater autonomy - staying informed and critically engaged is essential. Professionals must not only understand the mechanics of AI, but also the power structures behind it. By doing so, they can safeguard their analytical independence, uphold fiduciary standards, and contribute to a more equitable and resilient financial system.
AI may reshape finance, but who controls that reshaping is a question that must not be left unanswered.