The financial services sector is currently navigating a “gold rush” of AI-washing. Every legacy software provider has suddenly “integrated” neural networks, and every consultant is now an “AI Strategist.” For the busy adviser, this creates significant noise, making it difficult to distinguish genuine innovation from repackaged automation.
In a regulated environment, the stakes for being wrong are higher than in almost any other industry. This raises a critical question for the 8MDS community: When everyone is claiming expertise, who should you actually listen to?
The Expertise Illusion
The problem isn’t a lack of information; it’s a lack of contextual information. Generalist AI consultants often fail to grasp that for a financial adviser, a “hallucination” isn’t just a technical glitch; it’s a potential breach of FCA COBS rules or a data privacy catastrophe.
True AI expertise in our sector isn’t about the ability to write Python code or build a Large Language Model (LLM) from scratch. It is the ability to bridge the gap between technical capability and regulatory reality.
What Real Expertise Looks Like
For a professional advice firm, a true AI partner or guide demonstrates four key competencies:
- Operational Nuance: They understand the specific friction points of the advice process, from fact-finding and suitability reports to annual reviews, and where AI can (and cannot) safely intervene.
- Regulatory Literacy: They don’t just talk about “innovation”; they talk about GDPR, Consumer Duty, and the FCA’s evolving stance on AI governance.
- Integration Logic: They focus on how AI talks to your existing tech stack (your CRM, your platform, your back-office systems) rather than suggesting you scrap everything for a “magic” standalone tool.
- Strategic Prudence: They are willing to tell you when not to use AI.
The Litmus Test: How to Spot a “Guru”
If you are evaluating an AI vendor or consultant, use this three-point litmus test to separate the salespeople from the specialists:
1. Ask for Use Cases, Not Features: Don’t be impressed by “we use GPT-4.” Be impressed by “we saved a firm 12 hours a week on paraplanning by automating X while maintaining a human-in-the-loop for Y.”
2. Ask about Data Sovereignty: If they cannot explain exactly where your client data goes and how it is used to train (or not train) their models, walk away.
3. Ask about Liability: If the tool produces a flawed suitability recommendation, who carries the PI insurance risk? A real expert understands the chain of accountability.
We don’t need more “gurus” making bold claims about the end of human advice. We need architects who can help us build more resilient, efficient, and compliant firms. In an era of artificial intelligence, human discernment remains our most valuable asset.