AI is quietly reshaping financial choices
The shift is already happening — and the numbers tell a clear story:
- AI shopping assistants now influence an estimated $200B+ in annual consumer spend
- 67% of users trust AI recommendations as much as a friend’s advice
- Fewer than 1 in 5 users can tell when an AI is commercially incentivized to recommend something
Run those numbers together and you get a trust gap with serious financial consequences.
If 67% of people trust AI recommendations at peer level, but 80%+ can’t identify when those recommendations are commercially motivated — that’s not a UX problem. That’s a structural vulnerability in how people make financial decisions.
Think about what that looks like in practice. A user asks their AI assistant which savings account offers the best return. The assistant ranks three options. The top result carries a placement fee. The user, operating with peer-level trust, doesn’t ask why.
That’s the $200B blind spot.
For fintech, the strategic implication is clear. There’s a measurable first-mover advantage in disclosure — not as a legal formality, but as a product feature. Brands that build recommendation transparency into their core UX (how the algorithm works, what commercial relationships exist, where the conflicts sit) are building trust equity that compounds faster than any interest rate.
The brands that will win long-term aren’t the ones with the best AI. They’re the ones whose AI users actually believe.
Trust is the only product that can’t be reverse-engineered.
How much do you actually trust AI-generated financial recommendations — and should you?
🔗 Source: https://www.economist.com/business/2026/04/19/why-your-ai-assistant-is-suddenly-selling-to-you