In 2025, 42% of companies ended AI initiatives — up from 17% in 2024. The cause is almost always the same: chasing technology instead of solving a customer problem. Here is how to build a chatbot that survives.
A nearshore industry report surveying 819 contact-centre executives across Europe found that 42% of companies ended AI initiatives in 2025. That is up from 17% in 2024 — the failure rate more than doubled in a single year. The report's authors identify the pattern clearly: initiatives fail when they prioritise what the technology can do over what the customer actually needs.
This is not a new failure mode. The same pattern produced a generation of bad IVR systems — built around menu logic rather than caller intent — and early chatbots that answered every question with a list of links. Understanding what separates an FAQ bot from an outcome agent is the clearest way to avoid repeating it.
An FAQ bot answers questions. It receives a message, matches it to a pre-written answer, and returns that answer. At its best, it deflects simple inquiries. At its worst, it traps customers in loops when their question does not match any pattern the system was trained on.
An outcome agent resolves situations. It does not just provide information — it takes action: books an appointment, processes a request, routes the inquiry to the correct team with full context, or escalates to a human when the situation genuinely requires it. The customer ends the interaction having had their actual need met, not merely having been informed.
The gap between the two is largely a scoping and design decision, not a technology limitation. The same underlying AI can be either — the difference is in what the build is optimised for.
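To make the design difference concrete, here is a minimal sketch of the same inbound message handled both ways. Every name in it (`match`, `book_appointment`, `escalate_to_human`, the `FAQ` table) is hypothetical and purely illustrative — it is not any real product's API, just the shape of the two scoping decisions:

```python
# Illustrative sketch only: FAQ bot vs outcome agent on the same message.

FAQ = {
    "opening hours": "We are open 9:00-17:00, Monday to Friday.",
}

def faq_bot(message: str) -> str:
    """FAQ bot: match the message to a canned answer, or give up."""
    for pattern, answer in FAQ.items():
        if pattern in message.lower():
            return answer
    return "Sorry, I didn't understand. Please rephrase."

def book_appointment(message: str) -> str:
    # Hypothetical action handler: in a real build this would call a booking system.
    return "Booked: next available slot confirmed by email."

def escalate_to_human(message: str) -> str:
    # Hypothetical handover: passes the full conversation, not just the last line.
    return "Connecting you to a colleague with the full conversation history."

def outcome_agent(message: str) -> str:
    """Outcome agent: resolve the situation, acting or escalating as needed."""
    text = message.lower()
    if "book" in text or "appointment" in text:
        return book_appointment(message)      # takes action
    if "opening hours" in text:
        return FAQ["opening hours"]           # answers when an answer suffices
    return escalate_to_human(message)         # routes with full context
```

The point of the sketch is the last branch: where the FAQ bot dead-ends into "please rephrase", the outcome agent still moves the customer forward.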
The Transcard chatbot started with a tightly defined FAQ scope built from their complete knowledge base. After more than a year of continuous operation, it handles 48% of all routine FAQ inquiries automatically. The custom KPI dashboard gives the team full visibility into what is being asked, what is being resolved, and what needs updating. The 48% containment rate did not appear on day one — it grew through iteration, guided by real data.
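A containment rate like that is only actionable if it is measured continuously. As a rough sketch of the metric behind such a dashboard — the log format and field names here are assumptions for illustration, not Transcard's actual schema:

```python
from collections import Counter

# Assumed interaction log: one record per conversation,
# flagged by whether the bot resolved it without human handover.
interactions = [
    {"intent": "reset_password", "resolved_by_bot": True},
    {"intent": "invoice_dispute", "resolved_by_bot": False},
    {"intent": "reset_password", "resolved_by_bot": True},
    {"intent": "change_address", "resolved_by_bot": False},
]

def containment_rate(log):
    """Share of inquiries fully resolved with no human handover."""
    return sum(i["resolved_by_bot"] for i in log) / len(log)

def top_unresolved(log):
    """Intents that most often escape the bot -- candidates for the next iteration."""
    return Counter(i["intent"] for i in log if not i["resolved_by_bot"]).most_common()

print(f"Containment: {containment_rate(interactions):.0%}")
print("Needs work:", top_unresolved(interactions))
```

Tracking the unresolved intents, not just the headline percentage, is what turns the dashboard into an iteration loop rather than a vanity metric.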
That is what an outcome agent looks like in operation: a system that starts with a clearly defined problem, measures its performance against defined outcomes, and improves over time. The 42% of companies that cancelled their AI initiatives in 2025 mostly did not do these things. The ones that maintained and grew their deployments did.
The question is not whether AI can work for your customer service. The question is whether you are willing to define the problem clearly, measure the outcome honestly, and iterate until the numbers justify expansion. That discipline is what separates the 58% of AI initiatives that survive from the 42% that do not.
No hype. Just an honest conversation about what AI can do for your business — and how fast.
Book a Free Call