AI platforms will become so advanced that explaining their decisions to retail banking customers will be secondary to the service provided, according to market participants – though developers and corporate users will need more understanding than ever before.
“AI will eventually achieve a level of complexity that makes it virtually indistinguishable from human customer support,” says Henry Vaage Iversen, co-founder of AI firm Boost.Ai. “In this instance, as long as customers are delivered the best possible experience – whether it is coming from straight AI or a human using artificial intelligence to augment their knowledge – the ‘how’ of that experience being delivered will matter less.”
Barbara Johnson, chief operations officer at Kortical, agrees that customers will care less about AI explainability if the system is accurate. “However, if they feel the AI is being unfair or biased, they will want to know why. As a result, it is now crucial that those responsible for building and maintaining AI-powered solutions understand why their AI is making every decision on an aggregate and individual level.”
The UK’s Financial Conduct Authority (FCA) announced in July that it would be partnering with the Alan Turing Institute to develop the explainability and transparency of artificial intelligence in the financial services sector.
“The challenge is that explanations are not a natural by-product of complex machine learning algorithms. It’s possible to ‘build in’ an explanation by using a more interpretable algorithm in the first place, but this may dull the predictive edge of the technology,” said Christopher Woolard, executive director of strategy and competition at the FCA, in a speech.
“So, what takes precedence – the accuracy of the prediction or the ability to explain it? These are the trade-offs we’re going to have to weigh up over the months and years ahead. For example, if a mortgage or life insurance policy is denied to a consumer, we need to be able to point to the reasons why.”
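The ‘built-in’ explanation Woolard describes can be illustrated with an interpretable model. The sketch below is a minimal, hypothetical example – the feature names, weights and threshold are invented, not from any real lender – showing how a linear model lets the reasons for a denial be read directly off its per-feature contributions.

```python
import math

# Hypothetical, hand-set weights for an interpretable credit model.
# Feature names and values are illustrative only, not from any real lender.
WEIGHTS = {"credit_score": 0.8, "debt_to_income": -1.2, "years_employed": 0.5}
BIAS = -0.3

def score(applicant):
    # Linear score: each term is one feature's signed contribution,
    # so the model's reasoning can be read off directly.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    z = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-z))  # logistic link: probability of approval
    return prob, contributions

applicant = {"credit_score": 0.2, "debt_to_income": 0.9, "years_employed": 0.1}
prob, contribs = score(applicant)
decision = "approved" if prob >= 0.5 else "denied"
# The reasons for a denial are the most negative contributions.
reasons = sorted(contribs.items(), key=lambda kv: kv[1])
```

For this (invented) applicant the model denies the application, and the sorted contributions point at the high debt-to-income ratio as the dominant reason. A deep neural network would not yield such a decomposition for free – hence the trade-off Woolard raises.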
“Any AI worth considering in the retail space should be able to accurately predict customer intent,” says Iversen. “What it then does with that prediction is key to delivering on a great customer experience.
“Not only should it be able to discern what a customer is asking, but it needs to be able to make the right call on how to respond – be it instantly with the right information or by accurately transferring to a human operator if the query is beyond its scope. When it comes to customer experience, accuracy and explainability of information should not be mutually exclusive.”
For Johnson, both need to go hand in hand. “By being able to explain the results you can build better features, which in turn leads to better models with more accurate predictions. Having explainability also means that a data scientist and domain expert can work together in building the AI models, which means that the results will be better quicker. It is the marriage of both sides of the model building process that creates the best results.”
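One common way explainability feeds back into feature building, as Johnson describes, is permutation importance: shuffle one feature and see how much accuracy drops. The toy example below is an assumed setup – the stand-in model and dataset are invented for illustration – but the technique itself is standard.

```python
import random

random.seed(0)

def model(x):
    # Stand-in "trained model": in this toy it only ever reads feature "a".
    return 1 if x["a"] > 4 else 0

# Tiny synthetic dataset; labels come from the same rule, so baseline
# accuracy is perfect and any drop is due to the shuffled feature.
data = [{"a": i % 10, "b": (i * 3) % 7} for i in range(200)]
rows = [(x, model(x)) for x in data]

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature):
    # Shuffle one feature's values across rows and measure the accuracy drop.
    shuffled = [x[feature] for x, _ in rows]
    random.shuffle(shuffled)
    permuted = [({**x, feature: v}, y) for (x, y), v in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)

drop_a = permutation_importance(rows, "a")
drop_b = permutation_importance(rows, "b")
# A large drop flags a feature the model truly depends on; a near-zero
# drop (feature "b" here) tells the data scientist it adds nothing.
```

In this sketch shuffling "a" costs the model substantial accuracy while shuffling "b" costs nothing, which is exactly the kind of signal that lets a data scientist and a domain expert decide together which features to keep, refine or replace.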
Dr Mark Goldspink, CEO of The ai Corporation, believes that the accuracy of a prediction likely takes primary position. “From an engineering point of view, understanding why the technology is giving particular results helps to accelerate the future product roadmap. AI technology is designed to help facilitate the decision-making process for all stakeholders. Essentially, it optimizes the quality of the decision, at the right cost, to provide the most profitable outcome.”
The FCA carried out a survey with the Bank of England as part of the two regulators’ continued investigations into the use of AI. According to Woolard, the survey suggested “that the use of AI in the firms we regulate is best described as nascent. The technology is employed largely for back office functions, with customer-facing technology largely in the exploration stage.”
Goldspink doesn’t agree. “As of today, machine learning is enhancing many of the things we do and take for granted. From helping us determine a good deal on a car and providing music playlists, to aiding more behind-the-scenes processes – such as payment fraud detection within banks.
“However, while machine learning is being used extensively, it is not being used efficiently or, indeed, to its full potential. In many automation projects, the machine learning component usually only constitutes a small portion of the overall process, which is then completed manually by rapidly expanding support teams.”
Iversen also believes that AI and machine learning have moved past an exploratory stage. “We are seeing an incredible uptick in customer-facing conversational artificial intelligence. In Scandinavia, large banks and financial institutions are already using this technology to improve interactions with their customers by offering quick and frictionless access to their brands, 24/7.
“We expect this to blossom further in the immediate future as advances in machine learning and natural language understanding will allow for AI to perform tasks (such as reporting a stolen credit card or upgrading a mobile data package) that previously required human assistance.”
For Johnson, the adoption of real, live AI solutions has been slow. “[This is] in part down to a lack of explainability and the difficulty of turning an AI model into software. Both issues are being addressed by innovative platforms to ensure that senior buy-in is strong, as there aren’t crucial customer decisions being made by an unknown model.
“One-click production means that finding data engineers isn’t necessary, and hosting all the different environments to integrate the AI into current systems and apps is seamless. Going from idea to live models will take days, not months or years.”