Generative AI: a new dawn of challenges and opportunities for banks
By James King | Published in Global Risk Regulator | May 2023
The technology that drives ChatGPT could revolutionise how banks manage complex and time-consuming processes in areas like compliance, but it will come with risks.
In 2017, the chief executive of tech giant Nvidia, Jensen Huang, noted that a “Cambrian explosion” of artificial intelligence (AI) models was underway. The comment went largely unnoticed outside US technology circles. But six years later, the world is bearing witness to the after-effects of this moment, as generative AI platforms like ChatGPT seize the spotlight and affect nearly every sector of the economy.
Generative AI has opened the door to a world in which entirely new content, including text, images and audio, among other things, can be created by machine-learning algorithms. This capability is helping to break down the walls between human and machine, opening up new methods of communication. It is also what separates generative AI from earlier generations of AI.
“Now we’re actually getting to this point where the computer for the first time can interact in the native language of humans. That’s one way to understand what’s happening here,” says Nick Lewins, an independent data and AI advisor who was previously the global financial services lead at Microsoft.
Looking ahead, this will have serious implications for the conduct of business; research from Goldman Sachs suggests around 300 million jobs will be affected. This chimes with the observation of former US Treasury Secretary Lawrence Summers, who noted in April this year that ChatGPT, the most well-known version of generative AI on the market, was coming for the jobs of the “cognitive class”.
For banks and other financial institutions in particular, this new dawn will offer an unprecedented mix of opportunities and challenges. Though many of the world’s larger and more sophisticated lenders have been using other forms of AI for years, it has mostly been related to tabular data, used to determine things like credit risk. As such, the recent surge in generative AI applications represents the next, irreversible step in the evolution of the digital economy.
The most obvious near-term use cases include aspects of lenders’ core regulatory and compliance functions, which offer plenty of scope for disruption. Citigroup estimates that the world’s leading banks spend something close to $270bn a year on compliance, reflected in the fact that many lenders have doubled the headcount of their compliance and regulation teams.
For compliance teams that have to manually contend with growing volumes of transaction data, the advent of generative AI will deepen efficiencies across the board. Not only can these teams rely on established pattern-recognition techniques, but they can also use these new AI systems to interrogate the data in front of them, improving their approach to fraud detection, for example.
“The amount of payment data coming through [to banks] has grown exponentially over the years. There is an almost complete shift over to digital payments, so banks need new tools to handle this [from a fraud detection perspective],” says Oliver Tearle, head of innovation technology at The ai Corporation in the UK.
“The advent of generative AI really spices things up. It will enable people to directly query large volumes of [transaction] data including the type of payment methods being used, as well as the volume of payments in specific situations. The AI will be able to generate reports that summarise these queries. That will also apply for real-time data as well,” says Mr Tearle.
In a sign of how the compliance market is changing, the ai Corporation has recently launched a machine-learning fraud prevention tool capable of generating fraud models and associated detection rules. It can then, independently, make suggestions to financial institution fraud managers to improve the performance of the fraud strategy. “This basically removes all the manual effort involved in managing the strategy,” says Mr Tearle.
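To make the idea concrete, the snippet below sketches the kind of aggregation a generative system might produce from a natural-language prompt such as “summarise payment volumes and fraud flags by payment method”. The field names and figures are invented for illustration; a real deployment would run against millions of live transaction records.

```python
import pandas as pd

# Hypothetical transaction records; real systems would query millions of rows
transactions = pd.DataFrame({
    "payment_method": ["card", "card", "wallet", "transfer", "card", "wallet"],
    "amount": [120.0, 75.5, 300.0, 1500.0, 42.0, 88.0],
    "flagged": [False, True, False, False, False, True],
})

# Summarise volume and fraud flags per payment method -- the sort of
# query a generative model could emit and then narrate as a report
summary = transactions.groupby("payment_method").agg(
    total_volume=("amount", "sum"),
    txn_count=("amount", "count"),
    flagged_count=("flagged", "sum"),
)
print(summary)
```

The point of the generative layer is not the aggregation itself, which is routine, but that a fraud manager could request it in plain English and receive a written summary alongside the numbers.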
This capacity to minimise manual effort and optimise complicated tasks means the potential use cases for generative AI in banking are vast. From automating much of the regulatory reporting process to introducing efficiencies when it comes to customer service, lenders will only be constrained by their thinking when it comes to deploying the technology.
“In the context of banking, generative AI can be used for a variety of applications, including fraud detection, customer service and investment management,” says Aruna Pattam, head of AI analytics and data science at Capgemini, a multinational technology services and consulting company.
According to Ms Pattam, generative AI will also be effective when it comes to banks’ approach to data management and analysis. “Generative AI can help banks protect their customers’ personal and financial information by creating synthetic data that can be used for testing and analysis. This approach can improve the security of banks’ systems while still leveraging the power of data analysis to enhance customer experiences and prevent fraud,” she says.
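A minimal sketch of what such synthetic test data looks like, assuming invented field names and hard-coded distributions; in practice a generative model would learn these distributions from real records rather than have them written by hand, which is the whole point of the technique.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def synthetic_transactions(n):
    """Generate fake transaction records that preserve realistic shapes
    (amount ranges, payment-method mix) without any real customer data."""
    methods = ["card", "wallet", "transfer"]
    return [
        {
            "customer_id": f"SYN-{i:06d}",  # synthetic identifier, not a real customer
            "payment_method": random.choice(methods),
            "amount": round(random.lognormvariate(3.5, 1.0), 2),
        }
        for i in range(n)
    ]

data = synthetic_transactions(1000)
print(data[0])
```

Because no record corresponds to a real person, the dataset can be shared with testing and analytics teams without the privacy controls real customer data would require.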
Despite the advantages presented by generative AI systems, leading global banks have, at least publicly, been reticent about the technology. In February this year, Bloomberg reported that Deutsche Bank, Bank of America, Goldman Sachs and Wells Fargo were among a number of lenders to limit or prohibit the use of ChatGPT by their staff. Three of Japan’s mega banks — MUFG Bank, Mizuho Bank and Sumitomo Mitsui Banking Corporation — have adopted similar positions.
This apparent reluctance to engage with generative AI systems has little to do with the underlying technology, according to Mr Lewins. Rather, it reflects banks’ software as a service (SaaS) policies, whereby an external SaaS system like ChatGPT has not been subject to any due diligence scrutiny, and through which staff may be inadvertently leaking sensitive strategy documents or customer data, among other things.
Instead, banks are increasingly leaning on trusted big technology partners to harness the power of generative AI. Microsoft’s Azure cloud computing platform is a case in point. In March, the US technology giant started offering ChatGPT through its Azure OpenAI Service: a tailored enterprise version of the ground-breaking AI platform that can be integrated with companies’ existing platforms and data, and that automatically opts customers out of having their data used to train the underlying model.
In February, Australian banking giant Westpac announced a deepening of its existing partnership with Amazon Web Services, involving the provision of generative AI models to support a range of functions across the lender’s business lines. Meanwhile, in March, Goldman Sachs announced that it was using generative AI models internally to help its software engineers to develop code, without specifying which product it was using.
Banks’ ability to adapt to, and engage with, generative AI stems in part from their long history of working with previous AI models. This will be important moving forward because generative AI brings with it a fresh set of risks and challenges, including potentially biased or unreliable underlying algorithms. For this reason, many organisations at the frontier of generative AI use are adopting a “co-pilot” approach to the technology, by always keeping a human at hand.
“As soon as you get to the point of taking the human out of the loop and saying this is fully automated, you’re in big trouble because these methods are probabilistic by their very nature. They’re always going to get some percentage of things wrong,” says Mr Lewins.
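The co-pilot pattern Mr Lewins describes can be reduced to a simple routing rule: act automatically only when the model is confident enough, and escalate everything else to a human reviewer. A minimal sketch, where the threshold and labels are illustrative rather than drawn from any real system:

```python
REVIEW_THRESHOLD = 0.90  # illustrative cut-off; in practice set by the firm's risk appetite

def route_decision(label, confidence):
    """Route a probabilistic model output: auto-apply only when the model
    is confident, otherwise keep a human in the loop."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", label)
    return ("human_review", label)

print(route_decision("fraud", 0.97))  # -> ('auto', 'fraud')
print(route_decision("fraud", 0.62))  # -> ('human_review', 'fraud')
```

The design choice is deliberate: because the model is probabilistic and will always get some percentage of cases wrong, the threshold bounds how much of that error rate reaches customers unreviewed.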
Even so, from a risk management perspective, banks are in pole position in terms of their ability to deal with an increasingly AI-dominated world. “Out of all the verticals, financial services, I would argue, has the best-developed risk management framework around [other] forms of AI, because financial services firms use these systems in a systematic way to do really important and really risky things, like predict credit and predict liquidity buffers,” says Mr Lewins.
“[Banks] that have implemented advanced internal models under the Basel II and Basel III framework will already have put in place a model risk management function. Some of the most interesting people in this space are people that actually run the model risk management function at these institutions,” he adds.
This bodes well for the future, particularly as banks and other players encounter increasingly anxious national authorities. Global regulators are, so far, taking very different positions on the question of generative AI.
The EU, for example, looks set to pass its ground-breaking AI Act later this year. The US and UK, meanwhile, have so far opted for a less codified approach, although other markets are taking a far more extreme position: in April, Italy became the first country in Europe to impose a market-wide ban on ChatGPT.
“If, as a regulator, you take a hard stance and say we will stop this or we’ll do a six-month moratorium, you will lose in the long run. It’s just that competitive now. And so we think that that’s a dangerous game to attempt to block these capabilities,” says Dan Doney, chief technology officer at Securrency, a blockchain compliance firm, and former chief intelligence officer of the US Defense Intelligence Agency.
Indeed, banning generative AI models will not be a viable path forward because, beyond the technology’s singular merits, it is already intersecting with other innovations to open up new growth avenues for the wider financial system. One example is the way in which blockchain and smart contracts can leverage generative AI to automate and embed compliance within any transaction.
Securrency, for example, is looking at ways to use generative AI to translate global financial regulations into code, as part of a smart contract. Today, it is actively engaging with regulatory authorities in the US and the UAE to demonstrate its approach.
“Blockchain actually allows you to have very deterministic and verifiable rules that can say the movement of value can only happen under proper conditions,” says Mr Doney.
“But what if a machine could read the regulations, globally, translate and encode these rules and enforce them with the absolute certainty of an immutable ledger? That’s exciting. Because now you can actually automate regulatory enforcement at scale.”
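The “deterministic and verifiable rules” Mr Doney describes might look like the sketch below: a hand-written check of the conditions under which value may move. The jurisdictions, threshold and reasons are invented for illustration, not taken from any real rulebook or from Securrency’s implementation; the idea being tested is that a machine could one day derive such rules directly from regulatory text.

```python
# Hypothetical compliance rules of the kind a machine might one day
# derive from regulatory text; these values are invented for illustration.
SANCTIONED_JURISDICTIONS = {"XX", "YY"}
REPORTING_THRESHOLD = 10_000  # e.g. a large-transaction reporting trigger

def transfer_allowed(sender_kyc_verified, jurisdiction, amount):
    """Deterministically decide whether a movement of value may proceed,
    mirroring the 'proper conditions' check a smart contract would enforce."""
    if not sender_kyc_verified:
        return False, "KYC not verified"
    if jurisdiction in SANCTIONED_JURISDICTIONS:
        return False, "sanctioned jurisdiction"
    if amount > REPORTING_THRESHOLD:
        return True, "allowed, flagged for reporting"
    return True, "allowed"

print(transfer_allowed(True, "GB", 2500))
```

Encoded on an immutable ledger, a rule like this is enforced identically on every transaction, which is what makes automated regulatory enforcement at scale plausible.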
The advent of generative AI is set to shake up the global economy like few other technological advances have done in recent decades. But, on balance, the banking sector and wider financial services system have much to gain. By drawing on previous experience with older iterations of AI, while reaping the operational benefits of new generative AI models, banks could be well-placed to withstand the powerful changes that are set to rock the global economy in the coming months and years.