This Is How AI Can Boost Financial Services

April 24, 2024

 

This article first appeared in April’s print edition of Business Monthly.

When ChatGPT, a free AI chatbot, launched in late 2022, it put the spotlight on the entire artificial intelligence (AI) landscape. “Developments in AI have provoked a mixture of excitement and anxiety among commentators, politicians, policymakers and members of the public,” Jana Mackintosh, managing director of payments, innovation and resilience at UK Finance, a trade association, said in a November report.

An August report from The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, stressed: “In its various forms, from simple rule-based systems to advanced deep learning models, AI represents a paradigm shift in technology’s role in finance.” Such shifts bring significant benefits, threats, pitfalls, implementation challenges, and the need for careful regulation.

New lenders

AI is particularly beneficial for financial services companies, especially in volatile business, monetary and fiscal environments. “AI-based models are increasingly used for automated decision-making in lending,” noted the institute’s report. “They can significantly improve the credit risk assessment of a loan applicant due to their reliance on diverse, often non-traditional, data sets.”
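To make the idea concrete, here is a minimal Python sketch of the kind of scoring model the report describes, one that blends a traditional feature (debt-to-income ratio) with a non-traditional one (utility-bill payment history). The feature names, weights and bias are entirely hypothetical; a real lender would learn them from historical repayment data rather than set them by hand.

```python
import math

# Hypothetical weights for a toy credit-scoring model; in practice these
# would be learned from historical repayment data, not hand-picked.
WEIGHTS = {
    "debt_to_income": -2.0,      # traditional feature: higher ratio lowers the score
    "on_time_utility_pct": 1.5,  # non-traditional feature: utility-bill payment history
    "years_of_history": 0.3,     # traditional feature: length of credit record
}
BIAS = -0.5

def repayment_score(applicant: dict) -> float:
    """Logistic model: combine weighted features into a 0-1 repayment likelihood."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # sigmoid squashes the sum to a probability

applicant = {"debt_to_income": 0.4, "on_time_utility_pct": 0.9, "years_of_history": 2}
score = repayment_score(applicant)
print(round(score, 3))
```

The non-traditional feature is what lets such a model assess an applicant with a thin or nonexistent credit file, which is exactly the territory the report calls uncharted.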

Fintech companies benefit the most from AI, given the technology’s “capacity to handle a larger variety of data, [allowing them] to venture into territory that has, up until now, been uncharted.”

Commercial banks usually do not use AI to assess small borrowers “because they believe that [their] low likelihood of payback and potentially high loan risk will not even cover the evaluation costs,” the report said.

In either case, AI “can lead to a reduction in operational costs [by minimizing] loan default rates.” It also enables “improved customer targeting.”

Banks and fintechs also can use AI to “automate key business processes in customer service and insurance.” The technology allows them to “make easy wins in key areas such as untapped client segments, [which] improves financial inclusion … lower acquisition costs, stronger usage of existing products and services, and improved access and scale by adopting an AI-first approach to customer interaction.”

AI also benefits bank and non-bank traders, especially in “equity trading and, more recently, trading in the foreign exchange market, [as the technology] can identify unexpected market trends within a limited time.”

The technology also helps ensure regulatory compliance, prevent illegal insider trading and detect fraud thanks to its “real-time monitoring of data, which is paramount for the timely detection of suspicious activities.” Additionally, AI can help “strengthen cybersecurity resilience [by] providing better protection from social engineering attacks, such as phishing.”
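A minimal sketch of that real-time monitoring idea, in Python: flag any transaction that deviates sharply from an account’s recent history. The window size and threshold are illustrative; production systems use far richer models than a rolling z-score.

```python
from collections import deque
import statistics

class TransactionMonitor:
    """Toy stand-in for real-time fraud monitoring: flag transactions
    that deviate sharply from an account's recent spending history."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.recent = deque(maxlen=window)  # rolling window of recent amounts
        self.threshold = threshold          # std-devs from the mean that count as suspicious

    def check(self, amount: float) -> bool:
        suspicious = False
        if len(self.recent) >= 5:  # need a minimal history before judging
            mean = statistics.mean(self.recent)
            stdev = statistics.pstdev(self.recent) or 1.0
            suspicious = abs(amount - mean) / stdev > self.threshold
        self.recent.append(amount)
        return suspicious

monitor = TransactionMonitor()
normal = [monitor.check(a) for a in [20, 25, 22, 19, 24, 21, 23]]
outlier = monitor.check(5000)  # far outside the account's usual range
print(any(normal), outlier)
```

The everyday purchases pass through unflagged, while the 5,000 outlier trips the alert immediately, which is the “timely detection of suspicious activities” the report has in mind.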

Risky technology

Using borrowers’ data stored in financial organizations and their public digital footprints to support decision-making raises concerns. “Researchers and practitioners point out that the use of AI comes with many threats [and] potential pitfalls,” the Turing Institute report said. “Therefore, organizations, users and regulators must remain … vigilant of the potential drawbacks associated with using AI to ensure this technology is utilized fairly and efficiently.”

The first risk is data privacy breaches and abuse. “It is imperative to ensure that data is collected and processed in compliance with relevant data protection regulations, including the [EU]’s General Data Protection Regulation and other industry-specific regulations,” the report stressed.

Another risk is that “the decision-making process of AI models is often compared to a ‘black box’ [where] users are unable to comprehend how the system operates, makes decisions and the underlying reasons behind those decisions.” That creates a “challenge in identifying errors and biases in the system, which may result in inaccurate or unjust decisions.”

Additionally, AI systems are not held accountable for making wrong decisions. “This becomes particularly problematic when AI is employed to make critical decisions with important implications, such as assigning falsely bad credit scores or [denying] access to a loan.”

Depending too much on AI can significantly harm employees in banks and financial institutions, as it “diminishes human skills and discourages employees from developing the necessary skills to make decisions independently. Implementing AI on a large scale … particularly in commercial banks will likely result in job displacement for many workers, as automation of routine tasks replaces human tasks.”

Lastly, The Alan Turing Institute report stressed, “Researchers and practitioners have warned that using AI can increase [systemic] risk.” Such risks would likely appear in “unstable markets, [causing] increased volatility … which can create spillover effects and increase systemic risk.”

Implementation challenges

The first implementation challenge facing financial institutions when using AI is the availability of training data. “The more data the AI model has access to, the more accurate and reliable its predictions and decisions will be.” That requires complete digitization of a financial organization’s operations to ensure that all the organization’s data can be used to train the AI models effectively.

Data quality is also “paramount when training AI models,” the report said. “If the data used is incomplete, inaccurate, biased or inconsistent, it can negatively affect the model’s performance and lead to inaccurate or unfair predictions.” That is why “financial organizations and regulators are increasingly … generating synthetic data” to counter deficiencies in real-world data.
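What checking for incomplete or inconsistent records might look like in practice, as a small Python sketch: a quality gate that rejects flawed loan applications before they reach model training. The field names and plausibility ranges are hypothetical examples, not the report’s.

```python
# Toy data-quality gate: reject records that are incomplete or inconsistent
# before they reach model training. Field names and ranges are illustrative.
REQUIRED = {"income", "loan_amount", "age"}

def quality_issues(record: dict) -> list[str]:
    """Return a list of problems found in a record; empty means it is clean."""
    issues = []
    missing = REQUIRED - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("age", 0) and not (18 <= record["age"] <= 120):
        issues.append("implausible age")
    if record.get("loan_amount", 0) < 0:
        issues.append("negative loan amount")
    return issues

records = [
    {"income": 45000, "loan_amount": 10000, "age": 34},  # clean
    {"income": 52000, "loan_amount": -500, "age": 29},   # inconsistent value
    {"income": 38000, "age": 210},                       # incomplete and implausible
]
clean = [r for r in records if not quality_issues(r)]
print(len(clean))
```

Only the first record survives the gate; training on the other two would feed the model exactly the incomplete and inconsistent data the report warns against.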

The next challenge is to select a suitable system, as “no single AI algorithm is effective for all problems. Using an unsuitable [one] can result in poor performance, inaccurate predictions, or even the inability to solve the problem.”

Financial institutions also need to invest in replacing legacy infrastructure, as they “may not possess the necessary processing power or storage capacity to effectively train and operate AI models … This can lead to longer processing times and reduced accuracy.”

Upskilling employees is also vital to the AI ecosystem, as “many AI systems require specialized programming, data analytics and machine learning knowledge,” the institute report said. “Without these skills, employees may encounter difficulties comprehending how to properly use and interpret the outcomes produced by AI systems.”

Organizational traits, such as “agility and adaptability,” are also vital when managing AI risks amid increasing competition from other financial organizations that use AI. “Using AI can [change] how businesses operate and make decisions, which can require adjustments to existing processes and structure.”

Everyday AI 

Adopting AI, particularly in the heavily regulated financial sector, requires laws to effectively govern its use. “A fundamental aspect of good financial regulation is enhancing public trust by ensuring markets function well,” the report noted.

Those laws need to assess “the implications for consumers … regarding how their data might be used.” The other reason to regulate AI is “competition concerns, … especially smaller firms looking to compete with well-established tech firms that start providing financial services.”

AI laws should also ensure “market integrity [and] the implications of financial stability.” Regulation must also provide “operational resilience” and prevent “cyberattacks within a rapidly growing dependence on technology.”

The next vital, and potentially tricky, step is for regulators themselves to use AI to govern financial organizations that use it. “The move toward predictive supervision … will bring benefits to consumers and markets through quicker prevention of harm … through more efficient targeting of supervisory resources.”

However, the report said this approach “poses numerous challenges that mirror those relevant to the industry.” The solution lies with “regulators [who] must always be mindful of their oversight role and ensure that they exhibit appropriate behaviors in their use of AI.”