The Fintech Times previously heard from the industry how generative artificial intelligence (AI) could be seen as both a blessing and a curse. Though it has the potential to massively help in the fight against fraud, in the wrong hands it could be detrimental. Further analysing the impact the technology can have on compliance, risk management and security, we hear from Agent IQ, SymphonyAI Sensa-NetReveal, Accion, Market Research Future, Clearbank, and Pulse.
Not the top option, however…
Slaven Bilac, CEO and co-founder at Agent IQ, the AI solution provider for digital banks, analyses the impact generative AI has on compliance and security. While it is not the best use for the technology, it can certainly be applicable.
“Given the hype around ChatGPT and the generative AI it is based on, a natural question that arises is what are the limits of the technology and what practical problems can it solve for? While the fluency and breadth of responses produced by generative AI seem universal, it is important to remember that generative AI, at its core, is a language and communication technology. It is not really suitable for tasks which are not language/communication related.
“For tasks like security and risk management, different approaches are preferable. e.g anomaly and adversary detection, as well as various regression models for estimating probabilities.
“Nonetheless, when it comes to applying generative AI (and ignoring the question of whether that is the best use), there are some potential applications to enhance fintechs’ approach to compliance, security and risk management.
“For compliance, you could use generative AI to produce both analytical questions and detailed responses to augment data aggregation and decision making, but it would still be advisable to human verify appropriateness of each compliance related question. For KYC applications, generative AI can help create prompts for screening and identification, as well as verify them using an existing customer database.”
Combating sophisticated cyber threats
In a similar vein to Bilac, Charmian Simmons, fincrime and compliance expert, SymphonyAI Sensa-NetReveal, the software and solutions for regulatory compliance provider, highlights the instrumental impact of ChatGPT in the adoption of generative AI.
“The rise of AI models, such as ChatGPT, has profoundly impacted many industries, including fintech. Experimental AIs has quickly evolved to demonstrate tangible benefits in optimisation, efficiency, anomaly detection and pattern behaviour.
“This is seen in the use of both generative AI and predictive AI as a power combination for fintechs. For example, in anti-money laundering compliance, generative AI copilots assist case investigators by collating and presenting data about alerts or suspicions, allowing investigators to ask questions, receive precise information and summaries, and draft narratives for regulatory reporting.
“Generative AI is transforming cybersecurity and infosec teams’ security approaches by providing advanced threat detection, real-time monitoring, and adaptive defence mechanisms. Various AI-powered tools enhance their ability to combat sophisticated cyber threats, safeguard customer data, and maintain a resilient security posture in the dynamic digital landscape.
“Risk management functions also use AI-powered solutions to analyse vast datasets, identify patterns, and predict potential risks. Most notable are real-time risk assessment capabilities for proactive risk mitigation and decision-making, enhancing overall portfolio performance, and generative AI’s ability to adapt and learn from new data to ensure continuous improvement in learnings and result outcomes. These make AI an invaluable tool in managing evolving risks and uncertainties.”
The three big gaps in the global financial system
According to Jayshree Venkatesan, senior director, consumer protection and responsible finance at the center for financial inclusion, the independent think tank housed at Accion, the nonprofit fintech, there are three glaring errors in the global financial system if generative AI is too see strong uptake.
“Findex 2021 showed that in developing economies, 18 per cent of adults paid utility bills directly from their account. About one-third of these adults did so for the first time as a result of the pandemic. The number of first-time users has not been the only metric on the rise. Reports indicate that global attacks increased by 38 per cent in 2022 compared to 2021.
“Generative AI, and chatbots, when used by first-time users can often be the easiest vector for cyber-attacks. In the context of generative AI, the most commonly thought of risk is data privacy. However, the financial sector needs to be mindful of system-level risks that can be created if it provides access to the system through ‘prompt injections’ i.e malicious text or code that force the AI to execute things beyond the analysis it was meant to do.
“There are massive gaps in the global financial system that need to be addressed urgently:
- Define who will take responsibility to protect the global financial system against cyberattacks- this is easier said than done since the system is fragmented across multiple stakeholders- governments, financial institutions, supervisors and other industry players
- Create an international reporting mechanism to track every cyberattack as soon as it is detected
- Invest in a talent pool internationally that can be drawn upon by the most vulnerable in the system- individuals, countries and institutions- to build cyber resilience.”
“The use of generative AI for better risk management is a trend that has been gaining traction in the banks. MRFR predicts that by 2025, 22 per cent of all test data for consumer-facing use cases will be synthetically generated through generative AI.
“With applications ranging from fraud detection and trading prediction to synthetic data generation and risk factor modelling, generative adversarial networks (GANs) and natural language generation (NLG) are increasingly being used in banking and investment services. In these situations, generative AI has the ability to create new degrees of personalisation in financial services and client experiences, spurring the market for innovation and efficiency.”
Effective use of generative AI won’t happen overnight
Although there may be an understanding that simply integrating a generative AI solution into your platform will solve a variety of compliance and risk management problems, Bernard Wright, CISO at paytech ClearBank, points out that taking the time to learn the technology will in the longer term yield the better results.
“When it comes to compliance, generative AI has the potential to radically improve fraud prevention by enhancing anti-money laundering (AML) and Know Your Customer (KYC) processes and protocols. But during these early days of generative AI, fintechs should take the time to learn the capabilities and risks associated with the technology, and carefully consider how they implement it, if at all, into their own operations.
“It’s worth considering specific internal use cases when considering generative AI, and how it can help improve productivity within the company itself. For example, companies that hold huge amounts of monitoring data in security information and event management (SIEM) generative AI can help narrow down queries without the need to be an expert at the associated tool. Thus getting to the answer much quicker.
“Overall, generative AI should be seen as an opportunity for fintechs, but they should approach it with consideration and caution.”
Generative has the potential to revolutionise the fintech industry, however, it must ensure it doesn’t generate biased results. Chirag Shah, founder and CEO of Pulse, the data insights provider notes: “Generative AI is not just having a major effect on fintechs’ risk management, security and compliance strategies. It’s creating risks in almost every industry you can think of. The main risks are: bias or adversarial machine learning; data privacy and security; and regulatory risks.
“If an AI model is trained using biased data it results in a biased outcome. It can also produce discriminatory texts and images. That means fintechs have to eliminate bias by regularly checking and, if necessary, changing the data the model is trained on. They must also be wary of data poisoning, where malicious data is introduced into generative AI models, and obfuscated data being revealed using these models, which, again, requires the data to be constantly scrutinised.
“As far as data privacy and security is concerned, generative AI has the potential to create synthetic data that is indistinguishable from real data. That makes it easy for AI to circumvent any controls, thus exposing the user’s privacy and security.
“Added to these two risks, there are no clear requirements governing how to comply with regulations when using generative AI.
“Outside of these risks, fintechs need to gather large amounts of data in order to make better-informed decisions. But obtaining that data can be challenging, while a large amount of time and resources are often spent on ensuring the correct data sources.
“Despite these challenges, generative AI can, however, enable better risk management. The technology can be deployed to identify patterns in data that may indicate fraud. It can also be used to generate model simulations to analyse risks.”