AI is enjoying a peak moment, having been adopted across multiple industries. Banking is one sector embracing the technology, with banks harnessing the power of AI in areas such as customer service, loan management and even the Banking of Things (BoT).
But some of the most essential and complex services banks are legally bound to perform – such as fraud detection, anti-money laundering, identity verification, risk management, stress testing, and microprudential and macroprudential reporting – can now be carried out with the help of automation and AI, enabling financial institutions to monitor customer transactions and detect unusual activity far more easily.
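To make the transaction-monitoring idea concrete, the sketch below is a minimal, hypothetical illustration – not any bank's or vendor's actual system – of flagging unusual activity by comparing an incoming transaction against a customer's recent spending pattern. The field names and the z-score threshold are assumptions for the example only.

```python
from statistics import mean, stdev

def flag_unusual(transactions, new_amount, z_threshold=3.0):
    """Flag a new transaction if it deviates sharply from recent history.

    Illustrative only: a toy z-score check, not a production fraud system.
    transactions: list of recent transaction amounts for one customer.
    new_amount:   amount of the incoming transaction.
    Returns True if the transaction looks unusual (a candidate for review).
    """
    if len(transactions) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(transactions), stdev(transactions)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# Example: a customer who usually spends 20-60 suddenly sends 5,000.
history = [25.0, 40.0, 31.5, 55.0, 22.0, 60.0]
print(flag_unusual(history, 5000.0))  # True -> route to a fraud analyst
```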
The pandemic accelerated the pace of change across most industries, creating a greater need for banks to diversify their services and embed new technologies, partly in response to growing competition from tech firms and payment players in banking. In fact, according to a pre-pandemic survey from The Economist Intelligence Unit, 77 percent of bankers said they believed the ability to harness the value of AI would make the difference between success and failure for banks in the future.
In case studies encompassing both process automation and AI, Automation Anywhere (AA) has helped banks such as NextBank and Arab National Bank bolster their controls, increase productivity and mitigate risk.
Sharing his experience at Automation Anywhere’s recent Imagine conference in Austin, Texas, Mike Reynolds, business technology executive of service digitization at KeyBank, said: “Our focus this year, as many banks can attest to, has been risk identification, creating controls across the bank that monitor our biggest metrics and our most risk. We’ve got 62 automations in that space right now.”
Reynolds also shared more about the growing pace of technology the bank is pursuing: “From our humble beginnings in 2017, we’ve risen to 288 automations – that’s the equivalent of 500 full-time workers coming in every day. We have so many exciting areas. There are six main categories, and in loan services we have 100 running automations.”
But with demand for automation increasing, the added value that AI brings makes pairing the two technologies all the more compelling for the banking industry. AA chief product officer Adi Kuruganti explained that by combining the power of automation with generative AI, AA predicts a “surge” in the potential use cases that can now be automated. Research from McKinsey backs this up, suggesting companies can automate as much as 60-70 percent of their processes.
The downside? It creates a huge pipeline of work for developers. To supercharge the work of developers who are likely to be inundated with automation requests, the firm has added generative AI to its offerings, allowing developers and business users to generate new process automations from natural language prompts. The expansion also makes it possible to automate any generative AI use case in the user’s application of choice, and to build end-to-end automations from process discovery more quickly.
Showing this in action, AA demonstrated a case study in which a developer building an automation to investigate bank transactions gets a head start: a generative AI model translates the documented process steps into robust, near-complete automation code.
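As a rough illustration of that pattern – not Automation Anywhere's actual API – the sketch below assumes a hypothetical `complete()` callable wrapping whichever large language model an organization uses. The developer supplies a plain-language description of the process and receives draft automation steps to review and harden.

```python
from typing import Callable

PROMPT_TEMPLATE = (
    "You are an automation assistant. Convert the following process "
    "description into a numbered list of concrete automation steps:\n\n{task}"
)

def draft_automation(task: str, complete: Callable[[str], str]) -> str:
    """Return draft automation steps generated from a natural-language task.

    `complete` is a placeholder for an LLM completion call (assumed, not a
    specific vendor API); it takes a prompt string and returns generated text.
    The output is a starting point that a developer still reviews and finishes.
    """
    return complete(PROMPT_TEMPLATE.format(task=task))

# Hypothetical usage:
# steps = draft_automation(
#     "Investigate flagged bank transactions: pull the transaction record, "
#     "check it against the customer's history, and log a case if it looks "
#     "suspicious.",
#     complete=my_llm_client,  # assumed wrapper around a model of choice
# )
# print(steps)
```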
Risks of AI in banking
As AI and ML evolve further and take a more prominent place in the automations and processes banks entrust to them, it is natural to wonder about the risks and considerations that accompany the technology.
According to a recent International Monetary Fund paper on generative AI in finance and its risk considerations, GenAI presents risks around data privacy, embedded bias, robustness, explainability and broader threats to financial stability.
The paper notes that regulatory policy will evolve over time to guide financial institutions’ use of GenAI applications, but urges interim action, since GenAI requires close human supervision commensurate with the risks that could materialize from employing the technology in financial operations.
Speaking at Imagine 2023, Peter White, senior vice president of emerging products at Automation Anywhere, said: “Nearly every customer that I talk to sees the potential for GenAI, but there are things that are holding them back from realizing that today. There are risks to consider like data privacy and security, misuse and hallucinations.
“Many customers also feel like they just don’t have enough of the data science and ML resources they need to really take advantage of all the opportunities they see.”
For this reason, AA has introduced a set of AI tools, governance measures and best practices as part of its Responsible AI layer, with new capabilities aimed at helping organizations build AI-powered automations with the required guardrails in place.
In addition, the company will release data privacy controls to provide extra layers of protection for the enterprise, including data masking capabilities to keep sensitive data under control. There will also be monitoring and auditing capabilities, with generative AI analytics and command center-based audit tools, so users can track model performance and check for data privacy violations.
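Data masking itself is a simple idea. The snippet below is a generic, illustrative sketch – not a description of Automation Anywhere's controls – showing one common approach: replacing all but the last few digits of an account number before the value is logged or passed to a model. The account-number pattern is an assumption for the example.

```python
import re

# Assumed shape of an account number for illustration: 8-16 consecutive digits.
ACCOUNT_PATTERN = re.compile(r"\b\d{8,16}\b")

def mask_account_numbers(text: str, keep_last: int = 4) -> str:
    """Replace account-number digits with '*', keeping only the last few."""
    def _mask(match: re.Match) -> str:
        digits = match.group(0)
        return "*" * (len(digits) - keep_last) + digits[-keep_last:]
    return ACCOUNT_PATTERN.sub(_mask, text)

print(mask_account_numbers("Transfer from 12345678901234 flagged for review"))
# Transfer from **********1234 flagged for review
```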
While governments and institutions consider setting up more comprehensive and unified AI regulations to increase trust and reliability in the industry, proactive action from vendors can go a long way toward keeping this space safe and secure in the meantime.