Financial Services GenAI: A New Warning Challenges the Industry

By Sanjay Bhakta, VP & Head of Solutions
Financial services GenAI iconography overlaid on an image of a busy cityscape

Nearly every financial services executive surveyed by Ernst & Young in 2023 said they’re either already using or planning to use generative AI (GenAI) within their organization. Financial services GenAI promises to improve operations ranging from fraud detection to customer service. But with increased adoption comes increased risk.

The U.S. Treasury Department underscored this reality recently by publishing a report that warns financial services businesses about the cybersecurity risks associated with the adoption of AI, including GenAI. Financial services firms should consider the report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, as a clarion call to protect themselves with an airtight approach to financial services GenAI.

Main Findings of the Report

The report offers a deep dive into topics familiar to practitioners steeped in the use of AI and GenAI. For instance, it discusses four vulnerabilities that financial institutions should consider when implementing AI-based cybersecurity tools: data poisoning, data leakage during inference, evasion, and model extraction.

The report also delved into the sophisticated ways that bad actors (including organized crime rings) are using AI to breach financial services’ GenAI-based processes, including social engineering, malware generation, vulnerability discovery, and disinformation.

The Treasury Department also shared other alarming news: according to a survey of financial services executives conducted for the report, there’s a significant mismatch between the rapid development of AI technologies and the improvement of risk management frameworks employed by financial institutions. The report recommends financial institutions “identify, monitor, and control risks arising from AI use as they would for the use of any other technology.” But most financial services firms lack the resources to keep up with smaller, more agile bad actors.

Fortunately, third-party software purpose-built to protect processes that use financial services GenAI offer a solution. That is, if you adopt the right tools using a two-step approach.

Financial Services GenAI: How to Prevent Cybercrime

To outsmart bad actors, financial services firms need tools designed for financial services firms. This might seem obvious, but there’s much more to it than meets the eye. Success requires a cybersecurity team to both understand how GenAI affects every process and take a methodical approach to GenAI testing.

1. Understand How Every Process uses GenAI

Start by auditing your system to identify the departments using financial services GenAI. This includes mapping out which department is using which tools, where, when, and how, including a robust examination of data sources and flow routes. For example, how is the loan origination department using financial services GenAI to assess loan risks? 

2. Take a Methodical Approach to Testing GenAI

A methodical approach means using a vetted methodology and tools to systematically test all the ways a bad actor could compromise your GenAI-supported processes.

Financial services GenAI is trained on sensitive data and custom prompts to perform tasks. As the Treasury Department report points out, bad actors can use strategies like data poisoning to corrupt your GenAI model through its data.

It’s critically important that you test your financial services GenAI models for vulnerabilities to data poisoning. But, to do so, your organization will need the right process and tools.

The Importance of Proper Software

Real estate is about location, location, location and GenAI is about software, software, software. Let’s look at an example. Microsoft recently launched Azure AI Studio tools that can monitor and detect vulnerabilities in Microsoft’s other AI services. Features of those tools include:

  • Enhanced prompt shields that detect and neutralize attempts to manipulate an AI model through prompt injection, including a cutting-edge solution to identify subtle attacks.
  • Groundedness detection that pinpoints instances where a model generates responses that are inaccurate or not grounded in reality.
  • Safety system messages that provide a model with guidance, promoting outputs that are safe and ethically sound.
  • Safety evaluations that thoroughly test an application’s resistance to jailbreaking attacks and its potential for generating harmful content.
  • Risk and safety monitoring that helps admins gain insights into how specific inputs, outputs, and user interactions affect a company’s content filters, enabling the business to develop effective mitigation strategies.

Of course, this isn’t to suggest that a financial services organization can simply buy cure-all software and rest easy. But the tools now exist to make it possible to act on the recommendations of the Treasury Department report and build safe, responsible financial services GenAI.

What Comes Next: Protecting Your Financial Services GenAI from Harm

At Centific, we recommend that financial services firms take a holistic 360-degree approach to safeguarding their GenAI-supported processes. This approach goes beyond what we’ve already discussed to include strengthening your security posture with a zero trust architecture approach, deploying purple teaming to constantly test your security posture, and more. 

For the heavily regulated financial services industry, the consequences of failure are unacceptably high. The average cost of a data breach in financial services is second only to healthcare, according to the IBM 2023 Cost of a Data Breach report, and averages a shocking $5.9 million (USD).

That’s why Centific is here to help financial services firms like yours protect themselves while maximizing the benefits of GenAI. This comprehensive partnership begins with a data privacy assessment.

Protect Your Financial Services GenAI with a Data Privacy Assessment

Centific’s data privacy assessment thoroughly examines your GenAI applications and data repositories, prioritizing the protection of customer data across all regions. We’ll scrutinize employee access levels, reducing the potential for unauthorized access to sensitive information by over-privileged users.

The assessment doesn’t stop there—it will also evaluate the strength of your data security policies for both employees and customers. Our team will pinpoint whether your business enforces appropriate restrictions on data and application access within geographic boundaries, safeguarding against cross-border data-sharing violations.

The assessment also includes a deep dive into your business’s data lineage. This involves analyzing data sources, versioning practices, and access controls. We place special emphasis on the security and encryption of data used by financial services GenAI models—ensuring that even in the unlikely event of a breach, the data and its origins remain secure.

Adopt Smart Governance, Risk, and Compliance (GRC) Program Enforcement and Monitoring

A GRC program can help ensure the financial services GenAI models you select comply with regulatory requirements—all while mitigating risk. By establishing clear guidelines for model selection, development, and deployment, a GRC program helps you avoid regulatory fines and reputational damage.

Additionally, such a program can enable the identification of data that may increase risk exposure, leading to potential biases within models. A mature GRC program can help mitigate these biases, resulting in fairer and more accurate decision making across financial services. This ultimately fosters trust with customers and regulators alike.

As part of this comprehensive security assessment, Centific would empower you to calibrate a GRC program according to the enterprise’s level of risk tolerance. If needed, we would collaborate with you to design comprehensive data protection and privacy policies to manage the myriad of requirements ranging from confirming data sources to identifying sensitive information types.

This advanced approach would help ensure that your sensitive information is appropriately classified and that regulatory compliance is achieved across different geographies.

Achieve Safe and Reliable Financial Services GenAI Deployments

Our purple teams are intimately familiar with financial services GenAI and have experience proactively hunting for potential areas of exposure within large language models (LLMs). Through prompt-response experimentation, our purple teams excel at verifying the authenticity of the code produced by your LLMs and identifying vulnerabilities—including the new types of malware that GenAI can generate.

With the proper monitoring metrics to measure the level of risk resulting from LLM responses and crucial insights from our teams, you can substantially reduce the risk associated with using financial services GenAI.

Learn more about how you can protect your digital systems in the GenAI era.