A 360-Degree Approach to Generative AI Cybersecurity and Fraud Prevention

By Sanjay Bhakta, VP & Head of Solutions

These are times of high anxiety for those on the front lines of defending businesses against fraud and cybersecurity breaches, including chief information security officers (CISOs), heads of fraud operations, and heads of risk and compliance. The proliferation of generative AI inside the enterprise (whether authorized or not) creates new risks for costly cybersecurity breaches and data privacy violations. Cybersecurity attacks are getting more sophisticated. Bad actors are increasingly adopting generative AI tools designed to hack and bypass even the most bullet-proof defenses. Meanwhile, tougher cybersecurity disclosure rules from the SEC now hold executives of publicly traded firms accountable for reporting cybersecurity breaches as well as their own governance. 

The anxiety is real and justifiable. According to a recently released IBM report, 47 percent of executives are concerned that adopting generative AI in operations will lead to new kinds of attacks targeting their own AI models, data, or services. And almost all executives (96 percent) say adopting generative AI makes a security breach likely in their organization within the next three years. The report notes, “As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon.”

Generative AI Is Not Going Away

It is not realistic to ban the use of generative AI in the enterprise. The cat is already out of the bag. Businesses ranging from McKinsey to Walmart are developing their own generative AI tools. Generative AI helps businesses achieve benefits such productivity improvements, more efficient product development, more pinpointed customer research – you name it, generative AI is ushering in one innovation after another. The answer is not to ignore generative AI. The answer is to come up with a game plan to manage cybersecurity and fraud programs with more rigor and process. As Forrester noted bluntly in a recent report, businesses “[n]eed to deploy modern security practices for AI success.” Per Forrester,

Many security technologies that will secure your firm’s adoption of generative AI already exist within the cybersecurity domain. . . . These technologies are introducing new controls to secure generative AI. This will force your team to work with new vendors, technologies, and acquire and train on new skills. While processes also serve as useful  security controls, generative AI will uncover procedural gaps in domains involving data leakage, data lineage and observability, and privacy.

Businesses have been down this path before, such as when they began to adopt cloud computing. Remember the well-founded concerns about security being compromised by data being sent to remote servers? The answer then (as it is now) was to take a comprehensive approach to protect data in the cloud, including the use of mature cloud computing services. Now fast forward to today. To strengthen security in the era of generative AI, we at Centific recommend that CISOs, heads of fraud, and heads of risk and compliance adopt a 360-degree approach to generative AI cybersecurity and fraud prevention. This approach encompasses the right processes, tools, and much more. Here are its key components:

A 360-Degree Approach to Gen AI Cybersecurity and Fraud Prevention

1 Understand Which Business Processes Use Generative AI

Everything starts with conducting an inventory to identify all the departments using generative AI; and mapping out how each department is using AI, including the type of AI tools, data sources, and purposes. This is an enormous undertaking. How is Marketing applying generative AI to A/B test new campaigns? Product design for creative concepting? Customer service for responding to queries via chatbots? The questions are seemingly endless, but the business needs to answer them. 

2 Take a Methodical Approach to Test Generative AI Thoroughly

A methodical approach means using a vetted methodology and tools to systematically test every conceivable way that a bad actor can poison generative AI-supported processes in order to compromise them. Generative AI is trained on data and prompts to perform tasks. Bad actors can use carefully crafted inputs or prompts to bypass their intended limitations or safety features, a technique known as adversarial jailbreaking. Businesses can also rely on adversarial jailbreaking to test generative AI models for weaknesses – but this must be done with the right process and tools in order to work.

At Centific, we help CISOs, heads of fraud, and heads of risk and compliance test their generative AI models using frameworks such as MITRE ATLAS and tools such as Caldera or Counterfeit to test generative AI models for vulnerabilities. The MITRE ATLAS Framework helps organizations prioritize defensive measures, test security controls against real-world threats, and improve threat hunting capabilities. To appreciate just how through the MITRE ATLAS Framework is, I invite you to explore the MITRE ATLAS Matrix, which maps all the steps that bad actors take to poison generative AI models, such as LLM prompt injection, LLM plugin compromise, LLM jailbreak, and much more. 

Anticipate How Bad Actors Operate

Effective cybersecurity defense will always come down to anticipating how bad actors work to stay a step ahead of them, which means thinking like they think and fighting fire with fire. A framework helps a business think like bad actors think. Tools such as Caldera and Counterfeit help businesses actually test generative AI. Caldera allows security teams to build custom “adversary profiles” that mimic the tactics, techniques, and procedures commonly used by attackers targeting generative AI systems. This includes techniques like data poisoning, model evasion, and bias injection. Counterfeit generates data that can be used to fool generative AI models into making incorrect predictions. This allows businesses to test their models against sophisticated attacks and identify potential vulnerabilities.

3 Strengthen Your Security Posture with a Zero Trust Architecture

A security posture includes, among other things, technological controls (e.g., firewalls, intrusion detection and prevention systems, encryption technologies, and other security hardware and software solutions) and access controls (ensuring that only authorized individuals and devices can access certain information). 

Once you assess your security posture, your organization will be faced with a crucial question: just how far are you willing to go to safeguard your company’s systems? This is where zero trust architecture (ZTA) comes into play. Traditional security models often operate on the assumption that everything inside the organization’s network is trusted, creating a strong perimeter to keep threats out. But ZTA assumes that threats can exist both outside and inside the traditional network perimeter, thus necessitating rigorous verification and control measures. As a result, a company employing ZTA protects its systems with a far greater level of rigor. We recently blogged about the value of ZTA. For more detail, read our recently published blog post, “Should Your Business Adopt a Zero Trust Architecture?

4 Deploy Purple Teaming

Purple Teaming is an industrywide collaborative approach used to strengthen an organization’s overall security posture. With Purple teaming, one team simulates both attacks on cybersecurity perimeters and their defense. Each team member plays both the role of attacker and defender, which ensures a more robust and intricate breach/attack simulations.

In a Purple Teaming exercise, the simulated attacks provide a realistic assessment of the organization’s vulnerabilities and the effectiveness of its defenses – in the context of this blog post, generative AI. The Purple Team uses this information to strengthen their defenses, improve response strategies, and train staff.

The primary goal of Purple Teaming is to create a feedback loop where both offensive and defensive strategies inform and enhance each other, leading to a more robust and resilient cybersecurity posture for the organization.

Going Forward with Confidence

Consider how far the industry has progressed with cloud computing despite the risks involved. The cloud computing industry has become both large and influential perhaps beyond anyone’s estimation when cloud computing first emerged. But the impact of cloud computing pales in comparison to the impact generative AI is and will have. The industry is not going back. The issue is how confident businesses are moving forward. A 360-degree approach will turn anxiety into confidence. 

Click here to learn more about our digital safety capabilities