A 360-Degree Approach to Generative AI Cybersecurity and Fraud Prevention

By Sanjay Bhakta, VP & Head of Solutions

These are times of high anxiety for those on the front lines of defending businesses against fraud and cybersecurity breaches, including chief information security officers (CISOs), heads of fraud operations, and heads of risk and compliance. The proliferation of generative AI inside the enterprise creates risks for costly cybersecurity breaches and data privacy violations. Cybersecurity attacks are getting more sophisticated. Bad actors are increasingly adopting generative AI tools designed to hack and bypass even the most bullet-proof defenses. Meanwhile, tougher cybersecurity disclosure rules from the SEC hold executives accountable for cybersecurity governance. 

The anxiety is real and justifiable. According to IBM, 47 percent of executives are concerned that adopting GenAI will lead to attacks targeting their own AI models, data, or services. Almost all executives (96 percent) say adopting GenAI makes a security breach likely in their organization within the next three years. IBM says, “As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon.”

Generative AI Is Not Going Away

It is not realistic to ban the use of generative AI in the enterprise. The cat is already out of the bag. Businesses ranging from McKinsey to Walmart are developing their own generative AI tools. GenAI helps businesses achieve benefits such productivity improvements, more efficient product development, more pinpointed customer research. The answer is not to ignore generative AI. The answer is to develop with a game plan to manage cybersecurity and fraud programs with more rigor and process. As Forrester noted bluntly in a recent report, businesses “[n]eed to deploy modern security practices for AI success.” Per Forrester,

Many security technologies that will secure your firm’s adoption of generative AI already exist within the cybersecurity domain. . . . These technologies are introducing new controls to secure generative AI. This will force your team to work with new vendors, technologies, and acquire and train on new skills. While processes also serve as useful  security controls, generative AI will uncover procedural gaps in domains involving data leakage, data lineage and observability, and privacy.

Businesses have been down this path before, such as when they began to adopt cloud computing. Remember the well-founded concerns about security being compromised by data being sent to remote servers? The answer then (as it is now) was to take a comprehensive approach to protecting data in the cloud, including the use of mature cloud computing services. Now fast forward to today. To strengthen security in the era of GenAI, Centific recommends that CISOs, heads of fraud, and heads of risk and compliance adopt a 360-degree approach to generative AI cybersecurity and fraud prevention. This approach encompasses the right processes, tools, and much more. Here are its key components:

A 360-Degree Approach to Gen AI Cybersecurity and Fraud Prevention

1 Understand Which Business Processes Use Generative AI

Everything starts with conducting an inventory to identify all the departments using generative AI; and mapping out how each department is using AI, including the type of AI tools, data sources, and purposes. This is an enormous undertaking. How is Marketing applying generative AI to A/B test new campaigns? Product design for creative concepting? Customer service for responding to queries via chatbots? The questions are seemingly endless, but the business needs to answer them. 

2 Take a Methodical Approach to Test Generative AI Thoroughly

A methodical approach means using a vetted methodology and tools to systematically test every conceivable way that a bad actor can poison generative AI-supported processes in order to compromise them. Generative AI is trained on data and prompts to perform tasks. Bad actors can use carefully crafted inputs or prompts to bypass their intended limitations or safety features, a technique known as adversarial jailbreaking. Businesses can also rely on adversarial jailbreaking to test generative AI models for weaknesses – but this must be done with the right process and tools in order to work.

At Centific, we help CISOs, heads of fraud, and heads of risk and compliance test their generative AI models using frameworks such as MITRE ATLAS and tools such as Caldera or Counterfeit to test generative AI models for vulnerabilities. The MITRE ATLAS Framework helps organizations prioritize defensive measures, test security controls against real-world threats, and improve threat hunting capabilities. To appreciate just how through the MITRE ATLAS Framework is, I invite you to explore the MITRE ATLAS Matrix, which maps all the steps that bad actors take to poison generative AI models, such as LLM prompt injection, LLM plugin compromise, LLM jailbreak, and much more. 

3 Apply the Right Software

Cybersecurity defense comes down to anticipating how bad actors work to stay a step ahead of them, which means thinking like they think and fighting fire with fire. A framework helps a business think like bad actors think. Tools such as Caldera and Counterfeit help businesses actually test generative AI. Caldera allows security teams to build custom “adversary profiles” that mimic the tactics, techniques, and procedures used by attackers targeting GenAI. This includes techniques like data poisoning, model evasion, and bias injection. Counterfeit generates data that can be used to fool generative AI models into making incorrect predictions. This allows businesses to test their models against sophisticated attacks and identify potential vulnerabilities.

In addition, since we published this post, Microsoft launched Azure AI Studio tools that can monitor and detect vulnerabilities in Microsoft’s other AI services. This is significant because the update can help organizations safeguard GenAI-supported applications. Features of those tools include:

  • Enhanced prompt shields that detect and neutralize attempts to manipulate an AI model through prompt injection, including a solution to identify subtle attacks.
  • Groundedness detection that pinpoints instances where a model generates responses that are inaccurate or not grounded in reality.
  • Safety system messages that provide a model with guidance, promoting outputs that are safe and ethically sound.
  • Safety evaluations that thoroughly test an application’s resistance to jailbreaking attacks and its potential for generating harmful content.
  • Risk and safety monitoring that helps admins gain insights into how specific inputs, outputs, and user interactions affect a company’s content filters. This helps the business to develop effective mitigation strategies.

This isn’t to suggest that any organization can simply buy cure-all software and rest easy. But the tools now exist to make it possible to build safe, responsible GenAI.

4 Strengthen Your Security Posture with a Zero Trust Architecture

A security posture includes, among other things, technological controls (e.g., firewalls, intrusion detection and prevention systems, encryption technologies, and other security hardware and software solutions) and access controls (ensuring that only authorized individuals and devices can access certain information). 

Once you assess your security posture, your organization will be faced with a crucial question: how far are you willing to go to safeguard your company’s systems? This is where zero trust architecture (ZTA) comes into play.

Traditional security models assume that everything inside the organization’s network is trusted, creating a strong perimeter to keep threats out. ZTA assumes that threats can exist both outside and inside the traditional network perimeter, thus necessitating verification and control measures. As a result, a company employing ZTA protects its systems with a far greater level of rigor. We recently blogged about the value of ZTA. For more detail, read our recently published blog post, “Should Your Business Adopt a Zero Trust Architecture?

5 Deploy Purple Teaming

Purple Teaming is an industrywide collaborative approach used to strengthen an organization’s overall security posture. With Purple teaming, one team simulates both attacks on cybersecurity perimeters and their defense. Each team member plays both the role of attacker and defender, which ensures a more robust and intricate breach/attack simulations.

In a Purple Teaming exercise, the simulated attacks provide a realistic assessment of the organization’s vulnerabilities and the effectiveness of its defenses – in the context of this blog post, generative AI. The Purple Team uses this information to strengthen their defenses, improve response strategies, and train staff.

The primary goal of Purple Teaming is to create a feedback loop where both offensive and defensive strategies inform and enhance each other, leading to a more robust and resilient cybersecurity posture for the organization.

6 Align Efforts with Your Governance, Risk, and Compliance (GRC) Program

Incorporating a holistic approach to GenAI cybersecurity and fraud prevention requires alignment with the governance, risk, and compliance (GRC) efforts of a company. GRC ensures an organization’s activities align with its goals, manage risks effectively, and comply with necessary regulations. This is vital when implementing advanced technologies such as generative AI.

GRC strategies are essential in managing the risks associated with GenAI because they provide the structure for monitoring and enforcing compliance with laws, guidelines, and internal policies. As GenAI evolves, it poses compliance challenges and regulatory scrutiny that can impact various aspects of business operations from data privacy to ethical considerations.

Integrating GenAI into a company’s GRC efforts should ensure these technologies adhere to ethical standards and regulatory requirements while enhancing the company’s ability to manage and mitigate risks effectively. For example, data handled by GenAI must be governed by compliance measures to prevent breaches of privacy and ensure integrity, which aligns with compliance under GDPR in the European Union or CCPA in California.

Risk Management Strategies Must Evolve

Risk management strategies should evolve to cover the vulnerabilities introduced by generative AI, such as susceptibility to data poisoning, which can have far-reaching effects on business operations and reputational integrity. These risks demand rigorous assessment and mitigation plans that should be incorporated into the existing risk management framework.

GRC governance must ensure that decision-making regarding generative AI initiatives is clear and accountable, with regular audits and adjustments based on emerging risks and technological advancements. This requires a coordinated effort among all levels of an organization to align with strategic objectives and maintain operational resilience.

By embedding GenAI security strategies within the broader GRC framework, companies safeguard their assets and reinforce their commitment to ethical standards and compliance. This approach fosters trust among stakeholders. The approach positions the company as a responsible leader in utilizing cutting-edge technologies responsibly and securely.

Going Forward with Confidence

Consider how far the industry has progressed with cloud computing despite the risks involved. The cloud computing industry has become both large and influential perhaps beyond anyone’s estimation when cloud computing first emerged. But the impact of cloud computing pales in comparison to the impact generative AI is and will have. The industry is not going back. The issue is how confident businesses are moving forward. A 360-degree approach will turn anxiety into confidence. 

Click here to learn more about our digital safety capabilities