How Generative AI Increases the Stakes for Digital Fraud

By Sanjay Bhakta, VP & Head of Solutions, and Nitanshu Upadhyay, Business Solutions Consultant

Businesses are encountering an increasingly potent enemy in their fight against digital fraud: AI. The explosive growth of ChatGPT – now reaching 100 million weekly users – has an ugly downside. Generative AI tools such as ChatGPT have spawned an arsenal of weapons that bad actors can use to attempt increasingly sophisticated cyber breaches. Businesses, in turn, are also turning to AI to fight back.

WormGPT. WolfGPT. FraudGPT. These are not made-up names. They’re real gen AI models available to bad actors seeking better ways to disrupt businesses and harm people financially. WormGPT specializes in crafting misleading messages, including phishing emails that manipulate people into revealing private information such as passwords and credit card numbers. WolfGPT is an AI program designed to generate malware and other damaging software. FraudGPT, meanwhile, is used to produce fake online platforms and orchestrate digital fraud schemes.

Democratizing Fraud

This new generation of large language models democratizes fraud: it gives bad actors ready access to sophisticated attack tooling, managed and sold like any other kind of service. FraudGPT first appeared on the dark web in July 2023. Unlike ChatGPT, FraudGPT is trained on a massive dataset of malicious content, making it an expert in hacking and cybercrime. It is currently advertised on the dark web as a subscription service, costing either $200 per month or $2,000 per year. For this price, users gain access to a wide range of services, including:

  • Writing malicious code
  • Finding vulnerabilities in systems
  • Creating phishing emails
  • Creating phishing pages
  • Identifying the most hacked and spoofed websites

But making a tool widely available in and of itself is not enough to pose a threat to businesses. The tool must also improve upon the state of the art in digital fraud to achieve adoption. And gen AI fraud tools do just that.

Gen AI helps bad actors solve one of the biggest stumbling blocks to fraud: creating content that seems authentic to an unsuspecting person. The glaring giveaways of fraudulent emails or websites have long been typos, grammatical errors, and poorly reproduced logos or other visual elements – flaws that make it easier for someone to spot a fake communication. Gen AI helps bad actors eliminate those flaws.

FraudGPT is equipped with advanced intelligence that allows cyber assailants to study specific entities and replicate their communication styles or even clone their web pages. This renders FraudGPT an alarmingly potent instrument in the wrong hands.

The sophistication of gen AI goes well beyond standard phishing ploys. It crafts counterfeit websites and emails with a degree of precision that makes them virtually indistinguishable from legitimate ones: free of common errors, with matching fonts and a writing style that mirrors authentic correspondence.

How Generative AI Increases the Risk of Fraud

Absent the usual warning signs, unsuspecting individuals may perceive these deceitful emails or websites as trustworthy, leading them to inadvertently disclose sensitive information. But gen AI fraud can do much more than create phony emails and websites. Gen AI makes many forms of fraud easier to commit, such as:

  • Fraud automation at scale: Generative AI can automate complex, multi-step fraud activities, such as creating scripts or codes for programs that steal personal data or breach accounts. This automation can facilitate credential stuffing, card testing, and brute-force attacks.
  • Text content generation: AI can produce realistic text without errors, mimicking the style of a known person or organization, which makes it challenging to detect fraudulent communication.
  • Image and video manipulation: With generative AI, fraudsters can quickly create authentic-looking images or videos to support their scams. This technology can manipulate visuals to a high degree of realism, making it difficult to spot fakes.
  • Human voice generation: AI-generated voices that sound like real people can compromise voice verification systems and are used in scams to build trust with victims.
  • Synthetic identity fraud: Cybercriminals can create fake personas with realistic social profiles to conduct financial crimes without detection.

Any business with an online presence is vulnerable, but some more than others. Typically, businesses that are most susceptible include financial institutions, healthcare providers, and retailers, especially those that conduct a significant portion of their transactions online. These sectors often deal with sensitive personal and financial information, making them attractive targets for fraud. On the other hand, businesses that may be less vulnerable are those with minimal online presence, lower digital transaction volumes, or those that deal with less sensitive data. However, all businesses, regardless of size or sector, need to be aware of the threat.

Fighting Fire with Fire

So, how can businesses fight back? One way is to fight fire with fire. To address the risks posed by technologies such as FraudGPT, businesses are implementing cyber-defense mechanisms powered by generative AI. A recent IBM survey shows that top-level management increasingly values generative AI for cybersecurity: a large majority (84 percent) of those surveyed favor these systems over traditional security software. This shift in attitude highlights the recognized capacity of generative AI to improve defenses against cyber threats.

In fact, businesses might consider studying the very tools used to threaten them in order to learn how to beat them. For instance, businesses can use FraudGPT to understand how bad actors wield it, such as by simulating cyberattacks to test their cybersecurity defenses. This can help businesses identify weaknesses in their defenses and develop new strategies for responding to cyberattacks. But businesses must tread carefully. Using a tool specifically designed to cause harm exposes the company to risks such as the tool itself being compromised by insiders within the organization, turning the defense mechanism into a potential vulnerability.

These defenses aim to predict and neutralize the strategies used by bad actors to keep companies protected. Nonetheless, further measures are necessary to safeguard the underlying data and algorithms of these AI systems. If not properly secured, these models could be vulnerable to cyber incursions, which may undermine their reliability and lead to more extensive security violations.

The Role of Zero Trust Architecture

But technology alone won’t fight digital fraud. Businesses also need more stringent protocols. One example is zero trust architecture (ZTA), a security model that assumes no user or device can be trusted by default. Instead, ZTA verifies trust continuously and dynamically based on evidence. This can help businesses fight fraud committed with generative AI tools in several ways, such as reducing the attack surface, improving threat detection and response, and strengthening identity and access management. For instance, ZTA can reduce the attack surface by restricting access to applications and data to only those who need it, making it more difficult for attackers armed with generative AI tools to reach sensitive data or systems.
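To make the idea concrete, the sketch below (in Python, with hypothetical signal names, roles, and resources; real policy engines evaluate far richer telemetry) shows the kind of deny-by-default check a zero trust policy engine might run on every request, rather than trusting a session once at login:

```python
from dataclasses import dataclass

# Hypothetical signals a zero trust policy engine might evaluate on every request.
@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # identity confirmed with a second factor
    device_compliant: bool   # device posture check (managed, patched, encrypted)
    geo_velocity_ok: bool    # no "impossible travel" between recent logins
    resource: str            # application or dataset being requested

# Illustrative least-privilege map: which roles may touch which resources.
ROLE_GRANTS = {
    "payments-analyst": {"payments-db"},
    "support-agent": {"crm"},
}

def authorize(request: AccessRequest, role: str) -> bool:
    """Deny by default; grant access only when every signal checks out."""
    if request.resource not in ROLE_GRANTS.get(role, set()):
        return False  # shrinks the attack surface: no access without an explicit need
    if not (request.mfa_verified and request.device_compliant and request.geo_velocity_ok):
        return False  # trust is re-verified on every request, never inherited from a prior login
    return True

# A stolen credential presented from a non-compliant device is still denied.
req = AccessRequest("u123", mfa_verified=True, device_compliant=False,
                    geo_velocity_ok=True, resource="payments-db")
print(authorize(req, role="payments-analyst"))  # False
```

The design point is that access is never inherited: a stolen credential alone fails the check, because device posture and contextual signals must also pass on every request.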

Here are some specific examples of how ZTA can be used to fight fraud committed with generative AI tools:

Synthetic identities, such as fake social media accounts or credit card numbers. ZTA can help mitigate this threat by using multi-factor authentication and other risk-based access control mechanisms to verify the identity of users before granting them access to applications and data.
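For instance, a risk-based access decision might be sketched as follows; the risk factors, weights, and thresholds here are assumptions for illustration, not a recommended configuration:

```python
def login_risk_score(is_new_device: bool, ip_on_denylist: bool,
                     failed_attempts_last_hour: int, account_age_days: int) -> float:
    """Toy risk score for a login or signup attempt; the weights are illustrative."""
    score = 0.0
    score += 0.4 if is_new_device else 0.0
    score += 0.5 if ip_on_denylist else 0.0
    score += min(failed_attempts_last_hour * 0.1, 0.3)
    score += 0.2 if account_age_days < 7 else 0.0  # very young accounts resemble synthetic identities
    return min(score, 1.0)

def access_decision(score: float) -> str:
    """Map the score to an action; thresholds are assumptions, not recommendations."""
    if score >= 0.7:
        return "block"
    if score >= 0.3:
        return "step-up: require multi-factor authentication"
    return "allow"

# A login from an unfamiliar device triggers step-up authentication instead of silent access.
print(access_decision(login_risk_score(True, False, 0, 365)))
```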

Phishing emails and other malicious content. ZTA can help mitigate this threat by monitoring user and device behavior for anomalies. This can help identify and respond to attacks that use gen AI to generate and distribute phishing emails.
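A minimal sketch of that kind of behavioral monitoring, assuming a made-up per-user baseline and a simple z-score test, might look like the following; production systems would combine many more signals:

```python
import statistics

# Hypothetical per-user baseline: megabytes downloaded per session over recent weeks.
baseline_downloads_mb = [12.0, 9.5, 11.2, 10.8, 13.1, 9.9, 12.4]

def is_anomalous(observed_mb: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a session whose download volume sits far outside the user's normal range."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat baselines
    z_score = abs(observed_mb - mean) / stdev
    return z_score > threshold

# A session that suddenly pulls 250 MB (e.g., after a phishing-driven account takeover) is flagged.
print(is_anomalous(250.0, baseline_downloads_mb))  # True
```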

Deepfakes and other synthetic media. A zero trust approach can help mitigate this threat when paired with digital forensics tools that analyze media for signs of manipulation, and with user education about the dangers of deepfakes and other synthetic media.

ZTA can definitely bolster a company’s cybersecurity, but its appropriateness and effectiveness vary depending on the organization’s size, industry, and existing security posture. ZTA also has downsides:

  • Cost and complexity of implementation
  • Potential disruptions, since transitioning to ZTA may require changes to existing workflows and systems
  • A less desirable user experience, because additional security measures such as multi-factor authentication and strict access controls can slow down processes

The Cost of Digital Fraud

But businesses might need to accept the downsides to improve the fight against digital fraud in the era of gen AI. After all, cybercrime is projected to inflict $9.5 trillion in damage in 2024. No business can afford the long-term costs of digital fraud.

Centific can help. We take a proactive approach to detecting, classifying, protecting, and monitoring a client’s digital estate in order to continuously outsmart bad actors. Our team constantly applies evolving AI tools within our process, at speed, to support your revenue growth, optimize costs, and protect your customer experience.

Click to learn more about our Digital Safety Services.