How GenAI Supercharges Fraud—and How to Fight Back
From mainstream ChatGPT to its dark-web twin FraudGPT—which was built specifically for enabling fraud—generative artificial intelligence (genAI) introduced entirely new fraud vulnerabilities seemingly overnight.
Traditional AI has been around for years—think Siri, Alexa, and Google’s search algorithm. It uses predefined rules to perform specific tasks. GenAI breaks from nearly every traditional AI approach: instead of following predefined rules, it generates entirely new content from vast historical datasets. Because it is constantly learning and improving, genAI creates highly realistic, unique content that matches the quality of human-created work.
So genAI learns, adapts, and moves quickly—much like the most malicious bots, bad actors, and fraud rings that NeuroID has been fighting since day one. In facing the genAI challenge, NeuroID’s behavioral analytics retain an advantage that traditional PII-based fraud stacks don’t have: we track behavior, not collected data. Every time a person types information, clicks in a box, edits a field, or hovers before clicking, those interactions create trackable behavioral signals that predict whether the person’s intent is genuine or malicious. Crucially, this behavioral data isn’t collected or used to connect to a single identity, which makes it difficult for generative AI to learn from or replicate.
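As a purely illustrative sketch (not NeuroID’s actual pipeline; the event names and fields here are hypothetical), raw interaction events like the ones described above can be aggregated into simple per-session behavioral signals:

```python
from dataclasses import dataclass


@dataclass
class FieldEvent:
    """One raw interaction event captured on a form field."""
    field: str          # e.g. "ssn", "email"
    event: str          # e.g. "focus", "keypress", "edit", "hover"
    timestamp_ms: int   # milliseconds since the session started


def behavioral_features(events: list[FieldEvent]) -> dict:
    """Aggregate raw events into session-level signals.

    Counts of edits and hesitation (hover) events, plus total session
    duration, are toy stand-ins for the thousands of signals a real
    behavioral analytics system would derive.
    """
    edits = sum(1 for e in events if e.event == "edit")
    hovers = sum(1 for e in events if e.event == "hover")
    times = [e.timestamp_ms for e in events]
    duration_ms = (max(times) - min(times)) if times else 0
    return {"edit_count": edits, "hover_count": hovers, "session_ms": duration_ms}
```

The point of the sketch is that the features describe *how* a session unfolded, not *who* the user is—no PII is needed to compute them.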
Digital enterprises are struggling to understand the true threat level of weaponized and highly accessible genAI. NeuroID’s internal experts came together to provide their insights on the darkening shadows of genAI, and how to fight back against this fast-growing fraud behemoth.
Q1. How does today’s genAI differ from other technologies in its potential to impact digital fraud? Is it different or is this just another fraud risk like others you’ve seen before?
Jeff: In addition, ChatGPT can easily generate lists of fake identities that bad actors can use to apply for loans. And when it comes to the content that gets entered into a form, fraudsters can now generate it far more easily.
Nash Ali, NeuroID Head of Operational Strategy: Building on what Jeff mentioned, there are some specific genAI fraud approaches that are very dangerous to fraud detection systems. GenAI can transform a static image into a highly realistic rendering that can break the facial recognition systems used as password alternatives or for enhanced authentication. If a platform uses facial recognition with liveness detection for verification, all a fraudster needs is to steal your headshot from your Facebook profile and animate it with natural-looking movements.
There’s also voice replication. With just a short recording of someone’s voice, genAI can recreate that voice with remarkable accuracy, down to specific tones and expressions. Voice recognition is used by call centers and some banks as a password bypass, and it’s now exceedingly vulnerable.
There are even advanced genAI programs built specifically for fraud that can obfuscate user data, generate device fingerprints, and mask IP addresses. These programs can dynamically respond to an organization’s security controls, offering real-time suggestions on potential vulnerabilities. This adaptability means they aren’t a one-size-fits-all threat; instead, they adjust tactics to the specific systems they face.
We’re starting to see more people drawn to genAI for committing fraud, and that means fraud as a whole is going to get more sophisticated very quickly. We’re approaching an era where the only effective counter may be to employ AI against AI.
Q2. What have you been hearing from fintechs and digital enterprises? What are fraud teams worried about with genAI?
Brooke Baker, NeuroID Director of Customer Analytics and Innovation: GenAI makes common defenses, such as biometrics and PII-based tools, far more vulnerable and puts a big strain on their use. Identity verification has become much easier to fake. Companies know they need a different approach: they’re often either looking for more tooling or incorporating more manual reviews (which are labor-intensive and costly).
A lot of organizations are still figuring out how to react and adapt to this new technology and its implications. And a lot of the fraud managers we talk to are very risk averse to the point where they’re more careful than they need to be. If they get hit with an attack, they’re going to lock things down until they can safely open it up, which means lots of lost revenue and customers. But it’s a fear-driven landscape and they aren’t sure what to risk.
Nash: Many enterprise clients I’ve consulted, especially those with well-established fraud prevention systems, are deeply concerned about the emerging threats. They recognize the limitations of today’s traditional fraud stack in facing these challenges and are actively seeking solutions.
More than anything, this presents a significant opportunity for fraud mitigation service providers to develop targeted responses. But many of these providers are hesitant to acknowledge the evolving threat landscape shaped by genAI. There’s a prevailing sentiment that while these threats might be emerging, they haven’t gone mainstream, and current tools are still effective. But fraud service providers need to recognize that genAI is not only a current threat but one that will grow exponentially. Everyone in the anti-fraud field needs to be prepared to combat this threat rather than hoping it will go away.
Q3. How does genAI impact NeuroID’s behavioral analytics approach?
Jeff: ChatGPT is currently not trained on behavioral data. While companies could potentially train models on it, it’s unlikely. There are better models and techniques tailored to deal with faking behavioral data than genAI, and we’re already aware of and overcoming those fraud techniques.
It will be difficult for ChatGPT, or any similar tool, to become an all-encompassing fraudster tool for breaking behavioral analytics. To make any genAI effective against our behavioral analytics solution, you would need a statistically meaningful volume of outcome data covering a spectrum of users: genuine, fraudulent, and automated bots. Moreover, for it to be truly relevant, you’d likely need specific datasets for institutions like Chase Bank or Wells Fargo. That data just isn’t available for public use.
Brooke: Collecting behavioral data is really difficult. Behavioral entry patterns are thousands of minutely different inputs that only have meaning once compiled against a statistically meaningful volume of outcome data. That data hasn’t been available for genAI training models; it would require massive datasets that don’t currently exist, along with the ability to train on them. It’s really hard to label a session as genuine, bot, or fraudster, and most behavioral datasets are minuscule compared to what NeuroID processes in breadth and scope, and to all the learnings we’ve accumulated through the years. A fingerprint or a face is a single data point, which is why it’s easy to capture. Behavioral data is astronomically large, and it’s a ginormous, messy job to turn it into something meaningful.
Nash: Fraud mitigation is always a race. Like all companies in the anti-fraud sector, we are working to stay ahead of genAI advancements that might introduce new avenues for fraud. We must anticipate the strategies bad actors will employ, like devising scripts that mimic human behavior or data entries that blur the lines between genuine users and fraudsters. Our goal is to refine our methods to stay ahead of those nuances. We’re winning this race, and with our current investments in resources we’ll continue to do so.
Given the threats posed by genAI-based attacks, it’s crucial for fintechs, banks, and anyone with a digital business to utilize every available signal to detect and counteract fraud. Traditional fraud prevention tools are proving inadequate against genAI; we’re already seeing that. AI adds a level of complexity to fraud that traditional tools can’t handle, because they weren’t built for it. Behavioral analytics offer valuable signals that significantly enhance a company’s existing fraud detection methods, making them a vital addition against both traditional and AI-driven threats.
Behavioral analytics is a prime signal to enhance existing fraud stacks. Positioned at the start of the risk funnel, it guides every subsequent layer. For front-end tools managing consumer onboarding, it offers insights into consumer behavior, aiding in decisions about continuing the registration process, halting it, or requiring additional identity verification. Few signals address new customer identification as effectively as behavioral analytics.
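The continue / halt / step-up decision described above can be pictured as a simple threshold router. This is an illustrative sketch only: the function name, thresholds, and decision labels are hypothetical placeholders, not NeuroID’s actual logic or values.

```python
def route_applicant(risk_score: float, low: float = 0.2, high: float = 0.7) -> str:
    """Map a behavioral risk score in [0, 1] to an onboarding decision.

    Below `low`: let the registration continue unimpeded.
    Between `low` and `high`: require step-up identity verification.
    At or above `high`: halt the flow for manual review.
    """
    if risk_score < low:
        return "continue"
    if risk_score < high:
        return "step_up_verification"
    return "halt_for_review"
```

The design point is that low-risk genuine users keep a frictionless experience, while friction is applied only where the behavioral signal warrants it.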
And at further stages in the risk funnel, such as repeat consumer account access, payment initiations, and profile changes linked to Account Takeover (ATO) situations, our signals can enhance decision-making processes. Clients can integrate our behavioral analytics signals with their machine learning models to evaluate real-time events. This integration has consistently improved the predictiveness of their models.
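One minimal way to picture that integration (purely illustrative; a real deployment would more likely feed behavioral signals into the model as input features than apply a fixed weighted blend) is combining an existing model’s fraud probability with a behavioral risk score:

```python
def blended_risk(model_score: float, behavioral_score: float, weight: float = 0.3) -> float:
    """Blend an existing fraud model's probability with a behavioral risk score.

    Both inputs are assumed to be probabilities in [0, 1]; `weight` controls
    how much the behavioral signal shifts the final decision score.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be in [0, 1]")
    return (1.0 - weight) * model_score + weight * behavioral_score
```

For example, an application the existing model scores as borderline (0.5) but whose session behavior looks strongly bot-like (1.0) is pushed to 0.65 at the default weight, enough to cross a typical review threshold.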
In essence, while behavioral analytics benefit every stage of the fraud funnel, they are particularly impactful during new customer onboarding. And they are not threatened by genAI.
Q4. What opportunities for better fraud prevention are coming out of this AI-expansion?
Jeff: NeuroID creates artificial intelligence that determines whether someone is part of a fraud ring or a fraudster using a fake identity. As genAI technology advances, we will explore how to leverage it to improve our stance in the face of new threats.
Recently, I was asked to explore whether ChatGPT poses a threat to our behavioral fraud solution. I utilized ChatGPT, along with my in-depth knowledge of our system, to assess the situation. I quickly realized that ChatGPT lacks the necessary domain knowledge to present an immediate threat. Although we haven’t identified a current threat to behavioral analytics from genAI, we remain vigilant, aiming to understand and address potential threats promptly.
In general, I think you have to assume that traditional fraud prevention based on user content alone is obsolete. Those approaches are not going to stop fraud like they used to. Hence, it is even more important to consider behavior when making fraud decisions.
Nash: People assume that it will take a while for genAI to reach mainstream fraudsters. That’s just not the case. GenAI is evolving at an exponential rate, and the first question that comes up with clients these days is about genAI. I don’t think there’s a fraud manager out there who’s sleeping well at night. But at the end of the day, this isn’t the first time we’ve seen a new challenge, and we just have to roll up our sleeves and get busy.
Brooke: Probably 80% of our clients have mentioned genAI to our teams as a concern, which is partly why it’s important to partner with an expert like NeuroID who can put your fears into context, so you don’t overreact to advanced fraud attacks and lose revenue through extreme shut-down measures.
Behavioral analytics gives you a new lens of visibility for fighting all fraud, including genAI-driven fraud. If you pair it with traditional methods and use it to orchestrate your other data calls, you can make sure you aren’t vulnerable to content that genAI can compromise. Pairing it with your existing solutions lets you orchestrate a path fraudsters can’t exploit: a dynamic journey that reflects each applicant’s risk and keeps you secure from both fraudsters and revenue loss.