How GenAI is Making Fraud Bots More Dangerous (And What You Can Do About it)

70% of respondents to a 2023 industry survey said their financial institution (FI) lost more than $500,000 to fraud in the preceding 12 months. What will these figures look like now that bad actors have bigger, better, genAI-enhanced weapons?

Our use-case report examines the threat of bots and why they're especially damaging in both scale and strategy: in early 2023, over a mere four-month period, 53% of NeuroID customers experienced an attempted bot attack. But what happens when genAI amplifies this threat? And what can you do to stay a step ahead?

GenAI makes laborious tasks trivial, both lowering the expertise threshold required for a given task and turning people-hours into people-minutes. That's good for anyone focused on efficiency, whether their end goal is writing website code or scripting a malicious bot.

Fraud bots already dominate financial services fraud attacks. These are programs, created by fraudsters, that carry out a set of instructions, often while masquerading as real humans. These "version 1" bots are causing significant damage: in our four-month study of 15 customers, more than half of the institutions that triggered alerts had come under a bot attack.
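To make "a set of instructions" concrete, here is a minimal, hypothetical sketch (in Python) of what a hard-coded version-1 bot boils down to. The endpoint stub, identity fields, and pacing are illustrative assumptions, not taken from any real attack:

```python
import random
import time

# Hypothetical stub: a real bot would POST this to an FI's onboarding
# form; here it only prints, so the sketch is safe to run.
def submit_application(identity: dict) -> None:
    print(f"submitting application for {identity['name']}")

# A "version 1" bot is just a fixed script: same steps, same cadence,
# identities stamped out from a template.
SYNTHETIC_NAMES = ["Alex Smith", "Jordan Lee", "Sam Doe"]

for i in range(3):
    identity = {
        "name": random.choice(SYNTHETIC_NAMES),
        "ssn": f"900-00-{1000 + i}",           # fabricated, never-issued SSN range
        "email": f"applicant{i}@example.com",  # throwaway address
    }
    submit_application(identity)
    time.sleep(2)  # rigid, machine-regular pacing: a telltale bot signal
```

The rigidity is the point: fixed steps, fixed cadence, templated identities. That predictability is also the weakness that behavioral detection exploits, as we'll see below.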

GenAI supercharges these attacks in three specific ways:

1. GenAI helps fraudsters create or correct code for their fraud bots: genAI is just as useful for malicious coders as it is for anyone else. Tools like ChatGPT and its dark web counterpart FraudGPT (yes, there is genAI built specifically to perpetrate fraud) can help create or refine code, making fraud operations more efficient and effective.

2. Natural language processing (NLP) helps fraudsters create error-free text in languages they don't speak: Thanks to NLP, the branch of AI that genAI text tools are built on, fraudsters can craft compelling text in almost 100 languages. This allows them to broaden the scope and effectiveness of their efforts.

3. GenAI lets fraudsters create deepfake voices and videos: Deepfake technology, powered by genAI, lets fraudsters create realistic voice recordings or videos of actual people. This opens up new and exceptionally dangerous avenues for fraud. Authorized Push Payment (APP) scams are currently ranked the number one fraud risk in the world, with 20% of global consumers victimized in the past four years. GenAI, combined with the vulnerabilities of real-time payment rails, is a huge factor in the explosion of this fraud tactic.

And for all the non-technical fraudsters, there's an emergent marketplace of tools like FraudGPT. For as little as $200, bad actors can purchase a suite of AI-powered fraud tools that write malicious code, create undetectable viruses, or generate phishing pages.

GenAI is lauded for its ability to learn, iterate, and improve; in the wrong hands, that same strength may prove devastating.

This new generation of AI tools is still in its infancy (ChatGPT, for instance, has only been available to the public since December 2022), and the use cases above barely scratch the surface.

Now Recruiting: A GenAI Garrison

Fraudsters have been using bots to execute complex fraud attacks for years. But what is a bot attack? And what makes these attacks so dangerous?

100% of bot attacks observed in NeuroID's four-month study were preceded by human tests. There are various ways this might happen, but they all fall under the category of probing, where the goal is to teach the bot where the defenses are strongest (i.e., what to avoid) and where there might be vulnerabilities (i.e., where to attack). A human probes the target FI's fraud defenses by inputting fake data, seeing what step-ups are triggered, how rejections are handled, and so on. Oftentimes, this testing is extensive: fraudsters know that any eventual attack will only work if it circumvents the FI's defenses, so it's worth their time to acquire deep knowledge of the FI's fraud detection and prevention software.
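For illustration, a defender-side heuristic for spotting this probing pattern might look something like the sketch below. The event fields and thresholds are hypothetical placeholders, not NeuroID's production logic:

```python
from collections import defaultdict

# Illustrative probing heuristic: many failed onboarding attempts that
# reuse one device fingerprint while rotating identity fields is a
# classic human-testing pattern.
events = [
    {"device_id": "dev-42", "name": "A. Smith", "outcome": "rejected"},
    {"device_id": "dev-42", "name": "B. Jones", "outcome": "rejected"},
    {"device_id": "dev-42", "name": "C. Diaz",  "outcome": "step_up"},
    {"device_id": "dev-07", "name": "D. Kim",   "outcome": "approved"},
]

by_device = defaultdict(list)
for e in events:
    by_device[e["device_id"]].append(e)

for device, attempts in by_device.items():
    failures = [a for a in attempts if a["outcome"] != "approved"]
    identities = {a["name"] for a in attempts}
    # One device, several non-approved attempts, several identities:
    # flag for review as possible defense probing.
    if len(failures) >= 3 and len(identities) >= 3:
        print(f"{device}: possible probing "
              f"({len(failures)} failures, {len(identities)} identities)")
```

Real systems weigh far more signals than this, but the shape of the pattern is the same: one origin, many identities, repeated failures.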

Next, the attacker may program some bots to apply for products with the FI. These bots act as a scouting party, applying for products by exploiting the vulnerabilities first observed in the human tests. If the bots are able to onboard successfully, then the attacker knows where to direct their army for the full-scale assault.

Now, what would happen if attackers had genAI-empowered bots at their disposal, instead of the hard-coded bots we see in most traditional bot attacks?

GenAI allows software to mimic the human ability to learn. Large datasets are used to train genAI software to make judgments on the fly. This means that a genAI-powered bot wouldn't need a human to preemptively identify the vulnerabilities in an FI's fraud defenses. The bot could, on its own, probe defenses and assess how to structure an attack to inflict maximum losses.

This could spell windfall profits for fraudsters, who are already hitting the majority of companies for 6 to 10 percent of total revenue.

And keep in mind: sophisticated fraudsters often look for multiple vulnerabilities during the testing phase. This allows them to program bots to attack sequentially. So, during the attack, when the FI realizes what’s happening and scrambles to resolve the vulnerability, the bots have another soft target ready at hand. GenAI-powered bots could take this a step further by actively counterpunching and altering their plan of attack based on how the FI attempts to resolve the original vulnerability.

All with no human supervision required.

As genAI continues to proliferate across financial services, the line between bot or not will become increasingly blurred.

What can fraud professionals do about this today? Anti-fraud tools work against genAI bots when they leverage data that falls outside the training parameters of the bots in question. Behavioral data, for instance, is used to differentiate human behavior from bot behavior during interactions with a website or app. The dataset powering this differentiation isn't publicly available (not even to the engineers behind ChatGPT), so experts predict it's unlikely that fraud bots can be trained to circumvent behavioral analytics checks anytime soon.
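As a toy illustration of the idea (not NeuroID's actual model), consider how even a single behavioral signal, inter-keystroke timing, can separate scripted input from human typing:

```python
import statistics

# Illustrative behavioral check: humans type with irregular rhythm,
# while scripted input arrives with near-constant gaps. Inputs are
# milliseconds between successive keystrokes; the threshold is a
# made-up value for demonstration.
def looks_scripted(inter_key_ms: list[float], cv_threshold: float = 0.05) -> bool:
    mean = statistics.mean(inter_key_ms)
    stdev = statistics.stdev(inter_key_ms)
    # Coefficient of variation near zero means metronome-like input.
    return (stdev / mean) < cv_threshold

human_gaps = [112.0, 87.0, 203.0, 95.0, 160.0, 78.0]
bot_gaps = [50.0, 50.0, 51.0, 50.0, 50.0, 50.0]

print(looks_scripted(human_gaps))  # False: natural variability
print(looks_scripted(bot_gaps))    # True: suspiciously uniform
```

A single feature like this is easy to spoof in isolation; behavioral analytics in production combines many such signals at once, which is what makes it so hard for a bot to imitate.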

Behavioral analytics adds a new layer of insight to other fraud-fighting tools, catching fraud signals that simply can't be caught any other way. If businesses and FIs run a behavior-powered bot check in tandem with other fraud mitigation programs each time an account is accessed or a new product is applied for, even the most shrewdly designed bots don't stand a chance.
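In practice, that "in tandem" layering might be orchestrated along these lines; the signal names, scores, and thresholds below are hypothetical placeholders, not any vendor's API:

```python
# Sketch of layering a behavioral bot check alongside existing controls
# each time an account is accessed or an application is started.
# All scores here are assumed to be normalized to 0 (low risk) - 1 (high risk).
def assess_session(session: dict) -> str:
    signals = {
        "behavior": session["behavior_score"],  # bot-likeness from behavioral analytics
        "device": session["device_risk"],       # device/fingerprint reputation
        "velocity": session["velocity_risk"],   # attempt-rate anomalies
    }
    if signals["behavior"] > 0.9:
        return "block"    # high-confidence automation
    if max(signals.values()) > 0.7 or sum(signals.values()) > 1.5:
        return "step_up"  # corroborating risk across layers
    return "allow"

print(assess_session({"behavior_score": 0.95, "device_risk": 0.2, "velocity_risk": 0.1}))  # block
print(assess_session({"behavior_score": 0.4, "device_risk": 0.6, "velocity_risk": 0.6}))   # step_up
```

The design choice worth noting is that no single layer has to be perfect: a bot that evades one check still has to corroborate as low-risk across all of them.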

Despite the might of genAI, bots are not invincible. GenAI is powerful, but ultimately, fraudsters can only use it to add a new coat of paint to their old bag of tricks. And NeuroID knows how to stay ahead of them at that game.
