2024 Generative AI Fraud Trends: What to Look For

Whether you’re an independent lender, an underwriter at a credit union, or a risk expert, chances are you care a lot about staying ahead of generative AI fraud trends. That’s because generative AI is changing how fraud is committed, making traditional fraud-prevention methods less effective. As a leader at FortifID, I worry about these things all the time. What’s really scary is that eCommerce payment fraud could reach $343 billion globally between 2023 and 2027.

In 2023, the public started talking more about the “authenticity” of content and about AI “hallucinations.” Because criminals had early access to tools such as ChatGPT and its dark web counterparts, like WormGPT and FraudGPT, generative AI fraud rose sharply. The increase is alarming, especially when you compare fraud in 2023 to previous years.

Gen AI Is Changing Fraud: Why Past Methods Don’t Always Work

Let’s look at traditional fraud prevention to see why the game has changed. I know from talking to risk management experts at financial institutions and credit unions that what worked for years now often doesn’t work anymore.

Examples of Standard Anti-Fraud Methods

In the past, we stopped most fraud with these methods:

  • Rules-based systems: “If a new account makes multiple transactions over $1,000 in a day, then decline the transaction.” These rules were simple to understand and effective against basic attacks, but criminals quickly learn to game them, rendering the technique less effective (see the sketch after this list).
  • Traditional Machine Learning (ML): “A pattern shows that stolen credit cards commonly make online transactions in this ZIP code range with these attributes.” While capable of using more data, this approach required extensive training on clean, properly labeled datasets. And when patterns changed (as they do now with generative AI), retraining models took too long and often required human review. That can make fighting fraud feel almost impossible, especially when deploying new defenses takes weeks or even months.
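
To make the contrast concrete, here is a minimal sketch of the kind of rules-based check described above. The field names and thresholds are illustrative assumptions, not anyone’s production rules:

```python
# A minimal rules-based fraud check (illustrative; thresholds and field
# names are hypothetical, not any vendor's actual rules).
from dataclasses import dataclass

@dataclass
class Transaction:
    account_age_days: int
    amount_usd: float
    txn_count_today: int

def should_decline(txn: Transaction) -> bool:
    """Decline when a new account makes multiple large transactions in a day."""
    is_new_account = txn.account_age_days < 30
    return is_new_account and txn.amount_usd > 1_000 and txn.txn_count_today > 1
```

The appeal is obvious: anyone can read the rule. So can criminals, which is why splitting a $2,000 purchase into three $700 transactions sails right past it.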

This frustrates fraud analysts. With ever-increasing pressure to balance customer friction against the cost of fraud losses, they can become overwhelmed quickly. You probably feel it too when a bank takes too long to clear funds or decline a suspicious transaction. I’ve seen companies use hundreds of different anti-fraud tools with mixed results. No single solution works on its own, because each one sees only part of the customer’s story. For example, you may know the risk associated with how someone acts during a payment, but have very limited intelligence about their true intention before that point.

Why AI Risk Decisioning Is Necessary to Fight Generative AI Fraud

Generative AI fraud trends require AI risk decisioning platforms to effectively protect financial institutions and consumers. This emerging approach creates a central nervous system into which all transaction data feeds, regardless of the tool or vendor. You need an end-to-end understanding. Risk management professionals can then ask questions about what is happening, and the platform automatically combines that intelligence with both ML and generative AI to produce actionable recommendations and take the best course of action in real time. That’s powerful. That’s how we start staying a step ahead instead of playing a cat-and-mouse game.
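
What might that fusion look like in code? Below is a deliberately simplified sketch of the idea, assuming each connected tool emits a risk score between 0 and 1. The fusion rule, thresholds, and action names are all assumptions for illustration, not how any particular platform works:

```python
# Hedged sketch of a risk decisioning "central nervous system": every tool
# contributes a score, and one engine turns them into a real-time action.
from typing import Callable

Signal = Callable[[dict], float]  # each signal returns a risk score in [0, 1]

def decide(event: dict, signals: list[Signal],
           block_at: float = 0.8, review_at: float = 0.5) -> str:
    """Fuse per-tool risk scores into a single real-time decision."""
    score = max(s(event) for s in signals)  # naive fusion; real engines learn weights
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "step_up_auth"  # add friction only when the risk warrants it
    return "approve"

# Example with two toy signals: device risk and velocity risk.
action = decide(
    {"device_known": False, "txns_last_hour": 7},
    signals=[
        lambda e: 0.0 if e["device_known"] else 0.6,
        lambda e: min(e["txns_last_hour"] / 10, 1.0),
    ],
)
print(action)  # step_up_auth
```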

Why Generative AI Is a Huge Challenge to Risk Management

Generative AI creates so many unique fraud risks that it would take an entirely separate post to describe them all. There is a common misunderstanding here. Many people think, “AI tools like ChatGPT can help us prevent fraud.” But these tools have also opened new doors that make stopping fraud far more difficult: they give criminals instant, scalable access to convincing content generation, and dark web variants are even trained on historical fraud data. That’s exactly what criminals needed.

Here are a few generative AI methods used in fraudulent activities:

  • Hyper-Personalized Content: Fraudulent texts or emails designed to look and sound exactly like you (tone included), or messages that reference a real purchase or event in a victim’s life, are far more effective. Put simply, they get people to act more easily. This goes far beyond what humans alone could do and requires far fewer resources, because criminals no longer have to recruit lots of people to craft scams or conduct attacks.
  • Synthetic Content (Like Deepfakes) That Seems Authentic: A short video, picture, or even audio clip can quickly be manipulated to show people doing and saying things they never did, at an accelerating pace. For example, a bad actor can train generative AI on historical bank fraud and publicly available personal data (images, audio, and video of executives who sound familiar or appear authoritative, or of government officials) to commit fraud against corporations or individuals. There are already multiple ways generative AI tools can automate attacks on identity verification. With password spraying, for instance, AI generates lists of passwords commonly used by people working in a certain industry, and criminals then test thousands of them instantly to take over accounts.

As we all continue to shift to digital experiences and demand frictionless payments, we make security more challenging. Even more alarming is how creative criminals are getting, right down to the names of their attacks. Take, for example, a bizarre scam with a weird name currently growing globally: “pig butchering” fraud, a nasty combination of online romance scams and crypto investment fraud that slowly grooms victims before the criminals disappear.

What the Rising Global Rate of Fraud Looks Like

The future is terrifying, and many people fear generative AI fraud will create losses we can’t even comprehend right now. It’s also a big problem for biometric identity and verification solutions: fraudsters can use real digital identities with slightly manipulated biometrics that appear authentic to automated processes. We’ve analyzed dozens of independent reports from leading authorities such as Deloitte, Mastercard, Juniper Research, and TransUnion. Interpol even has its own research to back it all up.

The numbers below paint a sobering picture of what’s likely to unfold in the next 5 to 10 years if we don’t make fighting fraud more agile:

Fraud Type | Expected Loss | Year(s)
Global online payment fraud losses | $91 billion | 2028
US eCommerce fraud totals | $48 billion | 2023
Global eCommerce payment fraud losses | $343 billion | 2023-2027
Global AI fraud loss (conservative scenario) | $1 trillion | 2030
Synthetic identity fraud | $23 billion | 2030
Credit card losses | $43 billion | 2026
Regulatory fines against institutions for KYC failures (including money laundering) | $5 billion | 2022

It’s time we accept generative AI as a fact of life. The next couple of years will be critical. Companies that find creative new ways to implement their own generative AI and risk decisioning techniques may come out on top, while those that delay may see mounting losses and even a mass exodus of customers (including at credit unions) toward digital experiences they can better trust.

Fighting Generative AI-Assisted Fraud: Six Ways to Adapt

Fighting generative AI fraud isn’t about trying to shut off the technology or asking customers to slow down payment processing. As consumers, we all want both security and efficiency. But those goals aren’t always in harmony when trying to stop scams, return fraud, synthetic identities, and the myriad other ways people are leveraging AI to commit illegal acts.

Instead, it’s time companies accept a few important truths:

1. Embrace New Technology to Combat Generative AI Fraud

Companies have to fight AI attacks with their own AI. Just as our team at FortifID does with a real-time knowledge fabric that analyzes massive amounts of global data, risk management groups need the same capability to keep up.

2. Prioritize Continuous Monitoring

Risk managers already live in their internal dashboards. Generative AI should enable those dashboards to analyze, detect, and report in real time, across both internal data sets and publicly available information on omnichannel fraud (scams, romance schemes, business compromises, dark web trends), without any action needed.

The platform also needs to alert teams, but it’s no good just notifying a group of analysts or sending reports to overworked managers. The risk decisioning engine must leverage this data with both traditional machine learning (especially good with highly structured data) and its own generative AI to continuously predict attacks. Risk management experts can then spend their days finding proactive solutions or reviewing alerts instead of reacting to an endless flood of incidents and losses.
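
In spirit, continuous monitoring reduces to a loop like the sketch below. This is an assumption-laden toy (the scorer, threshold, and alert function are all made up for illustration); a real deployment would consume an event stream such as Kafka and attach full case context to each alert:

```python
# Illustrative continuous-monitoring loop: score events as they stream in
# and alert immediately rather than batching reports. Names are hypothetical.
def monitor(stream, score_event, alert, threshold=0.9):
    """Score each event as it arrives; alert immediately on high risk."""
    for event in stream:               # in practice: a Kafka/stream consumer
        risk = score_event(event)      # ML and/or generative-AI risk scorer
        if risk >= threshold:
            alert(event, risk)         # page the team with full context

# Toy run: flag the event whose amount is wildly out of pattern.
monitor(
    stream=[{"amount": 20}, {"amount": 9_500}],
    score_event=lambda e: min(e["amount"] / 10_000, 1.0),
    alert=lambda e, r: print(f"ALERT risk={r:.2f}: {e}"),
)
```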

3. Stop Relying on “Single Data Point” Verification

Companies are beginning to look beyond a customer’s isolated behaviors at individual stages and adopting solutions that study the entire customer journey to spot suspicious activity before the moment of payment. This includes monitoring social media chatter, changes from how the person usually logs into an account (new browser, unknown device), and atypical browsing or shopping behaviors. All of that knowledge has to get into the system. To help build a risk profile early, institutions should consider leveraging vector databases in AI risk decisioning platforms: by transforming known-bad transaction attributes and customer behaviors into high-dimensional embeddings and storing them as vectors, you gain knowledge that can surface anomalies at scale.
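
Here is a minimal sketch of that vector idea, assuming some upstream embedding step has already turned behavior into vectors. The similarity threshold and brute-force search are stand-ins; a production system would use a real vector database with approximate nearest-neighbor search:

```python
# Hedged sketch: embed known-bad behavior, then flag new events whose
# embeddings sit close to that cluster. Data and threshold are made up.
import numpy as np

rng = np.random.default_rng(0)
known_bad = rng.random((1000, 64))        # stand-in for stored fraud embeddings
known_bad /= np.linalg.norm(known_bad, axis=1, keepdims=True)

def is_suspicious(event_vec: np.ndarray, threshold: float = 0.92) -> bool:
    """Flag events whose embedding sits close to any known-bad vector."""
    v = event_vec / np.linalg.norm(event_vec)
    return float((known_bad @ v).max()) >= threshold   # cosine similarity

# An embedding near a known-bad one gets flagged.
print(is_suspicious(known_bad[0] + 0.01 * rng.random(64)))  # True
```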

4. Focus Beyond Transactions: Understand Customer Intention

Companies are taking advantage of machine learning and generative AI’s ability to combine and analyze diverse sets of information to reduce credit card losses, incorporating “real-world knowledge” like customer reviews and opinions. You should be able to understand not just how someone is acting, but also why.
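
As a toy illustration of blending behavior with intent signals, consider the sketch below. The sentiment model, inputs, and weights are all hypothetical assumptions, not a real scoring formula:

```python
# Illustrative blend of a behavioral risk score with "real-world knowledge"
# (e.g., review or forum chatter). sentiment_risk is a hypothetical model.
def fraud_assessment(behavior_score: float, texts: list[str],
                     sentiment_risk=lambda t: 0.0) -> float:
    """Blend how someone is acting (behavior) with why (intent)."""
    intent = sum(sentiment_risk(t) for t in texts) / max(len(texts), 1)
    return 0.7 * behavior_score + 0.3 * intent  # weights are assumptions

print(fraud_assessment(0.4, ["account flips cards for profit"],
                       sentiment_risk=lambda t: 0.9))  # 0.55
```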

If you are in the credit space, for example, a platform might combine consumer chat-forum feedback with public social media chatter about a person’s character, or past online comments and consumer reactions, to form a better fraud assessment before they even apply. One recent report described an investment scheme that used a deepfaked Elon Musk to promote a fraudulent offering last summer.

5. Share Data Across Industry Entities to Stop Fraud

Risk management experts are collaborating within and outside their organizations by developing ways to instantly share fraud trends. If a suspicious synthetic identity attempt occurs at one company, a shared database should be notified instantly so that a risk decisioning platform can detect related transactions within milliseconds of a customer creating a new account. Financial institutions that use platforms built to incorporate a shared global knowledge fabric stand a fighting chance. In Hong Kong, an employee reportedly sent millions after criminals used deepfakes to impersonate company staff.
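
One privacy-conscious way to share signals is to exchange hashed indicators rather than raw PII, so peers can match on a known-bad identity without ever seeing it. The schema and salt handling below are illustrative assumptions, not an actual consortium protocol:

```python
# Sketch of privacy-preserving signal sharing between institutions.
import hashlib

def indicator(email: str, phone: str, shared_salt: bytes) -> str:
    """One-way fingerprint of identity attributes (no raw PII leaves home)."""
    raw = f"{email.strip().lower()}|{phone}".encode()
    return hashlib.sha256(shared_salt + raw).hexdigest()

shared_db: set[str] = set()        # stands in for the industry-shared store
SALT = b"consortium-agreed-salt"   # hypothetical shared secret

# Institution A reports a confirmed synthetic identity...
shared_db.add(indicator("mule@example.com", "+15550100", SALT))

# ...and milliseconds later, institution B checks a brand-new signup.
print(indicator("mule@example.com", "+15550100", SALT) in shared_db)  # True
```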

6. Human-Understandable Explanations

Platforms have to show what they’re doing in a clear, easily digested manner so fraud prevention experts, cybersecurity leaders, and compliance staff have a record. Leveraging natural language explanations and visual displays like graphs (including graph databases to spot potentially collusive relationships), instead of the wall of complex data that analysts struggled to parse in older systems, makes everyone more comfortable.
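
The graph point is easy to demonstrate. In the hedged sketch below (made-up data, using the open-source networkx library rather than a full graph database), accounts that share a device or phone number collapse into clusters a reviewer can inspect at a glance:

```python
# Link accounts through shared attributes, then surface connected clusters.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct_1", "device_A"), ("acct_2", "device_A"),   # two accounts, one device
    ("acct_2", "phone_9"), ("acct_3", "phone_9"),     # chained through a phone
])

# Accounts joined through shared attributes collapse into one component,
# a pattern a human can see at a glance instead of a wall of raw rows.
for cluster in nx.connected_components(G):
    accounts = sorted(n for n in cluster if n.startswith("acct_"))
    if len(accounts) > 1:
        print("possible collusion ring:", accounts)
```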

You have probably noticed there are two competing forces at work: customers expect a seamless user experience while also demanding sophisticated security. That’s an increasingly fine line for financial institutions to walk, and it’s what’s driving the interest and rapid development in technologies that combine powerful tools like generative AI and machine learning with human ingenuity to stop fraud before losses happen.

Conclusion

Generative AI fraud trends have shifted the landscape from rules-based and traditional machine learning approaches to platforms capable of wielding their own generative AI alongside biometric identity and global knowledge data. We don’t yet fully understand everything generative AI can do. That’s what makes this period so exciting, and so unnerving.
