Current Trends in AI Fraud

Artificial intelligence (AI) has rapidly emerged as one of the most transformative technologies of our time, reshaping the internet and the global economy. As AI’s capabilities expand, it’s caught the eye of fraudsters looking to exploit its potential for their own illicit gain.


Cybercriminals are leveraging generative AI tools like ChatGPT to conduct increasingly sophisticated and scalable fraud attacks against both individuals and businesses. The technology’s ability to generate flawless text, code, realistic audio, images, videos, and even entire websites makes it a powerful tool in the hands of malicious actors. For companies in today’s digital market, the only way to keep up is to follow suit and fight AI-fueled fraud with advanced, AI-powered technology.


AI-Driven Fraud Impacts Businesses and Consumers Alike

There’s a clear connection between the increasing accessibility of GenAI and rising content fraud like spam and scams. Across the Sift Global Data Network, blocked user-generated content rose by 22% in Q1 2024 vs. Q1 2023. Fraud and risk professionals surveyed* by Sift also indicated they’ve been dealing with surging fraud, with more than three-quarters saying their businesses have been targeted by AI-backed attacks. Consumers surveyed** by Sift mirror this experience, with more than seven out of ten noticing an uptick in spam and scams. These rises in blocked content and consumer scam reports are likely due in large part to the emergence and growing adoption of GenAI tools by fraudsters, showing how pervasive the technology has become in the Fraud Economy.

Fraudulent content and AI-fueled scams spread across the internet

22% increase in blocked content in Q1 2024 vs. Q1 2023 across the Sift Global Data Network
Over 70% of consumers have noticed an increase in spam and scams in the last year
Over 75% of fraud and risk professionals believe their business has been targeted by AI fraud

How AI is Being Leveraged Across Fraud Tactics

Fraudsters are experimenting with various applications of AI. GenAI tools may commonly be associated with content scams like phishing attacks, social engineering scams, business email compromises, and deepfakes leveraging AI voice and image cloning, but they’re increasingly being used to streamline and strengthen other forms of attacks, such as account takeovers (ATO) and payment fraud.


As AI enhances social engineering attacks, it’s driving a rise in successful account takeovers, in which attackers ultimately aim to drain stored value or commit financial fraud using stored payment information.

Top types of AI-generated fraud targeting businesses*

Uncovering the Full Scope of AI Fraud

AI-fueled fraud is impacting the majority of businesses, but at different rates. Certain industries, like marketplaces, dating apps, and social media sites, may be more prone to AI-driven attacks due to their reliance on user-generated content (UGC). While AI fraud doesn’t yet fuel the majority of the fraud types businesses face, it’s becoming a more frequent part of how attacks are executed, with over half of fraud and risk professionals reporting that they deal with this form of fraud on a daily or weekly basis.

Percentage of total fraud attacks estimated by businesses to be driven by AI

AI Poses a Threat to Businesses

Four out of five fraud and risk professionals report that AI-fueled fraud is a threat to their business, and for good reason. The power and speed of AI allow scammers to target companies and consumers in highly personalized ways by generating convincing fake content and automating attacks. By employing GenAI tools to craft sophisticated phishing emails, manipulate photos and videos, and clone voices, fraudsters can more easily impersonate trusted customers and businesses to access sensitive information or account credentials. These scams can circumvent traditional fraud detection techniques, posing significant risks such as financial losses and reputational damage.

AI is Making it Easier for Fraud to Go Undetected

Both fraud and risk professionals and consumers say it’s become more challenging to identify fraud, due in large part to how convincing scams have become with the application of AI. Traditional scams, once riddled with telltale typos and syntax errors, are now far harder to detect with AI’s help. The data shows that scams are more successful than ever before: according to the FTC’s Consumer Sentinel Network Data Book, consumers lost $10 billion to scams in 2023, the highest total annual losses ever reported to the FTC.

Fraud-as-a-Service Expands the Reach of AI-Fueled Attacks

As fraudsters develop new applications of AI to strengthen their attacks, their methods can be easily replicated, shared, and sold through Fraud-as-a-Service (FaaS) models on the deep web. This form of DIY fraud enables those without technical expertise to participate in fraudulent activity, creating a multiplier effect that drastically widens the scope of attacks on businesses and consumers. By incorporating GenAI into these tactics, more bad actors can successfully target victims at greater speed.


For example, fraudsters have launched various large language model (LLM) copycat tools designed to create malicious content like phishing emails and restriction-free malware. Two of the most common of these nefarious tools, FraudGPT and WormGPT, are advertised and sold on a subscription basis across the dark web, as well as on messaging platforms like Telegram.


Mechanics of an AI Scam: The AI Caller

Even fraudsters are tired of cold calling. A new FaaS offering has popped up on Telegram, selling access to an AI Caller that lets cybercriminals target crypto customers: the AI impersonates the company’s legitimate customer service team and converses with the targeted customer. The tool automates and strengthens phishing attacks by prompting the victim to provide their one-time password after an alleged two-factor authentication (2FA) change.


Step 1 - Productize

A fraudster creates an AI Caller product designed to call a batch of target phone numbers and let AI do the talking.

Consumers Express Concern Over AI Scams

Most consumers (73%) believe they’d be able to identify an AI scam if they encountered one online, but many may be overconfident in their scam-spotting abilities. The majority of consumers (nearly four out of five) indicate they’re concerned AI will be used to defraud or scam them. Consumers are increasingly being exposed to misleading AI-generated images and imposter websites, as well as pig butchering scams leveraging deepfake face-swapping and GenAI-enabled conversations, which fool even the most digitally savvy consumers.

Younger Generations Get Scammed at a Higher Rate

Nearly three-quarters of consumers believe they could identify a scam created using AI, but more than one-fifth are still falling for scams, and this dichotomy is even more pronounced in younger generations. Millennials and Gen Zers report more confidence in their ability to identify AI scams, but the data shows they’re actually scammed at a higher rate than baby boomers and Gen Xers. These younger generations tend to be more trusting of sites and apps, and less concerned about securing their information online. In fact, five in 10 Gen Zers and millennials say they trust online services to protect their data, versus just three in 10 older consumers. The study also found that Gen Zers were three times more likely to get scammed online than baby boomers.

Bar chart: confidence in spotting AI scams vs. actual scam rates, by generation

Generative AI Scams on the Rise

Nearly a third of consumers say someone has tried to defraud them using GenAI, and nearly one-fifth of those fell for the scam. This aligns with the 13% of consumers who say they’ve entered personal data or other sensitive information into a GenAI tool, increasing the likelihood that their personally identifiable information (PII) will be used to phish them or to gain access to their accounts and payment methods.


AI is rapidly being adopted as a fraud tool, and businesses that haven't begun leveraging this technology are at a heightened risk of exploitation. Now is the time to invest in AI, as it's the only technology capable of countering the threats posed by AI itself.

Brittany Allen

Senior Trust and Safety Architect at Sift

Fighting AI-Fueled Fraud with AI-Powered Solutions

More businesses are turning to AI solutions for their fraud and risk prevention needs. Gartner forecasts that spending on AI software will grow to $297.9 billion by 2027, with a compound annual growth rate of 19%. As AI-fueled fraud grows more sophisticated and costly to businesses, there’s a race to increase investments in secure fraud solutions.

According to the MRC, merchants typically adopt one to two AI/ML-based fraud tools or applications, and the use of these advanced solutions is expected to increase this year. Additionally, 55% of merchants identify improving the accuracy of AI/ML as the top priority for enhancing fraud management over the next year. Sift also found that fraud and risk professionals are looking at multiple methods to combat AI-powered fraud, including investing in additional tools, adopting ML and AI solutions, and adding headcount.
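To make the idea of an AI/ML-based fraud tool concrete, the sketch below shows one common pattern such tools use: combining weighted risk signals into a single probability-like score that drives an accept/review/block decision. All signal names, weights, and thresholds here are hypothetical illustrations, not Sift’s (or any vendor’s) actual model.

```python
# Minimal sketch of ML-style fraud scoring: a logistic combination of
# risk signals. Every signal, weight, and threshold below is a made-up
# example for illustration only.
import math

# Hypothetical weights a model might learn for each risk signal
WEIGHTS = {
    "ip_country_mismatch": 1.8,  # IP geolocation differs from billing country
    "new_device": 0.9,           # first login from this device fingerprint
    "velocity_flag": 1.4,        # unusually many attempts in a short window
    "disposable_email": 1.1,     # address from a throwaway email provider
}
BIAS = -3.0  # baseline assumption: most events are legitimate

def fraud_score(signals: dict) -> float:
    """Return a 0-1 risk score via a logistic (sigmoid) combination."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return 1 / (1 + math.exp(-z))

def decide(signals: dict, block_threshold: float = 0.7) -> str:
    """Map the score to an action; mid-range scores go to manual review."""
    score = fraud_score(signals)
    if score >= block_threshold:
        return "block"
    if score >= 0.4:
        return "review"
    return "accept"

if __name__ == "__main__":
    risky = {"ip_country_mismatch": True, "velocity_flag": True, "new_device": True}
    safe = {"new_device": True}
    print(decide(risky), decide(safe))  # risky scores ~0.75, safe ~0.11
```

In production systems the weights come from models trained on labeled outcomes rather than being hand-set, and the "review" band is what lets teams tune the trade-off between blocking fraud and inconveniencing trusted customers.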

How fraud & risk professionals are combatting AI-generated fraud

Grow Fearlessly with AI-Powered Fraud Decisioning

As scammers continue leveraging AI to carry out scalable schemes, businesses must adopt advanced, multilayered fraud solutions to stay protected at every step of the user journey. Sift’s AI-powered platform transforms fraud prevention into a competitive advantage by offering reliable user-level insights that enable businesses to increase acceptance rates for trusted customers while proactively detecting risks and evolving fraud patterns.


By harnessing the power of AI, Sift provides a robust defense against AI-fueled fraud, safeguarding customer trust and enhancing brand loyalty by automatically preventing suspicious activities. With patented technology and Clearbox Decisioning, Sift offers deep transparency and adaptability, enabling businesses to make smarter decisions and confidently expand their user base while mitigating risk. Sift’s deep heritage in machine learning and user identity is backed by a data network scoring 1 trillion events per year and a commitment to long-term customer success.

*Sift polled 123 global fraud and risk professionals via online survey in May 2024.
**On behalf of Sift, Researchscape International polled 1,066 adults (aged 18+) across the United States via online survey in May 2024.