Q2 2024 DIGITAL TRUST INDEX

Managing Risk in the Era of AI-Fueled Fraud

Powered by FIBR, the Fraud Industry Benchmarking Resource

Consumer-Driven Insights on AI-Fueled Fraud

As fraudsters develop new applications of AI to strengthen their attacks, their methods can be easily replicated, shared, and sold through Fraud-as-a-Service (FaaS) models on the deep web. This form of DIY fraud enables people without technical expertise to participate in fraudulent activity, creating a multiplier effect that drastically widens the scope of attacks on businesses and consumers. As GenAI is incorporated into these tactics, more bad actors will be able to target victims successfully and at greater speed.

For example, fraudsters have launched various large language model (LLM) copycat tools designed to create malicious content, like phishing emails and restriction-free malware. FraudGPT and WormGPT are two of the most common of these nefarious tools; they are advertised and sold on a subscription basis across the dark web, as well as on deep web messaging platforms like Telegram.


Mechanics of an AI Scam: The AI Caller

Even fraudsters are tired of cold calling. A new FaaS offering has popped up on Telegram, selling access to an AI Caller that lets cybercriminals target crypto customers, using AI to impersonate the company’s legitimate customer service line and converse with the targeted customer. The tool automates and strengthens phishing attacks by prompting the victim to provide their one-time password after an alleged two-factor authentication (2FA) change.


Step 1 | Productize

A fraudster creates an AI Caller product designed to call a batch of target phone numbers and let AI do the talking.


Step 2 | Promote

The fraudster advertises their AI Caller service on Telegram, charging an upfront fee plus a per-minute usage rate.


Step 3 | Customize

Other fraudsters who purchase the service can select from different voices, languages, tasks, and well-known crypto exchanges to convince the victim that they’re speaking with a legitimate customer service representative confirming a 2FA change.


Step 4 | Engage

When the victim engages with the chatbot, they’ll be forwarded to the fraudster, who will ask them to provide their one-time password (OTP).


Step 5 | Defraud

The fraudster then uses the OTP to log into the victim’s account and transfer their crypto assets to an account under the fraudster’s control.


Consumers Express Concern Over AI Scams

Most consumers (73%) believe they’d be able to identify an AI scam if they encountered one online, but many may be overconfident in their scam-spotting abilities. Nearly four out of five consumers say they’re concerned AI will be used to defraud or scam them. Consumers are increasingly exposed to misleading AI-generated images, imposter websites, and pig butchering scams that leverage deepfake face-swapping and GenAI-enabled conversations, fooling even the most digitally savvy consumers.

Younger Generations Get Scammed at a Higher Rate

Nearly three-quarters of consumers believe they could identify a scam created using AI, yet more than one-fifth are still falling for scams, and this dichotomy is even more pronounced in younger generations. Millennials and Gen Zers report more confidence in their ability to identify AI scams, but the data shows they’re actually scammed at a higher rate than baby boomers and Gen X. These younger generations tend to be more trusting of sites and apps, and less concerned about securing their information online. In fact, five in 10 Gen Zers and millennials say they trust online services to protect their data, versus just three in 10 older consumers. The study also found that Gen Zers were three times more likely to get scammed online than baby boomers.


Generative AI Scams on the Rise

Nearly a third of consumers report that someone has tried to defraud them using GenAI, and nearly one-fifth of those fell for the scam. This tracks with the 13% of consumers who say they’ve entered personal data or other sensitive information into a GenAI tool, increasing the likelihood that their personally identifiable information (PII) will be used to phish them or to gain access to their accounts and payment methods.
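As a back-of-envelope illustration, taking the report’s rounded figures ("nearly a third" and "nearly one-fifth") as assumptions, the implied share of all surveyed consumers who actually fell for a GenAI scam works out to roughly one in fifteen:

```python
# Illustrative calculation using the report's rounded survey figures
# as assumptions; the exact underlying percentages are not published here.
attempt_rate = 1 / 3   # "nearly a third" experienced a GenAI fraud attempt
fall_rate = 1 / 5      # "nearly one-fifth" of those fell for the scam

victim_rate = attempt_rate * fall_rate
print(f"Implied overall victimization rate: {victim_rate:.1%}")  # about 6.7%
```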


AI is rapidly being adopted as a fraud tool, and businesses that haven’t begun leveraging this technology are at a heightened risk of exploitation. Now is the time to invest in AI, as it’s the only technology capable of countering the threats posed by AI itself.

Brittany Allen

Senior Trust and Safety Architect at Sift

Fighting AI-Fueled Fraud with AI-Powered Solutions

More businesses are turning to AI solutions for their fraud and risk prevention needs. Gartner forecasts that spending on AI software will grow to $297.9 billion by 2027, with a compound annual growth rate of 19%. As AI-fueled fraud grows more sophisticated and costly to businesses, there’s a race to increase investments in secure fraud solutions.
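To put the cited Gartner forecast in context, the standard compound annual growth rate (CAGR) formula can back out an implied base-year figure. Assuming, purely for illustration (the source does not state the base year), that the 19% CAGR runs from 2024 to 2027:

```python
# Sanity-check of the cited forecast using the CAGR formula:
# future = base * (1 + rate) ** years, so base = future / (1 + rate) ** years.
# The 2024 base year is an assumption for illustration only.
target_2027 = 297.9              # forecast AI software spend, $B
cagr = 0.19                      # 19% compound annual growth rate
years = 2027 - 2024

implied_2024 = target_2027 / (1 + cagr) ** years
print(f"Implied 2024 base: ${implied_2024:.1f}B")  # about $176.8B
```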

According to the MRC, merchants typically adopt one to two AI/ML-based fraud tools or applications, and the use of these advanced solutions is expected to increase this year. Additionally, 55% of merchants identify improving the accuracy of AI/ML as the top priority for enhancing fraud management over the next year. Sift also found that fraud and risk professionals are looking at multiple methods to combat AI-powered fraud, including investing in additional tools, adopting ML and AI solutions, and adding headcount.

How fraud & risk professionals are combatting AI-generated fraud

Investing in additional tools

Investing in ML & AI

Adding additional headcount

Grow Fearlessly with AI-Powered Fraud Decisioning

As scammers continue leveraging AI to carry out scalable schemes, businesses must adopt advanced, multilayered fraud solutions to stay protected at every step of the user journey. Sift’s AI-powered platform transforms fraud prevention into a competitive advantage by offering reliable user-level insights that enable businesses to increase acceptance rates for trusted customers while proactively detecting risks and evolving fraud patterns.

By harnessing the power of AI, Sift provides a robust defense against AI-fueled fraud, safeguarding customer trust and enhancing brand loyalty by automatically preventing suspicious activities. With patented technology and Clearbox Decisioning, Sift offers deep transparency and adaptability, enabling businesses to make smarter decisions and confidently expand their user base while mitigating risk. Sift’s deep heritage in machine learning and user identity is backed by a data network scoring 1 trillion events per year and a commitment to long-term customer success.


*Sift polled 123 global fraud and risk professionals via online survey in May 2024.

**On behalf of Sift, Researchscape International polled 1,066 adults (aged 18+) across the United States via online survey in May 2024.

What’s New at Sift


Drive Your Business Forward with FIBR | Powered by Sift

Sift’s one-of-a-kind Fraud Industry Benchmarking Resource lets you compare your payment fraud, fraudulent chargeback, account takeover, and manual review rates against Sift benchmarks by industry and region.

Discover Data