Q2 2025 DIGITAL TRUST INDEX

Navigating Digital Trust in the Age of AI

Powered by FIBR, the Fraud Industry Benchmarking Resource


Consumer Insights on AI

Scams are not only more prevalent in 2025, they’re also easier than ever to fall for. A majority of consumers (70%) say it’s become more difficult to identify scams in the past year, a shift closely tied to the rise of GenAI. Fraudsters are using AI to craft highly convincing phishing messages, deepfakes, malicious code, and impersonation attempts that mimic trusted sources with staggering accuracy. As a result, 78% of people open AI-generated phishing emails, and 21% click on malicious links—evidence that even confident consumers are being deceived.


Tracking Consumer Concern Over AI

Consumer concern over AI-fueled fraud remains high, though notably lower than last year. In 2025, 61% of consumers say they’re extremely or very concerned about AI being used to defraud or scam them—down from 79% in 2024. This drop may reflect growing familiarity with the technology, but it also raises questions about whether consumers are desensitized to or unaware of its risks. At the same time, 52% express serious concern about the security of their personal information when using GenAI tools. These figures suggest that while general uneasiness around AI may be subsiding, apprehensions over personal data exposure remain a significant and persistent issue.


Disclosing Personal Information to GenAI Tools

Despite privacy concerns, consumers are willingly sharing their personal data with GenAI tools—likely without fully understanding the risks. 31% of consumers admit to entering personal or sensitive information into a GenAI tool. The most commonly shared data includes email addresses (55%), phone numbers (49%), home addresses (44%), and even financial information (33%). Alarmingly, 14% admitted to sharing company trade secrets, exposing both individuals and their employers to security threats.


Consumers Struggle to Spot AI Scams Despite Overconfidence

As AI-powered scams surge, many consumers remain alarmingly confident in their ability to spot them. One-third (33%) of consumers say they’re confident they could identify an AI-generated scam, yet 20% admit to falling for phishing attempts in the past year. This disconnect illustrates how convincing AI scams have become, especially with deepfakes, which increased by 4x in 2024. One in three consumers (33%) now believe someone has attempted to scam them using AI, such as a deepfake, up from 29% last year. 27% of those individuals were successfully defrauded, a sharp rise from 19% the year prior.


Consumers Recognize the Importance of Data Networks—And Would Opt In

Half (51%) of consumers understand that many businesses collaborate through trusted fraud prevention networks to securely share data and stop cybercriminals from exploiting personal information across platforms. The same percentage (51%) say they would opt in to securely share their own data with such networks as long as it’s used exclusively for fraud prevention. These findings indicate growing consumer awareness and willingness to contribute to collective security efforts.


AI has become rocket fuel for fraudsters, allowing cyberthieves to bypass detection with speed, scale, and sophistication. Businesses need to harness AI to fight the new age of fraud all while reducing friction for their legitimate consumers, which is key to establishing identity trust and profitable growth.

Alexander Hall

Trust and Safety Architect at Sift


Identity Signals: Differentiating Fraudsters from Trusted Consumers

The increased and largely unchecked adoption of GenAI makes it easier for both opportunistic consumers and professional fraudsters to exploit businesses. Moreover, the signals that characterize legitimate users and bad actors are becoming increasingly muddied. That said, the data shows there are still distinct differences between trusted and suspicious activity.

Insights from the Sift Global Data Network reveal distinct signals that indicate fraud. On average, fraudsters use 36% more payment methods than non-fraudsters, and as much as 82% more when targeting digital commerce businesses, a pattern indicative of card testing. Fraudsters also show a pattern of using fewer IP addresses: 20% fewer overall, widening to 36% in digital commerce and 24% in online gambling. While legitimate users naturally switch networks as they move between locations, fraudsters often rely on bots or VPNs that consistently cycle through a limited set of IPs. This effort to obscure their identity becomes a signal in itself, flagging potential attacks. And to act under a cloak of darkness, fraudsters launch most attacks late at night, between 10 p.m. and 5 a.m. local time, when fraud teams may be offline.
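To make these signals concrete, here is a minimal Python sketch of how a fraud team might derive them from raw event logs. The event schema, field names, and thresholds are illustrative assumptions rather than Sift’s implementation; the ratios simply echo the patterns described above.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """One user action. The schema is a hypothetical example, not Sift's."""
    user_id: str
    timestamp: datetime                 # event time, local to the user
    ip_address: str
    payment_method: str | None = None   # populated on checkout events

def identity_signals(events: list[Event]) -> dict[str, float]:
    """Aggregate the three signals discussed above for a single user."""
    payment_methods = {e.payment_method for e in events if e.payment_method}
    ip_addresses = {e.ip_address for e in events}
    night = sum(1 for e in events if e.timestamp.hour >= 22 or e.timestamp.hour < 5)
    return {
        "distinct_payment_methods": len(payment_methods),
        "distinct_ips": len(ip_addresses),
        "night_activity_share": night / len(events) if events else 0.0,
    }

def looks_suspicious(signals: dict[str, float], baseline: dict[str, float]) -> bool:
    """Illustrative heuristic only: the thresholds echo the report's averages
    (about 36% more payment methods, about 20% fewer IPs). Real systems learn
    such boundaries from data rather than hard-coding them."""
    return (
        signals["distinct_payment_methods"] >= 1.36 * baseline["avg_payment_methods"]
        and signals["distinct_ips"] <= 0.80 * baseline["avg_ips"]
        and signals["night_activity_share"] > 0.5
    )
```

In practice a model would weigh many more signals together; the point is that unusually many payment methods, unusually few IPs, and heavily nocturnal activity are each measurable from ordinary activity logs.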


Consumer Identity Signals: Generational Breakdowns

There are clear generational divides in how consumers engage with AI, perceive its risks, and attempt to protect themselves from AI-driven fraud. The more digitally fluent the generation, the more confident—but also more vulnerable—it appears to be. Gen Z and Millennials, who use AI tools most actively, report higher confidence in identifying scams, but also fall victim to fraud at the highest rates. In contrast, Gen X and Baby Boomers, who are less engaged with AI, express lower confidence but are less prone to falling for scams, thanks to more cautious online behavior.

Familiarity with AI may be creating a false sense of security. For younger, digitally native generations, comfort with and trust in new technology may be clouding their ability to recognize threats, exposing a critical gap between perceived and actual preparedness in the fight against AI-powered fraud.

[Generational profiles: Generation Z, the brash AI natives; Millennials, the over-confident digital optimists; Generation X, the wary and digitally pragmatic; Baby Boomers, the cautious traditionalists.]

AI-Powered Solutions for Securing Identity Trust

As GenAI accelerates both innovation and risk, fraud prevention must evolve just as rapidly. Today’s threats are fast-moving, adaptive, and increasingly difficult for humans to detect. With AI fraud tools already in use by over half of merchants—and adoption expected to approach 80% by the end of 2025—static rules and black-box solutions are no longer sufficient. Fraud is now networked, personalized, and dynamic, requiring transparent, real-time solutions that can adapt as quickly as the threats they face.

Future-proof fraud prevention relies on digital identity trust: understanding whether a user can be trusted based on behavior, context, and cross-platform intelligence. The AI-powered Sift Platform does this by analyzing identity signals across the entire user journey, empowering teams to make smarter, faster decisions. With Identity Trust XD, businesses can connect fragmented data into a unified view of identity, uncovering anomalies and stopping fraud before it happens. This approach not only enhances detection but also reduces friction for trusted users, turning fraud prevention into a strategic advantage.
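The “unified view of identity” can be pictured as clustering: account fragments that share an identifier (an email, a device, a card) are linked into one profile, and anomalies surface when unrelated fragments suddenly join. The union-find sketch below illustrates the idea in Python with made-up identifiers; it is a generic illustration of the concept, not Identity Trust XD’s actual data model.

```python
# A minimal union-find sketch of linking account fragments that share
# identifiers into one identity cluster. Generic illustration only.

parent: dict[str, str] = {}

def find(x: str) -> str:
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])  # path compression
    return parent[x]

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

# Each record is a set of identifiers observed together in one session.
records = [
    {"email:alice@example.com", "device:dev-42"},
    {"device:dev-42", "ip:203.0.113.9"},
    {"email:alice@example.com", "card:4111"},
    {"email:mallory@example.net", "ip:203.0.113.9"},  # shared IP links in
]

for record in records:
    ids = sorted(record)
    for other in ids[1:]:
        union(ids[0], other)

# Group every identifier under its cluster root.
clusters: dict[str, set[str]] = {}
for identifier in list(parent):
    clusters.setdefault(find(identifier), set()).add(identifier)
print(clusters)
```

Running this prints a single cluster: the shared IP pulls mallory’s fragment into alice’s profile, exactly the kind of cross-account anomaly a unified identity view exposes.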

*On behalf of Sift, Researchscape International polled 1,033 adults (aged 18+) across the United States via online survey in May 2025.

What’s New at Sift


Drive Your Business Forward with FIBR

Compare your own data against Sift benchmarks with FIBR, the first Fraud Industry Benchmarking Resource of its kind, delivering crucial fraud insights to businesses across verticals and regions.
