Advancements in technology have significantly empowered fraud teams, but cybercriminals are becoming increasingly adept at using that same technology to bypass businesses’ preventative measures. Today, it’s easier than ever for anyone with an internet connection to engage in malicious activity, as fraud tools and technologies become more widely accessible.
This growing accessibility is often described as the democratisation of fraud: individuals no longer need technical expertise to carry out fraudulent acts, because deep and dark web marketplaces put fraud guides, stolen data, “fraud-as-a-service” tools, and increasingly capable generative AI within easy reach.
In this blog, we’ll explore these emerging fraud threats and share best practices for effective fraud detection.
Understanding the Democratisation of Fraud
The term “democratisation of fraud” refers to the ease with which individuals, regardless of technical expertise, can commit fraudulent acts. This trend is driven by:
- Technological Innovation: Tools and platforms that simplify complex fraud techniques.
- Information Availability: Online forums and tutorials that educate potential fraudsters.
- Evolving Fraud Tactics: Continuous adaptation of methods to exploit new vulnerabilities.
Much of this activity takes place on the deep and dark web, where marketplaces offer fraud guides, sets of personally identifiable information (PII), “fraud-as-a-service” tools, and various phishing services. The emergence of generative AI tools has further enabled sophisticated, AI-driven fraud schemes.
Fraud-as-a-Service (FaaS)
The deep and dark web host a growing number of on-demand tools and services that anyone intent on committing fraud can purchase. This model, known as fraud-as-a-service (FaaS), packages how-to guides, technologies, and tactics for resale to other fraudsters.
Platforms like Telegram have become hubs for this activity. For instance, fraudsters advertise services that place food and beverage orders at steep discounts for buyers, paid for with stolen payment information.
The Rise of First-Party Fraud in the UK
The surge in e-commerce has compelled merchants to adopt digital experiences to remain competitive. Unfortunately, fraudsters exploit online businesses through various forms of fraud that require minimal technical skills, including first-party fraud.
First-party fraud, or “friendly fraud,” occurs when legitimate cardholders make purchases and later dispute them as unauthorized in order to keep the goods while receiving a refund. Alarmingly, nearly half (48%) of UK adults believe it’s reasonable to commit first-party fraud.
First-party fraud exemplifies the democratisation of fraud: it requires no technical expertise, and deep and dark web forums provide guides that make the process even simpler for would-be fraudsters.
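On the defensive side, one widely used signal is a customer’s own dispute history. The sketch below is a minimal, hypothetical illustration of how a merchant might flag accounts whose chargeback pattern points to possible first-party abuse; the field names and thresholds are assumptions for demonstration, not any vendor’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class CustomerHistory:
    customer_id: str
    total_orders: int
    disputed_orders: int       # "item not received" / "unauthorized" claims
    disputes_with_delivery_proof: int  # disputes filed despite delivery confirmation

def first_party_fraud_risk(history: CustomerHistory,
                           dispute_rate_threshold: float = 0.15,
                           min_orders: int = 5) -> bool:
    """Flag accounts whose dispute pattern warrants manual review.

    Thresholds are illustrative; real systems tune them against
    labelled chargeback outcomes.
    """
    if history.total_orders < min_orders:
        return False  # not enough purchase history to judge

    dispute_rate = history.disputed_orders / history.total_orders
    # Disputes raised despite proof of delivery are a stronger indicator
    contested_with_proof = history.disputes_with_delivery_proof > 0

    return dispute_rate >= dispute_rate_threshold or contested_with_proof

# Example: a repeat customer who has disputed 3 of 10 orders,
# one of which had a signed delivery confirmation.
print(first_party_fraud_risk(CustomerHistory("cust_123", 10, 3, 1)))  # True
```

In practice, merchants weigh this kind of signal alongside delivery evidence, order value, and account age before deciding whether to contest a dispute.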
Accessibility of “Fullz” on the Dark Web
Numerous dark web marketplaces sell comprehensive sets of PII, colloquially termed “fullz.” These datasets enable identity theft or payment fraud by providing full names, addresses, credit card numbers, CVV codes, and expiration dates.
Such information is typically obtained through data breaches, malware, or phishing campaigns. The availability of fullz allows even less tech-savvy criminals to purchase and misuse credit card information without directly engaging in data theft.
Phishing and Account Takeovers (ATOs) in the UK
Account takeovers involve unauthorized access to user accounts, often achieved through stolen credentials obtained via phishing or purchased from dark web marketplaces.
Once compromised, these accounts can facilitate further attacks, fraudulent transactions, or data theft. The UK has witnessed a significant uptick in such activities, with a 76% increase in account takeover cases reported in 2024.
Phishing campaigns and ATOs have become more accessible through phishing-as-a-service platforms, one-time password (OTP) bots, and generative AI tools. Criminals can now purchase phishing kits that require no technical knowledge, some of which can bypass multi-factor authentication (MFA).
AI tools have also made phishing emails and scams far more convincing. Notably, the LabHost platform, operational since 2021, enabled over 2,000 criminals to create phishing websites, victimizing approximately 70,000 British individuals.
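Defenders typically counter account takeovers by scoring each login attempt against the account’s established behaviour. The sketch below is a simplified illustration of that idea; the signals, weights, and thresholds are assumptions chosen for demonstration, not a production rule set.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    account_id: str
    device_fingerprint: str
    country: str
    failed_attempts_last_hour: int

@dataclass
class AccountProfile:
    known_devices: set[str]
    usual_countries: set[str]

def ato_risk_score(attempt: LoginAttempt, profile: AccountProfile) -> float:
    """Combine simple account-takeover signals into a 0-1 risk score.

    Weights are illustrative; a real system would learn them from
    labelled takeover cases across a much larger feature set.
    """
    score = 0.0
    if attempt.device_fingerprint not in profile.known_devices:
        score += 0.4  # unrecognised device
    if attempt.country not in profile.usual_countries:
        score += 0.3  # login from an unfamiliar country
    if attempt.failed_attempts_last_hour >= 5:
        score += 0.3  # credential-stuffing style burst of failures
    return min(score, 1.0)

profile = AccountProfile(known_devices={"dev_abc"}, usual_countries={"GB"})
attempt = LoginAttempt("acct_42", "dev_zzz", "RO", failed_attempts_last_hour=8)
print(ato_risk_score(attempt, profile))  # 1.0 -> step up authentication or block
```

A high score would typically trigger the kind of step-up authentication or blocking discussed in the next section.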
Mitigating the Impact of Democratised Fraud
Businesses should expect fraud to continue evolving faster and presenting a greater threat. That’s why it’s important for merchants to implement forward-looking strategies and technology capable of adapting to this changing landscape, such as a real-time global network of fraud data. Machine learning provides the best approach, detecting patterns of abuse and surfacing signals drawn from a diverse network.
From account defense to payment protection to dispute resolution, machine learning can detect anomalies that are indicative of suspicious activity. Decision engines can then apply custom rules based on risk scores, such as introducing dynamic friction (e.g., enforcing MFA) or blocking risky transactions outright.
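As a concrete illustration of that kind of decisioning, the sketch below maps a model-produced risk score to an action. The score bands and thresholds are assumptions for illustration only, not a reference to any specific decision engine.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP_MFA = "step_up_mfa"  # dynamic friction
    BLOCK = "block"

def decide(risk_score: float,
           mfa_threshold: float = 0.6,
           block_threshold: float = 0.9) -> Action:
    """Map a model-produced risk score (0-1) to a decision.

    Thresholds are illustrative; in practice they are tuned per
    workflow (login, checkout, dispute) against the business's
    tolerance for friction and losses.
    """
    if risk_score >= block_threshold:
        return Action.BLOCK
    if risk_score >= mfa_threshold:
        return Action.STEP_UP_MFA  # e.g. require MFA before proceeding
    return Action.ALLOW

print(decide(0.72))  # Action.STEP_UP_MFA
```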
Tapping into the power of a global data network is critical for detecting these fraud signals. For example, Sift’s global data network ingests more than one trillion events per year, improving fraud detection accuracy by 40%.