The internet is abuzz with talk of the potential of generative AI and automation. But these advanced tools are also driving an influx of sophisticated scams and downstream fraud. Last year alone, consumers reported losing $2.6 billion to imposter scams, with those executed over social media and phone calls accounting for the highest losses. And while AI and automation aren't new tactics, they are rapidly advancing, making scams more difficult to detect.

To dive deeper into these fraud trends and how they're impacting businesses, Sift recently hosted a roundtable discussion with fraud experts. During the webinar, Brittany Allen, Trust and Safety Architect at Sift, was joined by Sift customers Cassandra Goerdt, Senior Manager of Payments Risk at Mindbody, and Kenneth Lau, Director of Trust & Safety at Zipcar. Together, they discussed how businesses can combat fraud driven by generative AI and bot-based attacks.

Attendees were polled live during the webinar on their experience experimenting with generative AI tools such as ChatGPT. The results showed that 78% had used an AI tool since these tools became available to the public in November 2022, demonstrating how pervasive the technology has become in a short time.

How AI and automation are making fraud more successful

Fraudsters are still leveraging existing fraud tools like robocalls, but they are also rapidly adopting new technology, such as voice cloning and deepfakes, multiplying the reach of possible attacks. Sift research found that this sharpening of fraud tactics is causing concern among consumers: 78% worry that AI will be used to defraud them. And with 68% of consumers noticing an increase in the frequency of spam and scams in the past six months, those concerns are valid.

A major factor in this surge of scams is the democratization of fraud—the increasing accessibility and ease with which anyone, regardless of technical experience, can engage in fraudulent activities. Generative AI is a great example of the democratization of fraud, since anyone can use these tools to become a fraudster and scale convincing attacks with speed. 

AI tools can strengthen attacks by writing and improving code, allowing fraudsters to build more effective malware and scrub the telltale signs of bot activity. These tools can also generate natural-sounding language, free of spelling and grammatical errors, making it difficult for the average person to recognize that synthetic media isn't authentic.
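To see why error-free, AI-generated text undermines older detection heuristics, consider a minimal sketch in Python. This is purely illustrative (the word list and scoring logic are invented for this example, not a real filter): a naive scam filter that flags messages with lots of misspellings will score polished AI-generated copy as clean.

```python
# Illustrative only: a naive scam filter that scores a message by the
# fraction of words missing from a small known-word list. Typo-riddled
# scams trip it; polished AI-generated text passes cleanly, which is why
# "look for bad grammar" no longer holds up as a signal.
KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please",
    "verify", "payment", "details", "to", "restore", "access",
}

def misspelling_score(message: str) -> float:
    """Return the fraction of words not found in the known-word list."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    unknown = [w for w in words if w and w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

# A typo-riddled scam scores high; fluent AI-generated copy scores zero.
print(misspelling_score("Yuor acount has ben suspendid, plese verfiy"))
print(misspelling_score("Your account has been suspended, please verify payment details to restore access"))
```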

These new tools are having a real-life impact on businesses. “One thing that I see being a concern particular to our vertical is the ability for fraudsters to create realistic emails, since ChatGPT gives that ability to make it much easier to create conversational emails,” said Lau.

Goerdt also anticipates difficulty ahead for fraud detection. “We always gave tips to look for bad grammar and poorly written emails as a way of telling that it’s fraud or a scam. Now AI is going to make it that much more difficult.”

During the webinar, Allen showcased a real-life example of how fraudsters are leveraging AI technology to create misleading content and even images. "I have found the LinkedIn connection requests that I receive nowadays, in the past 6 months or so, are invariably using AI-generated photos for their profile pictures," said Allen.

The image below shows two headshots from LinkedIn requests Allen received recently, posing the question: which of these images were generated by AI? Image A, image B, neither, or both?

In the poll of webinar attendees, barely half (49%) guessed correctly that both images were generated by AI. This lines up with Sift's consumer data, which found that nearly half of consumers admit it has become more difficult to identify scams in the past six months. Many aren't confident they could identify a scam created by AI, and these figures show just how difficult it can be. For these AI-generated headshots specifically, some of the telltale signs include a blurred background, shadows that don't match the pose, mismatched earrings, differently shaped ears, lopsided sideburns, an unnatural hairline, and glitchy teeth, among other clues.

Evolving fraud prevention in the age of AI and automation

Fraudsters will continue to automate and scale their attacks, but this doesn’t mean there aren’t preventative measures businesses can put in place to detect evolving fraud patterns. In a lot of ways, we’re in a perpetual cycle of evolving fraud, as Allen points out: 

“Fraud prevention still must evolve. We know that fraud changes year-over-year while we do see repeated evergreen trends, such as the use of stolen credit cards and social engineering. The methods that are employed to either get that information or to use it have to change on the fraudster’s side, because we keep taking action to hold them back.”

Especially in today's competitive market, holding onto customers is extremely important, and brand abandonment can really hurt a business's bottom line. If a customer becomes the victim of a social engineering scam and their account is taken over, 43% will walk away from that brand. It's therefore up to companies to set standards on their own platforms to keep their business and customers safe from the downstream effects of AI-generated fraud attacks.

Risk teams can also take a page out of fraudsters’ book by leveraging AI and automation in their own fraud prevention processes. More incoming fraud doesn’t have to mean more manual review, as Goerdt emphasizes: “A lot of our approach here at Mindbody is heavily manual. Lots of manual agent reviews. Sift and other vendors already do a really good job at reducing those manual review hours.”

It’s also critical to work with solution providers that are capable of keeping up with the rapidly changing landscape of fraud. “I’m really happy to see how the different tools have evolved today, and how we’ve challenged ourselves, whether it be internal tools or looking for the right partners like Sift to improve those tools in our tool belt, if you will,” said Lau. 

Businesses can leverage real-time fraud prevention and machine learning (a practical application of artificial intelligence) to prevent AI and bot-based attacks. At Sift, we use machine learning models and a global data network in our fraud prevention platform to proactively identify and block fraudulent activity.
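As a rough illustration of how ML-based risk scoring works (this is not Sift's actual models or features, which draw on far richer signals and a global data network), here is a minimal Python sketch using scikit-learn. The feature names and values are invented for the example:

```python
# A minimal, hypothetical sketch of ML-based fraud scoring.
# Assumes scikit-learn; features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-event features:
# [order_value_usd, account_age_days, failed_logins_last_hour, is_new_device]
X_train = np.array([
    [25.0, 400, 0, 0],   # legitimate
    [40.0, 720, 1, 0],   # legitimate
    [900.0, 1, 8, 1],    # fraudulent
    [650.0, 3, 5, 1],    # fraudulent
])
y_train = np.array([0, 0, 1, 1])  # 1 = confirmed fraud

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new event in real time: probability that it is fraudulent.
new_event = np.array([[480.0, 2, 6, 1]])
risk = model.predict_proba(new_event)[0, 1]
print(f"Fraud risk score: {risk:.2f}")
if risk > 0.8:
    print("Block the event or require step-up verification")
```

In practice, the scoring threshold and the response (block, review, or step-up verification) would be tuned to a business's own risk tolerance and fraud patterns.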

Watch the on-demand webinar for more insights on how AI and automation are changing the scope of fraud. 
