AI transformation

I don’t have a peer conversation these days that doesn’t at least touch on the disruptive presence and power of AI. Like every tech breakthrough before it, AI’s allure is rooted in its potential—putting business leaders in the unique, challenging position of balancing compounding risk of adoption with the untapped opportunity it offers. 

What we’re seeing now is an era of AI transformation. Like the great wave of digital transformation that dominated the past two decades, AI is also rapidly reshaping modern commerce, further disrupting how businesses and consumers interact, and changing how they define trust.

And while all technological transformation prioritizes building market share, offsetting costs through greater efficiency, and delivering superior consumer experiences, the AI transformation isn’t just a better version of the same, or a short path to an employee-less enterprise. Instead, AI bridges the many gaps between people and platforms, taking on skills and providing greater accessibility. It offers us the unique chance to rehumanize technology, making tools into trusted partners that don’t just do what we say, but learn from us, learn who we are, and tell us where we can truly go.

AI and Risk in the Enterprise: Shooting for the Moon, Landing on Mars

All that potential can be paralyzing, and it’s rocked our expectations for the here and now. Every day, most of us interact with some type of practical AI, like when we connect with customer service bots, or find our new favorite streaming show thanks to an advanced algorithm. 

Unfortunately, those same AI applications can be leveraged just as easily by fraudsters to target individual consumers and the online businesses they frequent. Sentient robots and significantly smarter systems aside, fraudsters are already using text-to-video generation and ultra-realistic deepfakes that rival the most impressive CGI. The technology’s ability to generate pristine text, accurate code, realistic audio, lifelike media, and even entire websites makes it a powerful tool for creativity and efficiency—and in the hands of digital criminals, an effective weapon for theft.

Just recently, a Hong Kong finance employee learned first-hand how convincing, and damaging, AI-fueled fraud can be. Cybercriminals used deepfake technology to impersonate his company’s chief financial officer (CFO) on a call, mimicking the CFO convincingly enough to authorize the transfer of $25 million to fraudulent accounts. And true to form, the global Fraud Economy is already producing nefarious tools built to commit AI-backed fraud, like the aptly named WormGPT. An AI platform designed for malicious content creation, it operates like legitimate AI language models (e.g., Jasper, ChatGPT), but is used to generate convincing phishing emails, fake documents, and other fraudulent content. Unlike mainstream AI models designed with ethical guidelines, WormGPT was created with no constraints, meaning no real limits and no true model predictability. Meanwhile, community-driven platforms like FlowGPT, called the ‘wild west of genAI apps’, let users share and discover prompts for AI models (nefarious or otherwise).

It all feels a bit like what Neil Armstrong might have felt on his journey to the moon—that is, if he’d stepped out of Apollo 11’s lunar module and, instead of moondust, found himself on the dusty red desert of Mars.

On the digital stage, that’s what we’re starting to figure out: our longstanding plans for what artificial intelligence would be, how it would work, and what we could do with it were actually pretty conservative. We underestimated the total power of a technology that, in addition to processing the information we give it, can understand our intentions and ideas—sometimes before we fully understand them ourselves. We also didn’t necessarily expect the power of AI to be available to virtually anyone. 

The reality is that AI is both leveling and expanding the playing field between competitors, as well as between company and criminal. It’s democratizing access to advanced computing, lowering barriers to competitive intel and data, and creating new ways for consumers and brands to interact. But with AI’s unbridled potential comes significant risk that’s constantly changing—making it especially difficult to determine where to invest and where to hold back on adding AI in-house. 

Nothing is the Biggest Threat of All

Generative AI alone is projected to become a $1.3T market by 2032, a CAGR of roughly 42% over ten years. Companies can end up paying millions of dollars for AI software that enhances internal processes and execution, and that’s still only one part of the equation. Transforming the risk posed externally by AI into a secure, sustainable revenue stream and growth lever is another.
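As a quick sanity check on that projection, the compound-growth arithmetic is easy to verify. This sketch back-calculates the implied starting market size from the source’s own figures (the ~$39B baseline is derived here, not stated in the source):

```python
# A 42% CAGR sustained for 10 years multiplies a market by (1 + 0.42)^10.
cagr = 0.42
years = 10
growth_factor = (1 + cagr) ** years  # roughly 33x

target_2032 = 1.3e12  # the projected $1.3T market size
implied_baseline = target_2032 / growth_factor  # implied starting market size

print(f"Growth factor over {years} years: {growth_factor:.1f}x")
print(f"Implied baseline market: ${implied_baseline / 1e9:.0f}B")
```

In other words, the projection implies a market on the order of tens of billions of dollars today compounding into the trillions within a decade, which is what makes the "where to invest" question so urgent.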

Before a business can own that risk-revenue equation, it’s critical to understand the nature and scope of the threat we’re talking about. I mentioned the digital transformation because, for me, it’s a historical blueprint of what’s to come, what to consider, and most importantly, what mistakes we should avoid repeating as the AI era unfolds. 

Namely, doing nothing. Leaders who have faced similar shifts in technology know that the biggest risk in innovation is inaction. Even today, enterprise companies that struggle to adopt cloud computing will spend more time and money managing on-premise infrastructure, hardware limitations, and operating costs than their competitors who took the plunge. 

But doing nothing is not an option in the era of AI. Threat actors globally have the first-mover advantage. They’ve developed nuanced applications of AI to siphon data and money from businesses and consumers more quickly and more covertly—including using deepfakes to trick liveness-detection checks in KYC (Know Your Customer) tools and crafting more convincing phishing emails. Many leaders feel ill-equipped to handle the sheer volume of fraud that AI could enable against them, and not without reason: according to analyst research, 56% of leading merchants are experiencing a substantial increase in AI-enabled fraud.

Consumers have picked up on the threat: 30% say they shop online less frequently due to the pervasive threats posed by artificial intelligence. This understanding has led executive teams across industries to embrace a clear reality: to go head-to-head against AI-generated fraud, the enterprise needs scalable, AI-powered solutions.

Architecting the AI-First Future 

Digital fraud costs businesses in every market dearly each year: direct financial losses, customer churn, lost LTV, and soured brand reputation. In 2023, the speed and scale achieved by threat actors online cost businesses an estimated $48B, with companies globally dedicating an average of 10% of their revenue solely to fraud operations.

Reluctance around AI adoption is high: 91% of leaders surveyed by MIT aren’t yet using it in a significant way. But at least some of last year’s digital risk spend was absorbed by AI, with over half of business owners reporting that they were already leveraging the technology for cybersecurity and fraud operations. Simply put, AI adoption in the enterprise is inevitable, and so is the risk it presents.

This is the rust-red landscape of Mars. We have arrived on an unexplored planet with less information than we thought, deeper risk than we anticipated, and more opportunity than we could have hoped for. Digital leaders have a remarkable chance to influence their own success with AI, to build new partnerships and networks of innovation, and to own the future. 

Now that we’ve landed on the Red Planet, we have no option but to explore the possibilities ahead—especially in utilizing AI’s analytical strength to secure the consumers and companies made vulnerable by AI’s ascension. Businesses have an opportunity to transform risk into revenue, driving growth and creating great CX through AI. The next tech frontier has arrived, and like the digital transformation it reflects, today’s early AI adopters will be the architects of what it becomes.

Are you ready?
