The internet has sparked a revolution in the way we interact with each other socially, economically – and politically. Elections are won and lost based on the strength of candidates’ social media campaigns or the fervor of their online supporters. The role of sites like Facebook and Twitter in politics, to say nothing of online news outlets, is now undeniable.
At the same time, the rising tide of data breaches and fake content calls into question the veracity of those channels. Is every social supporter legit? Are detractors really who they claim to be? As more private information is rendered public by online fraud and abuse, it’s time to confront the impact of data breaches and fraud on democracy.
An Arrhythmic Public Pulse
Any federal agency that proposes a new rule must take the public’s pulse on the issue by allowing for public comment. These online fora don’t just attract a few policy wonks: the Federal Communications Commission (FCC) opened its website to public comment on net neutrality and received over 22 million comments.
When the Pew Research Center found that half of those comments were fake, the Wall Street Journal (WSJ) began investigating other governmental agencies’ websites and found fake comments on nearly every one. For example, on a Department of Labor forum dedicated to the recent fiduciary rule, the WSJ found that a shocking 40% of comments were fake. In most cases, the comments were posted under the names of real people, but not by those real people.
Why flood a government agency’s website with fake comments? After all, while an agency must invite public comments, it’s under no obligation to read them. But public comments do help determine which rules are implemented and which are tossed aside. An influx of divisive comments can prompt Congressional or agency debate, slowing a rule’s implementation. The comments can also be used to persuade an administration or judge to reverse an existing rule.
An Ecosystem of Content Abuse
These fake comments are part of a growing ecosystem of content abuse. Businesses are scrambling to hire reviewers and build new tools to combat fake content. Last year, content fraud hit YouTube and Facebook harder than ever. Inundated by content abuse, Facebook developed an advanced machine learning solution to detect fake content. Meanwhile, Google planned to hire 10,000 analysts to help adjudicate between real and fake content on platforms like YouTube and Google Maps. Even the New York Times has invested in machine learning solutions to fight back against fake content in the comments sections of its articles.
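None of these companies has published how its detection system actually works. The general approach, though, is familiar: train a supervised text classifier on labeled examples of genuine and fraudulent content. The sketch below is only a toy illustration of that idea, using scikit-learn and invented training data; it is not a description of any company’s system.

```python
# Toy illustration of a supervised text classifier for comment moderation.
# The training examples and labels are made up for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments: 1 = likely fake/spam, 0 = likely genuine.
texts = [
    "Repeal this rule now repeal this rule now",
    "I strongly oppose this regulation and demand its repeal",
    "As a small business owner, this rule raises my compliance costs",
    "I support net neutrality because it keeps the internet open",
    "Repeal this rule now repeal this rule now",
    "This proposal would hurt rural broadband customers like me",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Should flag the duplicated template text as likely fake.
print(model.predict(["Repeal this rule now repeal this rule now"]))
```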
And the problem isn’t just disgruntled political activists trolling comments sections. To maximize their damage across content platforms, fraudsters are getting sophisticated. In fact, using bots to quickly spread fake content online is becoming the new normal.
Bots: The New Astroturf Lobbyists
Clues seem to indicate that the fraudsters who posted fake comments on the Department of Labor’s website may have created bots to do so. In many cases, hundreds or thousands of similar comments were posted within seconds of each other. Equally telling, many “authors” posted comments in alphabetical order by last name. These factors suggest a methodical, automated volley of fraudulent content.
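Neither the WSJ nor the agencies have published the detection methods involved, but both patterns are easy to check for in a raw comment dump. The sketch below is a rough illustration; the record layout, thresholds, and sample rows are hypothetical.

```python
# Toy heuristics for spotting automated comment floods in a dump of public
# comments. Field layout (timestamp, author, text) and thresholds are assumptions.
from datetime import datetime, timedelta
from collections import defaultdict

comments = [
    # (timestamp, author, text) -- illustrative rows only
    (datetime(2017, 8, 1, 12, 0, 1), "Alice Adams", "I oppose the fiduciary rule."),
    (datetime(2017, 8, 1, 12, 0, 2), "Bob Baker", "I oppose the fiduciary rule."),
    (datetime(2017, 8, 1, 12, 0, 3), "Carol Chen", "I oppose the fiduciary rule."),
]

def burst_groups(comments, window_seconds=10, min_size=3):
    """Group identical comment texts posted within a short time window."""
    by_text = defaultdict(list)
    for ts, author, text in comments:
        by_text[text.strip().lower()].append((ts, author))
    suspicious = []
    for text, posts in by_text.items():
        posts.sort()
        if (len(posts) >= min_size
                and posts[-1][0] - posts[0][0] <= timedelta(seconds=window_seconds)):
            suspicious.append((text, posts))
    return suspicious

def alphabetical_runs(comments, min_run=5):
    """Find consecutive comments whose authors' last names appear in alphabetical order."""
    ordered = sorted(comments, key=lambda c: c[0])  # sort by timestamp
    last_names = [author.split()[-1].lower() for _, author, _ in ordered]
    runs, start = [], 0
    for i in range(1, len(last_names) + 1):
        if i == len(last_names) or last_names[i] < last_names[i - 1]:
            if i - start >= min_run:
                runs.append(ordered[start:i])
            start = i
    return runs

if __name__ == "__main__":
    print(burst_groups(comments))
    print(alphabetical_runs(comments, min_run=3))
```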
In his interview with James Grimaldi of the WSJ, NPR reporter Ari Shapiro equated this practice to astroturfing. Astroturfing is a tried-and-true method of lobbying in which a company is paid to manufacture something that looks like a grassroots campaign, but is funded by larger interest groups. Essentially, the Department of Labor, the FCC, and other agencies have fallen victim to a new kind of astroturfing, one made possible by technology that has outpaced our ability to control its impact.
But fraudsters were relying on real names and email addresses — belonging to real Americans, living and dead — to commit content abuse. How did they get their hands on that information?
Spoiler Alert: It’s the Data Breaches
It didn’t take long to figure out the answer. To understand the scope of the content abuse on the FCC’s website, investigators checked the comments against a list of stolen identities from data breaches. Many of the names and email addresses matched the list. Comments were posted under the names of people from New York, Florida, Texas, and California.
Data scientist Jeffrey Fossett confirmed these findings. He randomly sampled a thousand comments from the FCC’s website, then compared them to data from HaveIBeenPwned (a popular site that keeps track of compromised accounts). Fossett found that 76% of the fake comments were posted using an identity that had been involved in at least one data breach. A whopping 66% were posted using identities that had been compromised in one particular breach: the River City Media breach from last year.
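Fossett’s exact pipeline isn’t reproduced here, but the shape of the check is simple: sample comments at random and measure what share of submitter emails show up in known breach data. The sketch below uses made-up comment records and a hypothetical local set of breached addresses in place of the real HaveIBeenPwned data.

```python
# Minimal sketch of a breach-overlap check on a comment sample.
# The comment records and breached-email set below are made up; in practice the
# lookup would run against a service like HaveIBeenPwned rather than a local list.
import random

def breach_overlap(comments, breached_emails, sample_size=1000, seed=42):
    """Return the share of sampled comments whose email appears in breach data."""
    random.seed(seed)
    sample = random.sample(comments, min(sample_size, len(comments)))
    hits = sum(1 for c in sample if c["email"].strip().lower() in breached_emails)
    return hits / len(sample)

# Illustrative usage with invented data: pretend half the addresses were breached.
comments = [{"name": "Jane Doe", "email": f"user{i}@example.com"} for i in range(5000)]
breached_emails = {f"user{i}@example.com" for i in range(0, 5000, 2)}

print(f"{breach_overlap(comments, breached_emails):.0%} of sampled comments match breach data")
```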
An Uncertain Future
Here’s what is clear: data breaches are on the rise. Here’s what isn’t: whether businesses and government agencies are taking steps to curb their effects.
In 2017, consumers and businesses woke up to the breathtaking reality that nearly every adult in the US has been impacted by a data breach. Data from sources as varied as NYU, OneLogin, Sonic (yes, the fast food chain), and an emergency siren in Dallas were exposed last year. And, of course, last year saw the Equifax data breach, which compromised data from 143 million consumers. Bell Canada, FedEx, Maersk, Oxford, the British ad agency WPP, a Russian energy company — all lost millions, sometimes billions, to data breaches.
Grimaldi of the WSJ says that the Department of Labor and the Consumer Financial Protection Bureau have expressed concern over the way data from breaches is being mobilized for political gain. Members of Congress have “inquired about it.” But whether they will work to prevent data breaches from happening in the future, or to keep this data from skewing political debates, is not certain.
For now, it falls on businesses to take the crucial step of protecting their customers from data breaches. Unlike government agencies, businesses are uniquely empowered to guard against online fraud and abuse; they are often more flexible and better resourced. More importantly, though, companies have a responsibility to protect their customers from fraud. Users trust businesses to provide safe, reliable experiences. If they fail to do so, they risk losing their users — and alienating potential users, as well. Despite numerous opportunities and strong impetus to protect their users, though, many businesses are floundering.
We now live in a world where people’s trust in democratic institutions rests in the hands of CISOs, CTOs, and CIOs. Maintaining your customers’ trust isn’t just good business sense anymore. Our free speech may depend on it.