Have Twitter Bots Infiltrated the 2016 Election?

Some say Donald Trump has boosted his massive online following with automated accounts. But there’s more to it than that.

A different kind of bot protest in Europe  (Arnd Wiegmann / Reuters)

Donald Trump’s Twitter game is incredible. His posting frequency would make an RSS feed envious. And each tweet, unlike the dozens of campaign press advisories sent every day, has a significant chance of showing up in the news the next morning.

But what is most impressive is his ability to conjure thousands of retweets and likes from supporters on literally anything he says. Take one unremarkable example: a tweet notifying his fans that he would appear, in 20 minutes, on a show where he is frequently interviewed. This is normal and expected. And yet more than 2,000 people retweeted the post, and another 9,000 favorited it. That’s not quite Bieber-level, but it’s still pretty good.

The New York billionaire is undoubtedly popular. But some have offered a more subtle explanation for Trump’s virality. In April, Patrick Ruffini, a political digital consultant in Alexandria, Virginia, posted a spreadsheet of nearly 500 pro-Trump Twitter accounts that had tweeted, in unison, a message encouraging voters to file FCC complaints against robocalls from the Cruz campaign. Many of these accounts, Ruffini noted, had previously tweeted “17 Marketing Tips for B2B Websites.” They were bots—automated accounts that exist only to extend the social reach of whoever hires them. Twitter subsequently suspended many of them.

This isn’t the first time folks have speculated the Trump campaign hired bots to spread its candidate’s message. As for the Democrats, Hillary Clinton’s account reportedly has a million fake Twitter followers. While the numbers aren’t conclusive, it’s worth wondering: How much of the candidates’ popularity can be traced back to bots?

And does it even matter? After all, there are few things more fake than the unending cheeriness of a presidential campaign. Americans are now so familiar with the common set pieces—the crowded rally, the carefully timed roadside stop—that they’re largely taken for granted as part of the political process. But social media fakery is arguably a whole new sphere of American campaigns—one with its own dynamics that will only get more interesting in future cycles.

Faux followers can come with risks, as demonstrated by Andrés Sepúlveda, the convicted Latin American political consultant who reportedly wielded an army of 30,000 fake Twitter accounts to sway public opinion. And it would certainly deflate Trump’s persona if a substantial portion of his Twitter community turned out not to be real, especially considering how often he boasts about the size of his following and cites it among his qualifications.

“We like to say they act as a megaphone on social media,” said Clayton A. Davis, a Ph.D. student at Indiana University who studies Twitter bots. “We as humans tend to say, the more people talking about something, the more likely it is to be true. We know that that’s false, but that’s just how we work. You not only add volume, but you lend credibility to the message, when in reality, it’s really only one person.”

To test this, I used BotOrNot, a computer program Davis helped develop that leverages machine learning to determine the likelihood a given Twitter account is actually run by a computer. Funded in part by the National Science Foundation and showcased at this year’s WWW Developers Day conference, the program combs through users’ feeds, analyzing grammar, posting habits, and connections to other Twitter users. It scores accounts from zero to 100 percent, with zero being obviously a human and 100 indicating total botness. The cut-off point between the two is 47, according to Davis—a score of 48 or above means an account is more likely a bot than not.
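
To make that scoring convention concrete, here is a minimal sketch (not BotOrNot’s actual code) of how a 0-to-100 botness score maps onto a label using the 47-point dividing line Davis describes; the handles and scores are invented for illustration:

```python
# Illustrative sketch only; this is not BotOrNot's actual implementation.
BOT_CUTOFF = 47  # per Davis: a score of 48 or above means "more likely a bot than not"

def label_account(score: float) -> str:
    """Map a 0-to-100 botness score onto a coarse bot/human label."""
    return "bot" if score > BOT_CUTOFF else "human"

# Hypothetical handles with made-up scores, purely for illustration
for handle, score in [("@newsjunkie1980", 12), ("@b2b_tips_daily", 91), ("@borderline_case", 48)]:
    print(f"{handle}: score {score} -> {label_account(score)}")
```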

The detector can slip up. It often mistakes large organizational accounts like Barack Obama’s for bots, and sometimes stumbles on actual automated accounts like @netflix_bot, which any human would peg as machine-run. But that lack of human common sense is actually a strength, Davis said: Because the algorithm weighs hundreds of factors in determining botty behavior, it’s nearly impossible for bot-makers to game in the long run.

Running each candidate’s millions of followers through BotOrNot is technologically infeasible, given Twitter’s data-access limits. Instead, I drew a random sample from 270,000 retweets of the three main presidential contenders over Memorial Day weekend, testing about 11,000 of the retweeting accounts with the detector.
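
The sampling step itself is simple; here is a rough sketch of the idea (the file name and column name are hypothetical, since the actual pipeline isn’t described here):

```python
# Rough sketch of the sampling step; the file name and column name are assumptions.
import csv
import random

# Load the ~270,000 retweeting accounts collected over Memorial Day weekend
with open("memorial_day_retweets.csv", newline="") as f:
    retweeters = [row["user_screen_name"] for row in csv.DictReader(f)]

random.seed(2016)  # make the draw reproducible
sample = random.sample(retweeters, 11_000)  # the subset actually run through the detector
print(f"Testing {len(sample)} of {len(retweeters)} retweeting accounts")
```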

The results: Using a 40-percent cutoff, BotOrNot indicated around a quarter of Trump’s and Clinton’s retweeters could be bots. But if the cutoff is bumped up to 60 percent, which Davis recommends to reduce the possibility of @BarackObama-style false positives, Trump’s and Clinton’s bot shares drop to around 3 percent.
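
The gap between those two numbers is largely about where you draw the line. A toy calculation (with an invented score distribution, not the real data) shows how sharply the flagged share can fall as the cutoff moves from 40 to 60 percent:

```python
# Toy illustration of cutoff sensitivity; the score distribution below is invented.
import random

random.seed(0)
# Pretend sample: mostly low-scoring, human-looking accounts with a tail of high scorers
scores = [random.betavariate(2, 6) * 100 for _ in range(11_000)]

for cutoff in (40, 60):
    share = sum(score >= cutoff for score in scores) / len(scores)
    print(f"Cutoff {cutoff}%: {share:.1%} of sampled accounts flagged as likely bots")
```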

That is low. It may be evidence that most of Trump’s and Clinton’s retweets are generated by real people. But the bigger revelation concerns Sanders. Even though his Twitter account is almost as popular as Trump’s, with run-of-the-mill posts getting 1,000 retweets, it appears his audience includes even fewer bots. Only 1.7 percent of the people who retweeted him registered as likely bots under the 60-point standard, roughly half the rate for Trump and Clinton.

Sanders has both an enthusiastic fan base and a slightly lower profile than his two opponents, perhaps making him less likely to be spammed by unrelated marketing bots. But BotOrNot’s results also indicate his following might be ever so slightly more genuine than his opponents’—so when one of his tweets goes viral, one can be a bit more sure it isn’t some botnet ginning up interest.

The distinction between a bot and a human isn’t always clear. Some accounts with high bot scores actually appear to be manned by a human and aided by automation, mixing original thoughts with rapid-fire retweets. Even users that are clearly bot-powered (how else does an account rack up 75,000 favorites?) gave decidedly human responses when queried. The worst of these are a bit like a political campaign: just enough humanity to evoke empathy, but with all the efficiency and cost-effectiveness of a machine. But some are just real people doing their part to support their candidate, even if that means incessantly retweeting Trump’s O’Reilly appearances.

The bot-human hybrid is the future of election-year Twitter. The news feed, like ice-cream shops in battleground states, will be colonized by the modern political campaign for the digital equivalent of a photo op. But if BotOrNot is right, most of that is yet to come. For all its bot-hybrids, Twitter still appears relatively free from political manipulation, at least for the moment.

Andrew McGill is a former senior product manager at The Atlantic.