Bloomberg’s online tactics test the boundary of disinformation

First came the heavily edited video of Democratic candidates looking speechless at a debate as Mike Bloomberg pointed out he was the only one of them who had started a business. That was followed last week by tweets of fake quotes attributed to Bernie Sanders praising dictators.

And shortly before that came news that the Bloomberg campaign was paying social media influencers to hype the billionaire, a novel move by a presidential candidate that was never contemplated by election law.

In isolation, each of the tactics might appear harmless. Lighten up, Bloomberg’s campaign has responded to complaints about the videos and fake quotes: of course it’s tongue-in-cheek humor intended to make a point. There is no nefarious intent, the campaign said.

“Unlike Donald Trump and the [Republican National Committee], our campaign will not be using disinformation tactics to engage voters,” said Sabrina Singh, a spokesperson for Bloomberg.

But taken together, Bloomberg’s moves are testing the boundary between edgy campaign fare and disinformation, as well as the limits of regulated political advertising, an array of nonpartisan technology experts tracking the 2020 candidates’ online efforts told POLITICO. And the moves risk leading Democrats into perilous, anything-goes territory in the brave new world of online campaigning, the experts said.

“This is absolutely dangerous for the fair functioning of our political process,” Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Harvard Kennedy School, said of the video and tweets posted by Bloomberg. “And it could very well send the Democrats down the slippery slope of disinformation.”

That would further erode discourse online and contribute to an already distrustful electorate, he added.

Republicans including President Donald Trump have increasingly used disinformation online. One glaring example was a doctored video of a stammering Speaker Nancy Pelosi that Trump’s lawyer Rudy Giuliani circulated on his Twitter account in May and then deleted. The Atlantic ran a lengthy magazine piece under the headline: “The Billion-Dollar Disinformation Campaign to Reelect the President.”

In response, Democrats have debated whether to engage in similar tactics. With his virtually unlimited budget, Bloomberg has amassed an enormous digital operation: He is the only Democratic hopeful with the financial wherewithal to even begin to challenge the sophisticated digital machine Trump has built. And the former New York City mayor, eager to distill what will resonate with voters, is testing tactics and messages more aggressively than any of his Democratic rivals.

But the campaign’s flirtation with disinformation and its use of paid-for social media to spread his message have also raised questions about the lack of regulation applied to politicians and their campaigns on platforms like Twitter and Instagram.

Bloomberg drew notice among technology experts when he released a spliced-up video of footage from the Nevada Democratic debate last month. The video was meant to capture a question Bloomberg posed to his fellow candidates, pointing out that he was the only one who had started a business, but it exaggerated the length of their silence and used clips of their facial reactions from different moments of the debate. The video included the sound of crickets to signal it was a joke.

Days later, Bloomberg’s official Twitter account posted a string of fake Sanders quotes about dictators, poking fun at the candidate’s actual praise of Fidel Castro for improving literacy in Cuba. “Bashar al-Assad has committed countless war crimes against his own people, but let’s not forget how he introduced paper recycling to reduce municipal waste,” read one of six quotes attributed to Sanders on Twitter with the hashtag #Bernieondespots.

Two of the tweets included a disclaimer that they were “satire,” but the others did not. Bloomberg’s campaign said it deleted all of the tweets because the pushback was swift and it had never intended to offend anyone. At the same time, the campaign disagreed that the tweets amounted to disinformation.

“Whether it’s a joke or not is not the question,” said Ghosh. “The question really should be, does this serve to potentially mislead a constituent, even one constituent?”

The danger with tweets like the fake Sanders quotes, said Mike Caulfield, head of the Digital Polarization Initiative for the American Democracy Project, is that a medium like Twitter “atomizes” posts — meaning a user will likely only view one post without context, never seeing the campaign’s claim that it was joking.

Only two candidates — Joe Biden and Elizabeth Warren — have issued official pledges to not use illicit tactics. In June, Biden promised no bots, deep fakes or disinformation. Warren has openly feuded with Facebook over its standards for allowing false statements in political advertising, and she released a policy proposal to crack down on disinformation.

Combating the expansive, fast-moving seep of disinformation into American elections isn’t about being “pollyannaish about blood sport in politics,” said Graham Brookie, head of the Atlantic Council’s Digital Forensic Research Lab. It’s about being “extremely vigilant about anything that would be potentially misleading.”

Brookie didn’t consider the edited debate video disinformation, but said it walked up to the line. He did, however, call the Bloomberg tweets attributing fake quotes to Sanders “outright disinformation.”

“While a number of consumers might identify it appropriately as satire, you can’t depend on an audience to clearly and coherently consume information,” Brookie said.

As social media becomes a “more pervasive part of day to day life,” said Paul Barrett of New York University’s Stern Center for Business and Human Rights, the potential damage caused by disinformation becomes greater. People consuming information on social media are less discriminating, he said, creating more opportunity for them to be misled, which fosters a dangerous cynicism.

“It’s the kind of attitude that men and women on the street in Moscow have, like, ‘We can’t tell what’s true or not, whether what Putin says makes any sense, so we’re not even going to try,’” said Barrett, the author of a September report on disinformation in the 2020 election. “And you don’t want Americans to get to that stage because then we’re really subject to being exploited.”

Bloomberg’s payments to social media influencers raise different issues. To reach millions of potential voters, the campaign is paying Instagram influencers who run meme pages, which post humorous images, videos or texts that spread rapidly among internet users.

“While a meme strategy may be new to presidential politics, we’re betting it will be an effective component to reach people where they are and compete with President Trump’s powerful digital operation,” said Singh, Bloomberg’s spokesperson.

The influencers don’t fall under the disinformation debate, at least to date, but they’ve reignited a separate one about government regulation of campaigns’ use of social media. Bloomberg’s campaign said it follows Federal Trade Commission guidelines, telling the influencers it pays to disclose that Bloomberg paid for the content they’re posting.

“We push for maximum possible transparency across all of our content,” said Singh. The campaign has asked its influencers to use the branded-content disclosure tools each platform provides, but some companies limit which kinds of accounts can use those specialized tools. Bloomberg’s saturation of social media with paid-for memes prompted Facebook and its Instagram subsidiary to revise their initial ban on all sponsored political content to allow “branded content” from political candidates.

But in the political space, the FTC in fact has no jurisdiction, said Jay Mayfield, a spokesman for the agency, meaning it wouldn’t regulate Bloomberg’s online campaign tactics.

That would fall to the Federal Election Commission. But the FEC is charged with enforcing regulations that were last changed in 2006. Facebook was two years old then, Twitter had yet to develop a retweet function, and Instagram didn’t exist.

Moreover, “commission regulations do not explicitly address social media influencers,” Myles Martin, an FEC spokesperson, said in an email. That leaves campaigns to interpret old regulations as they please, with no real investigative or enforcement body overseeing them.

FEC guidelines do state that public communications that “advocate the election or defeat of a candidate for federal office and placed on the Internet for a fee must include a disclaimer” identifying who paid for the communication.

“The FEC has internet regulations that date from the flip-phone era and the actual law dates back to the era of teletype,” said Daniel Weiner, who previously served as senior counsel to the top FEC commissioner. “Bloomberg presents a particularly challenging situation. How should you regulate an influencer?”

If anyone were to regulate disinformation in campaigns, said Weiner, who is now deputy director of the Brennan Center’s Election Reform program, it would start with the FEC, which has the authority to recommend changes to the law and enforce the new regulations on campaigns.

But the FEC can’t change its 2006 regulations on “internet activity” because the agency is gutted, lacking a working quorum. The FEC is not only “immobilized,” said Weiner, but completely unable to act effectively as a watchdog over the 2020 election.

“We are heading into Super Tuesday during the most expensive election cycle in history with one of the main supposed guardians of our political process MIA,” Weiner added later in a tweetstorm posted Sunday. “Not good.”