Experts sound the alarm on 2020 election-meddling: Combating online deception on the government level requires a ‘multi-level approach,’ says assistant chief elections officer


By Matthew Vollrath

This article was originally published by Palo Alto Online.

Will deception on social media sites like Facebook and Twitter be a major threat in the upcoming 2020 election? According to Ann Ravel, former chair of the Federal Election Commission, the troubling answer is “Yes.”

Ravel spoke about the looming problem at a June 27 event at Menlo Park’s Hewlett Center titled “Digital Deception in the 2020 Election.” Now the director of the Digital Deception Project at the Berkeley nonprofit MapLight, Ravel was joined by Katie Joseff, a digital intelligence researcher at the Palo Alto nonprofit Institute for the Future.

The two speakers discussed what they said were organized online deception campaigns, from both foreign and domestic sources, that significantly affected the outcome of the 2016 election.

“A lot of people are unwilling to admit that these campaigns impacted the election,” Ravel said. “I’m here to tell you they did.”

Similar campaigns targeting the 2020 election are already appearing, the speakers said.

Ravel and Joseff identified several forms of election-related online deception. One is the deliberate spread of misinformation, such as incorrect details about polling times and locations. For instance, Ravel said, one campaign in 2016 targeted African Americans, telling them that the voting date had changed.

“We know it suppressed the vote, because the difference between African American voter turnout in 2012 and 2016 was almost 8%,” she said.

Other tactics include the spread of “deep fakes,” fabricated photos or videos that aim to create a false scandal, and the use of fake accounts, through which paid operatives spread politicized messages by masquerading as regular Americans. With today’s AI technology, the speakers said, many of these are automated “bot” accounts, which can post independently and proliferate rapidly.

A major goal of these tactics, according to Joseff, is to undermine faith in democracy. The strategy of “disinformation” began in Soviet Russia as an attempt to “destabilize trust in democratic countries,” she said.

Joseff also discussed another alarming tactic: harassment campaigns. In the 2016 election, hate groups and “troll farms” targeted specific demographics such as African Americans and immigrants, and threatened them online, she said.

“Targeted harassment campaigns play a role in silencing already niche communities,” Joseff said. This harassment, which sometimes involved death threats or sending law enforcement to the victim’s home, led to a significant decrease in voting turnout among these populations, she added.

What is the solution?

Addressing the problem of online deception and harassment, the speakers say, will require action on two fronts.

The first, Ravel asserted, is to pass federal legislation requiring greater transparency online. While at the FEC, Ravel said, she argued for such a law and was vilified by one of her colleagues, who called her the “Chinese Censorship Board.” But such a law is not without precedent, she noted: political speech on radio and television and in print is already subject to strict requirements to disclose the groups and funding sources behind it.

“If you pay for political communications, you need to disclose who’s behind it,” Ravel said. Extending these requirements to the online realm only makes sense, she added.

The second is for social media companies themselves to take the initiative. Whether or not they are legally required to, Joseff and Ravel say, companies like Facebook should be making active efforts to detect fake and bot accounts, correct false information, and ensure that paid and political content is displayed transparently.

According to Brandi Barr, a policy communicator at Facebook, the social media giant is taking a number of steps in these areas.

Facebook has blocked millions of accounts, Barr said, both individual accounts suspected of being fake and networks of accounts displaying “coordinated inauthentic behavior.” The company has also taken down 45,000 posts “attempting to mislead people about where and how to vote,” employs third-party fact-checkers to identify false content, and keeps a public archive of the ads it displays and the sources and targets behind them, she said.

Since 2016, said Barr, Facebook’s efforts have shifted from passive identification based on user reports to active detection of deceptive behavior using “a myriad of signals.”

Governments are also turning their focus to this problem as a threat to the integrity of elections, says Jim Irizarry, assistant chief elections officer for San Mateo County. “We realized you don’t have to get into the voting system if you can influence attitudes towards candidates and campaigns,” he said.

Combating online deception on the government level requires a “multi-level approach,” Irizarry said. 

San Mateo and other counties are working with the Cyber Security Division of the California Secretary of State’s Office to report election-related misinformation on social media, he explained. Anyone who encounters misinformation can now report it to the secretary’s office through a new online system at sos.ca.gov/elections.

On the national level, however, Ravel is not optimistic that the problem of online deception will be solved in time for the election. The reality is that repairing this situation will likely take many years, she said.

“I’m trying to sound the alarm that this is not an easy fix,” Ravel said. “We need to get much more information about exactly who they are targeting and why.”
