June 2, 2021
The 2016 elections represented a wake-up call for Americans about the power of social media to influence our thinking on issues big and small, and about the impact foreign-based bad actors can have on our political discourse. Nearly five years after that seminal moment, Facebook has just released an in-depth look back at what the company has learned about information operations (IO) since then. The report identifies some worrying trends and offers interesting data on the primary sources, targets, and tactics of this bad behavior online.
Before we dive into this new report, it is worth saying that Facebook has had enormous difficulties keeping pace with the flow of disinformation and nefarious activity on its platform (including racist and sexist language and threats). Its standards have often appeared haphazard at best, and dangerous or fake content frequently stays online much longer than it should. With those caveats, this most recent report does appear to be a real and helpful attempt to provide a bit of transparency into the company's fight against the coordinated inauthentic behavior (CIB) that has exploded over the past several years.
Here are some of the report’s most interesting data findings, covering the time between 2017 and mid-2021:
- Facebook took down and publicly reported on over 150 CIB operations that violated its policies.
- These CIB operations originated from over 50 countries, with the top five source countries of origin being Russia, Iran, Myanmar, the United States, and Ukraine.
- The United States, Ukraine, and the United Kingdom were the countries most frequently targeted by foreign information operations.
- Myanmar, the United States, Ukraine, Brazil, and Georgia were the countries most frequently targeted by domestic information operations.
- In the year leading up to the 2020 elections, Facebook exposed over a dozen CIB operations targeting U.S. audiences, with equal numbers originating from Russia, Iran, and the United States itself.
Facebook defines “information operations” as: “Coordinated efforts to manipulate or corrupt public debate for a strategic goal.” This activity can include fabricating entirely false content to help or hurt a specific candidate; creating fake accounts that promote real stories to make it seem like a point of view or candidate has more support than it really does; or posting incendiary content to increase polarization and domestic unrest.
While we in the United States tend to think of these information operations as mostly foreign based, the reality is much more varied, according to Facebook. They write:
“About half of the influence operations we’ve removed since 2017 – including in Moldova, Honduras, Romania, UK, US, Brazil and India – were conducted by locals that were familiar with domestic issues and audiences. These were political campaigns, parties, and private firms who leveraged deceptive tactics in the pursuit of their goals.” (p. 21)
Not only is an increasing share of CIB wholly domestic in nature — created by a country's own government, political parties, mercenaries for hire, or private companies to drive narratives close to home — but foreign actors are also becoming savvier at using local voices (such as like-minded media organizations) to help their causes:
“Particularly sophisticated foreign actors are getting better at blurring the lines between foreign and domestic activity by co-opting unwitting (but sympathetic) domestic groups to amplify their narratives.” (p. 20)
The report included other noteworthy analytic assessments, ones we’ve seen replicated on other platforms and in previous studies of information operations:
- IO is not solely focused on content related to elections but often includes other issues such as military conflicts (an area where we’ve seen Russia engage in disinformation for a long time) or even sporting events.
- These campaigns are becoming more targeted in terms of audience and content and more sophisticated in terms of operational security.
- Relatedly, as tech and social media companies have tried to increase oversight of their platforms, nefarious actors have increasingly looked to a diverse set of outlets and platforms to spread their message.
One trend worth devoting more focus to is “perception hacking,” a tactic in which bad actors exploit the fear in some corners of the public that our election systems are vulnerable to widespread manipulation, despite the lack of evidence that this is the case. In these instances, the actors create fake accounts and content designed to give people the perception that these influence campaigns exist — when they don’t — and as a result convince them not to trust their institutions. Here’s one example that Facebook highlights:
“In the waning hours of the 2018 US Midterm elections, we investigated an operation by the Russian Internet Research Agency that claimed they were running thousands of fake accounts with the capacity to sway the election results across the United States. They even created a website — usaira[.]ru — complete with an “election countdown” timer where they offered up as evidence of their claim nearly a hundred recently-created Instagram accounts. These fake accounts were hardly the hallmark of a sophisticated operation, rather they were an attempt to create the perception of influence.” (p. 22)
As I’ve argued many times, the disinformation that Facebook is studying can only succeed if it has a willing and susceptible audience. In that vein, we have some new and disturbing data about the raging market in the United States for false online information. A new study published recently in the Proceedings of the National Academy of Sciences and led by a University of Utah professor found that as many as three in four Americans “overestimate their ability to spot false headlines — and the worse they are at it, the more likely they are to share fake news.”
The study involved surveys of 8,200 people who were shown headlines as they would appear in a Facebook feed and asked to rate their own ability to determine whether those stories were true or false. The research team wrote in its findings:
“We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial…In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.”
In other words, those Americans most likely to believe false information online are also the most likely to share it. That’s why the real solution to this challenge isn’t just shutting down troll farms or taking fake accounts offline; it’s teaching Americans better civics, media, and current affairs literacy so they don’t provide countries like Russia with such a target-rich environment. Facebook has taken an important step in naming and shaming the purveyors of these information operations, but to repeat my broken-record mantra: It’s on average Americans to do better, and it’s not clear we have the willingness or the ability to do so.
Marie Harf
International Elections Analyst, USC Election Cybersecurity Initiative
Marie Harf is a strategist who has focused her career on promoting American foreign policy to domestic audiences. She has held senior positions at the State Department and the Central Intelligence Agency, worked on political campaigns for President Barack Obama and Congressman Seth Moulton, and served as a cable news commentator. Marie has also been an Instructor at the University of Pennsylvania and a Fellow at Georgetown University’s Institute of Politics and Public Service.