Last August, researchers from the threat intelligence firm FireEye uncovered a vast social media influence campaign, conducted by a network of inauthentic news outlets and fake personas with ties to Iran. Their findings were a stark reminder that these kinds of tactics aren’t limited to Russia. Now FireEye has published a sequel of sorts, documenting the evolving methods disinformation actors are using across social media platforms and other outlets to promote Iranian interests online. And platforms are still racing to keep pace: On Tuesday, Facebook announced a takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups, and three Instagram accounts that it says were all involved in coordinated “inauthentic behavior.” Facebook says the activity originated geographically from Iran.
Facebook says its actions stemmed in part from FireEye’s investigation as well as its own ongoing metadata analyses and behavioral tracking. The company has done other Iran-linked takedowns in recent months. The network removed Tuesday was not as large as some of those, according to the company: about 21,000 users followed one or more of the Facebook pages; 1,900 joined one or more of the groups; and about 2,600 people followed at least one of the three Instagram accounts. FireEye’s findings were broader in scope, focusing on a network of English-language social media accounts, particularly Twitter accounts, which were created between April 2018 and March 2019. And the fake personas FireEye tracked also found their way into English-language media outlets. FireEye emphasizes that it only observed activity in line with Iran’s interests and is not drawing a direct connection to any actor or government. Similarly, Facebook traced the pages and accounts it took down to Iran, but did not say who was actually behind them or what their goals were.
“It’s important we recognize that not all information operations look the same and that there are a range of different techniques that are being used,” says Nathaniel Gleicher, Facebook’s head of cybersecurity policy. “So as we’re thinking about tackling this problem the platforms need to think about different tools we can use to respond.”
Lily Hay Newman covers information security, digital privacy, and hacking for WIRED.
Compared to their findings last August, the FireEye researchers now see some digital personas in these campaigns promoting both progressive and conservative views, directly impersonating people’s online accounts, and even getting their views published in US and Israeli mainstream media. FireEye also observed the personas posing as journalists to initiate contact with individuals. Facebook saw similar activity and noted that the focus on one-to-one communication was a noteworthy, though not unprecedented, evolution for disinformation efforts coming out of Iran.
“The significance of this report is it presents a whole other set of tactics and techniques that we think the public should be aware of,” says Lee Foster, manager of information operations analysis at FireEye. “It’s not confined to any one type of medium or platform—they [disinformation actors] will try to incorporate different tactics across the entire information space.”
The campaigns FireEye has been tracking largely espouse anti-Israeli, pro-Palestinian, and anti-Saudi stances, which aligns with Iranian state policies. Facebook reported observing similar positions and content in the accounts and pages it took down. FireEye also saw personas promoting the Iran nuclear deal, and condemning numerous Trump administration actions, including the White House’s decision to designate Iran’s Islamic Revolutionary Guard Corps as a foreign terrorist organization. Occasionally, though, the accounts also promote pro-Trump, anti-Iran messaging, perhaps attempting to garner more followers or taking a page from the Russian disinformation playbook by trying to be less predictable and promote chaos.
Some of the fake Twitter accounts had been set up to specifically impersonate politicians, including a handful of Republican candidates for the House of Representatives in 2018. In one situation from September 2018, the actors set up an account to look just like that of California 9th Congressional District candidate Marla Livengood. The malicious account even copied the text from some of Livengood’s real tweets to look as similar and legitimate as possible. Facebook says it did not see this particular type of behavior from these actors on its platform. The approach is reminiscent of a popular impersonation scam on Twitter, though, showing how malicious actors are inspired by and repurpose all sorts of classic deception techniques.
The fake personas have also expanded their criticism of and focus on mainstream media outlets over the past year. But while some initiatives work to discredit and undermine news outlets, other personas attempt to influence discourse by commenting on articles or even publishing essays themselves. The FireEye researchers traced disinformation-linked personas to letters and even one column published in outlets like The Times of Israel, the New York Daily News, and the Los Angeles Times.
Though the FireEye researchers haven’t attributed these disinformation efforts to a particular source yet, they emphasize that the crucial takeaway is the speed and fluency with which the actors expand to new methods and approaches. And with more and more nations and criminal groups deploying digital disinformation campaigns all the time, it’s important to keep up with the latest permutations to see the incremental logic behind how the efforts are evolving.
“I’m probably the last person to ask about what blows your mind, because we’re looking at this stuff all the time,” Foster says. “The sheer diversity of tactics is what’s important. This whole problem of influence operations is a societal problem we need to tackle collectively.”