If you’ve participated at all in online comments over the past year, it’s a near certainty that you’ve seen other people called, or been called yourself, a “troll,” “shill,” or maybe even a <gasp> “Russian.”
Accusations like these are rampant online, as is the paranoia which fosters them, thanks in no small part to a cloud of sensationalist media coverage and our seemingly intrinsic need to find bad guys lurking around every corner…
Showtime’s most recent season of Homeland — season 6, episode 9 (2017) — portrays a shadowy quasi-governmental, private tech startup called the Office of Policy Coordination. Located six floors underground in a nondescript office building outside Washington, DC, the company is found to be responsible for secretly running a massive army of phony sock-puppet accounts across social media, posing as ordinary people in order to advance a nefarious political agenda.
Airing originally in March of this year, the subplot is obviously inspired by events which transpired in cyberspace around the 2016 U.S. presidential election (along with Brexit, and possibly others), where malicious state-sponsored actors allegedly attempted to disrupt the democratic process.
We know the real world analogue of Homeland’s fictional Office of Policy Coordination to be the now infamous Internet Research Agency, or as they’re sometimes called in the media, the ‘Trolls from Olgino.’
Given the confusing, conflicting, and convoluted information out there about this alleged Russian interference, I took it upon myself to do the only logical thing any normal person would do: make a Carrie Mathison-style “crazy wall” inside my shed next to my chicken coop to try and sort it all out.
Okay, sure, it’s not quite as crazy as Carrie’s bipolar-driven Abu Nazir wall, but it’s my first time exteriorizing my own inner crazy wall. So cut me some slack. I had to start somewhere. And I can definitely say: the process was not only extremely useful in developing my understanding, but also oddly very therapeutic.
Persona Management Software Systems
In the subsequent Homeland episode (s06e10), Carrie’s friend and accomplice Max (Maury Sterling) states: “I’ve heard rumors of social media boiler rooms like this in Russia and in China, but not here. And definitely not on this scale.”
I don’t want to tv-splain too much because I know this is just drama, but based on my research into the subject — using all open source, publicly available information, which I’ve documented with a near religious zeal over the past three weeks — Max’s statement overlooks some important facts which are likely to be known by those working IRL in the security and intelligence fields.
Namely, that in 2010, the U.S. Air Force posted a solicitation to build what amounts to exactly the type of sock-puppet app portrayed in Homeland. Or as they called it on the Federal Business Opportunities website, Persona Management Software (fbo.gov, reproduced on Archive.org, June 2010).
It is, essentially, a social media and propaganda battle-station. From the solicitation:
“Software will allow 10 personas per user, replete with background , history, supporting details, and cyber presences that are technically, culturally and geographacilly consistent. Individual applications will enable an operator to exercise a number of different online persons from the same workstation and without fear of being discovered by sophisticated adversaries. Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms. The service includes a user friendly application environment to maximize the user’s situational awareness by displaying real-time local information.”
Through a combination of VPNs, untraceable IPs, and traffic routed through regional proxies, such a service would enable mass identity-spoofing, using persistent personas, each of which has a detailed personal and social media character history for complete verisimilitude.
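To make the idea concrete, here is a purely illustrative Python sketch of what one record in such a system might look like. Every name and field here is my own assumption, not anything from the solicitation; the point is simply that each fake identity has to keep its claimed locale, language, and proxy exit region aligned to stay “technically, culturally and geographically consistent.”

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One fake identity. All fields are hypothetical, for illustration only."""
    handle: str
    claimed_country: str   # where the persona pretends to live
    language: str          # language the persona posts in
    proxy_region: str      # region of the VPN/proxy exit node
    history: list = field(default_factory=list)  # prior posts, for verisimilitude

    def is_geo_consistent(self) -> bool:
        # A persona claiming one country while routing traffic through
        # another would be an easy tell for platform defenders.
        return self.claimed_country == self.proxy_region

p = Persona("jane_doe_84", claimed_country="US", language="en", proxy_region="US")
print(p.is_geo_consistent())  # True
```

An operator juggling ten of these per workstation, as the solicitation describes, would presumably rely on exactly this kind of consistency checking to avoid cross-contaminating identities.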
Though the contract was ultimately awarded to another company (Ntrepid), a very relevant 2011 document leak by Anonymous from the security contractor HBGary Federal fleshed out, in detail, that company’s own vision for such a persona management system.
“For this purpose we custom developed either virtual machines or thumb drives for each persona. This allowed the human actor to open a virtual machine or thumb drive with an associated persona and have all the appropriate email accounts, associations, web pages, social media accounts, etc. pre-established and configured with visual cues to remind the actor which persona he/she is using so as not to accidentally cross-contaminate personas during use.” …
“These accounts are maintained and updated automatically through RSS feeds, retweets, and linking together social media commenting between platforms. With a pool of these accounts to choose from, once you have a real name persona you create a Facebook and LinkedIn account using the given name, lock those accounts down and link these accounts to a selected # of previously created social media accounts, automatically pre-aging the real accounts.”
The proposal goes on to describe various “character levels” within their system, based on utility and level of content development:
Level 0: Quick use, no background persona required.
Level 1: Slightly more fleshed out, with multiple accounts across different services correlated to one another, with privacy set to high on accounts so as not to disclose too much information publicly.
Level 2: More detailed persistent persona with background; fleshed out with blend of automated and human-generated content history.
Level 3: Most detailed, developed and realistic; capable of human-to-human (online) interactions, with multiple correlated social accounts and a realistic personal and professional background if needed.
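The four tiers read naturally as an enumeration. Here is a minimal sketch in Python; the level names are my own inventions, as the leaked proposal only numbers them:

```python
from enum import IntEnum

class PersonaLevel(IntEnum):
    """Character levels from the leaked HBGary proposal (names hypothetical)."""
    THROWAWAY = 0   # quick use, no background persona required
    SHALLOW = 1     # multiple correlated accounts, privacy locked down
    PERSISTENT = 2  # background history, mix of automated and human content
    DEEP = 3        # fully realistic, capable of human-to-human interaction

# Higher levels cost more to build but survive more scrutiny.
print(PersonaLevel.DEEP > PersonaLevel.THROWAWAY)  # True
```

The ordering matters: the proposal’s whole economics is a trade-off between cheap disposable accounts and expensive, slowly “pre-aged” ones.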
If such advanced persona management software systems have been under development since at least 2010, we can assume with a high degree of certainty that they have advanced considerably in the seven years since. To say the least…
Are they at the level of what’s depicted in Homeland’s “Sock Puppets” episode?
Hard to say — without penetrating the secret offices alleged to be using them!
Government manipulation of social media
Whether or not our television fantasies hew close to actual reality — and Americans have been or are currently being intentionally manipulated by secret factions in the United States (e.g., the “Deep State”) — a recent report by Freedom House, a US government-funded NGO, found evidence that the governments of some 30 countries currently use astroturfing techniques to manipulate opinion on social media.
For the most part, the operations of these covert cyber troops are said to have a domestic focus, with notable exceptions: Russian interference in the 2016 United States presidential election, Brexit, likely the French presidential and German federal campaigns, and more recently the independence push in Catalonia.
But the story with regards to Russia goes deeper than that…
Much, much deeper.
Reports from inside the troll farm
Over the past several years, operational details from inside the Internet Research Agency have been provided by a series of leaks from former employees, infiltrations by journalists, and break-ins by hacktivists.
In October 2017, ex-IRA employee Alan Baskaev described to The Daily Beast an outrageous work environment in which (among other things) the organization allegedly produced a fake Hillary Clinton sex tape intended to go viral.
In October 2017, Russian media site RBC.ru published a Russian-language exposé of the IRA, which has become something of a canonical source in online discussions of the topic (I used Google Chrome’s auto-translate extension to read it). Some useful context on RBC: their offices were raided by the Russian government in 2016 after they published documents from the Panama Papers connecting Putin’s son-in-law to offshore assets, an episode which ended in the sacking of their then editor-in-chief and the mass resignation of a significant portion of their journalistic staff. Until June 2017, RBC was owned by billionaire Mikhail Prokhorov — owner of the Brooklyn Nets basketball team, and failed opponent to Putin in the 2012 presidential election.
Collaborating with Adrian Chen of the NY Times on his seminal June 2015 article, “The Agency,” environmental activist Lyudmila Savchuk took a job with the IRA, then documented and leaked to the public information describing the organization’s internal structure and techniques. As in the USAF and HBGary documents, we learn that agency employees used VPNs to mask their location while propagating, through phony social media accounts, the propaganda talking points, keywords, and targets provided in daily technical task sheets.
“…thousands of young men and women are learning how to be supporters of the ruling United Russia party, future politicians and senior government officials. […]
These young people are taught to open up accounts in all social networks, make as many friends as possible and thus spread information with maximum efficiency,” explained Vasily Yakemenko, founder of the Nashi youth group and head of the Federal Agency for Youth Affairs that runs the camp.”
Also from the 2013 Novaya Gazeta reporting, we learn that Soskovets’ own North-Western Service Agency was seeking employees to open offices similar to the Internet Research Agency in Moscow and other cities. It is unknown how many other organizations like the IRA are in operation. In that article, Soskovets discusses using humans in place of bots, because humans are much more difficult to detect, whereas platforms can find and suspend bots easily.
Nashi leaks of 2012
Though not specifically linked to the IRA, the Nashi youth movement leaks of 2012 (which appeared just before Putin’s challenging but successful re-election to a controversial third term) provide supplemental evidence of quasi-governmental youth organizations orchestrating prototypical astroturfing and media manipulation campaigns, as well as pro-government counter-protests. These are exactly the techniques documented above at the IRA, both on- and offline, but deployed at the time in embryonic form against the Russian anti-election-fraud mass protests of 2011–2013 and events in Ukraine.
We see echoes in BBC reporting from March 2012 of the types of attacks which became commonplace years later during the U.S. presidential election:
“These bots succeeded in blocking the actual message feed with that hashtag,” he wrote.
The rate at which pro-government messages were posted, about 10 per second, suggests they were being done automatically rather than by individuals…”
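That “about 10 per second” rate is itself a detection signal: no human posts that fast. Here is a minimal Python sketch of rate-based bot flagging — the threshold and function names are my own assumptions, not any platform’s actual method:

```python
from collections import defaultdict

def flag_probable_bots(posts, max_per_second=1.0):
    """posts: iterable of (account, unix_timestamp) pairs.
    Flags accounts whose sustained posting rate exceeds max_per_second,
    far beyond plausible human typing speed."""
    times = defaultdict(list)
    for account, ts in posts:
        times[account].append(ts)
    flagged = set()
    for account, stamps in times.items():
        stamps.sort()
        span = stamps[-1] - stamps[0]
        # (n - 1) posts over the elapsed span gives the average rate
        if len(stamps) > 1 and (len(stamps) - 1) / max(span, 1e-9) > max_per_second:
            flagged.add(account)
    return flagged

# A bot posting 10 messages in ~1 second vs. a human posting 3 in a minute:
posts = [("bot1", 100 + i * 0.1) for i in range(10)]
posts += [("human", 100), ("human", 120), ("human", 160)]
print(flag_probable_bots(posts))  # {'bot1'}
```

Real platforms surely combine many such weak signals, but even this toy version catches the hashtag-flooding pattern the BBC described.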
From the above sources, we can establish a few key facts about the Internet Research Agency, which we can use to track and organize our data.
It has occupied at least two addresses, both in St. Petersburg: starting sometime in 2013 at 131 Lakhtinsky Prospekt (in the Olgino district), then moving, probably in 2014, to a larger office with more staff at 55 Savushkina Street.
Also referenced as sharing this address is an organization called FAN, or Federal News Agency (which Adrian Chen goes into more in his NYT 2015 piece), as well as People’s News, and potentially others which seem to cooperate to some extent in at least aggregating one another’s stories.
Beyond this, what we might call the reported “facts” vary pretty widely. Though all sources seem to agree more or less on the overall structure and work carried out by the Agency, staff numbers range anywhere from 50 up to 900, depending on the time period and the source.
Paid wages well above area norms, participants worked as “internet operators,” fulfilling content quotas in 12-hour shifts. Quotas varied depending on the section they worked in: lower-level social media commentators, more full-fledged bloggers, or producers of other kinds of content such as video.
Wired reported in September 2017 that the Internet Research Agency was supposedly officially disbanded around 2015 (presumably due to bad press) and renamed Glavset, but still operates out of the same address.
Short list of personnel named in the media allegedly involved with the IRA:
Last but not least, as further proof the knowledge and technology to pull off these types of online campaigns is alive and well in Russia, we turn to the case of Moscow Information Technologies, an IT group which supports the Mayor of Moscow.
In 2014, Anonymous International (a.k.a. Shaltai Boltai) also leaked emails between media outlets and the government-linked Moscow Information Technologies, which worked with Mayor Sobyanin to manipulate public opinion about his administration. Among its many other activities, The Moscow Times reported in May 2017:
“Sobyanin’s administration heavily invests in swaying the agenda on Yandex.News, Russia’s biggest online news aggregator.
“MIT devised a scheme wherein Moscow’s neighborhood councils (most of them totally loyal to the mayor and to United Russia) set up dozens of similar news websites that are capable of firing off volleys of nearly identical news articles promoting the mayor’s initiatives. This onslaught fools Yandex’s algorithm into thinking that something important is happening. The news aggregator doesn’t differentiate between the sources, and thus assumes there’s a news event that deserves top billing in its ranking system, if hundreds of different outlets are reporting on a single event.”
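The weakness MIT exploited is that the aggregator counts near-identical articles as independent sources. A defense would be to cluster near-duplicates before counting. Here is a rough Python sketch using word-shingle Jaccard similarity — the threshold and all names are my own assumptions, not anything Yandex actually does:

```python
def shingles(text, k=3):
    """Set of k-word shingles from lightly normalized text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def count_independent_sources(articles, threshold=0.6):
    """Greedy clustering: near-identical articles collapse into one source,
    blunting the 'volley of nearly identical news articles' trick."""
    clusters = []  # each cluster is the union of its members' shingle sets
    for text in articles:
        s = shingles(text)
        for c in clusters:
            if jaccard(s, c) >= threshold:
                c |= s
                break
        else:
            clusters.append(set(s))
    return len(clusters)

articles = [
    "the mayor opened a new park in the city center today",
    "the mayor opened a new park in the city center this morning",
    "heavy rain is expected across the region on friday",
]
print(count_independent_sources(articles))  # 2
```

With de-duplication like this, a hundred copy-paste council websites would register as one source rather than a hundred, and the “news event” signal collapses.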
Fake news rings
The tactics described by ex-employees of the Internet Research Agency, combined with other leaks relating to Nashi, and those above by Moscow Information Technologies seem to paint a technical picture which just so happens to mesh handily with fake news endeavors around the world, particularly those famously run out of Macedonia.
The Guardian suggested in July 2017 that Robert Mueller was looking into possible ties between these types of fake news sites and Russian and far-right websites in the United States in the run-up to the election. Quoting from that article:
“Mattes, a former Senate investigator, did some digging into the sudden phenomenon of eastern European Sanders enthusiasts. He found a spike in activity on the anonymous browsing tool Tor in Macedonia that coincided with the launch of the fake news campaign, which he believes could represent Russian handlers contacting potential east European hosts to help them set up automated websites.”
“He has also found a high degree of apparent coordination in the dissemination of fake news between official Russian propaganda outlets and “alt-right” sites in the US.
“They synchronise so quickly it looks as if they know when a particularly story was going to come out,” he added. “And they all parrot the Kremlin narrative.”
“When I traveled to Macedonia last summer, Borce Pejcev, a computer programmer who has set up dozens of fake-news sites — for around 100 euros each — said it wasn’t quite that simple. Macedonians don’t invent fake news stories, he told me. “No one here knows anything about American politics. They copy and paste from American sites, maybe try to come up with more dramatic headline.” Fox News, TruePundit.com, DailyCaller.com, InfoWars and Breitbart, he said, were among the Macedonians’ most common source material (“Breitbart was best”).”
Another NY Times article, from September 2017, explains how Breitbart’s Stephen Bannon latched onto false news and rumor-mongering out of Twin Falls, Idaho, the so-called Fawnbrook incident:
“The Twin Falls story aligned perfectly with the ideology that Stephen Bannon, then the head of Breitbart News, had been developing for years, about the havoc brought on by unchecked immigration and Islamism, all of it backed by big-business interests and establishment politicians. Bannon latched onto the Fawnbrook case and used his influence to expand its reach.”
“Other conservative content farms, including WorldNetDaily, maintained ties to the Trump election effort. Campaign finance records show that Great America PAC, a Trump-backing Super PAC, paid WND, known as the largest purveyor of Obama birth certificate conspiracy theories, for “online voter contact.”
At the end of the day, whether all of the above are somehow coordinated or merely coincidental is almost beside the point, since the end effect is largely the same.
“Senator Mark Warner, the top-ranking Democrat on the Senate Intelligence Committee, said Tuesday that the “million-dollar question” about the Facebook ads centered on how the Russians knew whom to target.”
Speculations are of course rife regarding the nature and connections between the Trump campaign, which was obviously served by disinformation and trolling campaigns, and agents of the Russian government. Did the Russians know which voters in which states to concentrate their efforts on? And if so, how exactly did they get this data?
Though the link is for now tenuous, one avenue of official investigation has gone after the potential role of the big data company Cambridge Analytica, which worked first on Ted Cruz’s campaign, later on Trump’s, and which may or may not have worked on Brexit. Incidentally, Breitbart’s Bannon was at one time VP of Cambridge Analytica and held a stake in the company worth between $1 million and $5 million.
Of course, the Russians may not have needed any outside help when it comes to monitoring internet activity. Since 2011, the Russian government has cracked down hard on internet freedoms. For starters, all ISPs in Russia are required by the government to run a system called SORM (Wikipedia), which the Federal Security Service can use to access web traffic:
“It allow[s] the agency to unilaterally monitor users’ communications metadata and content, including phone calls, email traffic and web browsing activity. […] In 2014, the system was expanded to include social media platforms…”
Though it is mysteriously unavailable at the time of this writing, we also have an interesting solicitation by the Russian government from 2014 for monitoring software partly entitled (auto-translation), “automatic selection of media information, studying the information field, monitoring blogs and social media.”
“Information materials will be preliminarily processed, they will be grouped on specific topics: the president, the administration of the president’s administration, the prime minister, opposition protests, governors, negative events in the country, incidents, criticism of the authorities.”
Without access to the technical data which those platforms must have, we can still speculate with reasonable confidence about the signals and indicators Facebook, Twitter, and Google must be able to use to identify potentially malicious Russian accounts (with the disclaimer that each of these can be spoofed):
IP (geolocation) — made unreliable by VPNs, of course.
Currency used for transactions — can be faked as well.
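Since each signal on its own can be spoofed, any realistic detector would have to combine several weak indicators into one score. A hypothetical sketch in Python — the signal names and weights are entirely my own, purely to illustrate the idea:

```python
def account_risk_score(signals, weights=None):
    """Combine weak, individually spoofable indicators into one score.
    signals: dict of boolean flags. All names and weights are hypothetical."""
    weights = weights or {
        "vpn_exit_ip": 0.3,           # IP geolocation hidden behind a VPN
        "foreign_currency": 0.2,      # ads paid for in an unexpected currency
        "burst_posting": 0.3,         # inhumanly fast posting rate
        "synchronized_content": 0.2,  # near-identical posts to other accounts
    }
    return sum(w for key, w in weights.items() if signals.get(key))

score = account_risk_score({"vpn_exit_ip": True, "burst_posting": True})
print(round(score, 2))  # 0.6
```

The design point is that spoofing one signal is cheap, but spoofing all of them consistently, across thousands of personas, is expensive — which is exactly the asymmetry defenders rely on.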
Russian media outlet Vedomosti said in May 2014 that the techniques pioneered by the Russian government proved to be so successful at home after the mass protests that they exported them to the European and American markets.
Vladimir Putin has long maintained that the internet is a CIA ploy, as an excuse to enforce ever-tighter controls over the technology. He also claims color revolutions, mass protests against the Russian government (as well as the Arab Spring) were orchestrated by foreign actors.
I haven’t gone down the 🐇 🕳 of whether Putin’s claims are true, but the development of such tools around 2010–2011 in the United States for use against foreign targets is certainly an interesting correlation.
Based on my research, there is a stunning lack of original reporting available on these topics which are of potentially grave international importance.
News outlets — even major “reputable” ones — seem to just be reporting on one another’s reporting. It’s a hall of mirrors all the way down. And it’s not just on this topic: it’s the whole news ecosystem.
Fake news and so-called ‘meme warfare’ aren’t some accident of our post-modern mainstream media, but the obvious through-line of technologies whose goal is to amorally propagate information regardless of quality or veracity.
Fact-checking as a counter to misinformation, disinformation, propaganda and fake news is not a fool-proof process. It is made all the more difficult when there are very few, or only obscured sources available to the public. (See #6)
I’m not crazy about what Wikileaks has done politically, but as a tool for organizing leaked documents for further research by members of the public, it’s exactly what is needed.
Wikipedia articles are as good as the sources they cite.
Fact-TRACKING may ultimately prevail over fact-checking. That is, in a world of dwindling original sources, and an endless multitude of rip-offs and copies, perhaps there is an epidemiological approach that could be applied to tracking the origin and distribution of blocks of information (e.g., “facts,” factoids, sound-bites, or memes for that matter). Blockchain for news, anyone?
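A toy version of fact-tracking needs nothing as heavy as a blockchain: a content fingerprint plus an append-only first-seen log already sketches the epidemiology. This is my own illustrative Python sketch, not an existing system:

```python
import hashlib

def fingerprint(text):
    """Stable fingerprint of a block of information, after light normalization
    (lowercasing, collapsed whitespace) so trivial copies map to one ID."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

class ProvenanceLog:
    """Append-only record of which outlet published each fingerprint first."""
    def __init__(self):
        self.first_seen = {}

    def record(self, outlet, text, timestamp):
        fp = fingerprint(text)
        if fp not in self.first_seen or timestamp < self.first_seen[fp][1]:
            self.first_seen[fp] = (outlet, timestamp)
        return fp

    def origin(self, text):
        return self.first_seen.get(fingerprint(text))

log = ProvenanceLog()
log.record("original-outlet", "Senator X said Y today.", 1)
log.record("copycat-site", "Senator X said Y  today.", 5)  # trivial copy
print(log.origin("senator x said y today."))  # ('original-outlet', 1)
```

Real republication involves paraphrase, not verbatim copying, so a production system would need fuzzy matching (shingling, embeddings) rather than exact hashes — but the “patient zero” bookkeeping would look much like this.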
“The internet will continue to be a confusing information-psychological warzone until the networked-ness of information is made visible so that people can easily and instantly see where stuff’s coming from and who/ what it’s associated with and what effects their interacting with it may have.”
Strictly speaking, this isn’t a “Russia issue” at all. Any malicious actor could weaponize these vectors. It’s an information issue. And it’s here to stay until we do something about the entire system, not just the symptoms.
“GAFA or GAFAM, an acronym made up of the best-known giants (Google, Apple, Facebook, Amazon, Microsoft); or the Chinese ones, nicknamed BATX, for Baidu, Alibaba, Tencent and Xiaomi; or the NATU (Netflix, Airbnb, Tesla, Uber).”
“To explain how they work, Ben Nimmo, a fellow at the Atlantic Council’s Digital Forensic Research Lab, uses a shepherding analogy. “A message that someone or some organization wants to ‘trend’ is typically sent out by ‘shepherd’ accounts,” he says, which often have large followings and are controlled by humans. The shepherds’ messages are amplified by ‘sheepdog’ accounts, which are also run by humans but can be default-set “to boost the signal and harass critics.” At times, the shepherds personally steer conversations, but they also deploy automation, using a kind of Twitter cruise control to retweet particular keywords and hashtags. Together, Nimmo says, the shepherds and sheepdogs guide a herd of bots, which “mindlessly repost content in the digital equivalent of sheep rushing in the same direction and bleating loudly.””
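Nimmo’s herd structure implies a simple fan-out arithmetic: a handful of human accounts can put thousands of reposts behind a single message. A toy model in Python, with every number hypothetical:

```python
def amplification(shepherds, sheepdogs_per_shepherd, bots_per_sheepdog):
    """Toy fan-out model of Nimmo's herd: human 'shepherds' seed a message,
    'sheepdogs' boost it, and each sheepdog steers a flock of bots."""
    sheepdogs = shepherds * sheepdogs_per_shepherd
    bots = sheepdogs * bots_per_sheepdog
    return shepherds + sheepdogs + bots  # total accounts pushing the message

# 2 shepherd accounts, each boosted by 5 sheepdogs, each steering 100 bots:
print(amplification(2, 5, 100))  # 1012
```

The asymmetry is the point: only 12 of those 1,012 accounts need a human behind them, which is why trending algorithms that count raw volume are so easy to game.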
“The report that came back focused on the Low Orbit Ion Cannon, a tool originally coded by a private security firm in order to test website defenses. The code was open-sourced and then abandoned, but someone later dusted it off and added “hivemind mode” that let LOIC users “opt in” to centralized control of the tool. With hundreds or thousands of machines running the stress-test tool at once, even major sites could be dropped quickly.”
“Volodin, a lawyer who studied engineering in college, approached the problem as if it were a design flaw in a heating system. Forbes Russia reported that Volodin installed in his office a custom-designed computer terminal loaded with a system called Prism, which monitored public sentiment online using 60 million sources. According to the website of its manufacturer, Prism “actively tracks the social media activities that result in increased social tension, disorderly conduct, protest sentiments and extremism.” Or, as Forbes put it, “Prism sees social media as a battlefield.””
Difficult to find other sources on the subject of Volodin’s Prism. NYT is plenty canonical for present purposes, but seems like Forbes source should be easier to trace.
“At present, the Russian special services have no control over these sites; however, they conduct external monitoring and look for ‘holes’ in the protection of these resources, which they can already use to deal with the political opposition. Note that some media earlier reported the establishment of a social media monitoring system developed by Medialogia. The program ‘Prism’ supposedly allows tracking of individual blog sites and social networks by scanning 60 million sources and tracking users’ key statements. Blogs on LiveJournal, Twitter, YouTube, and other portals fell under the ‘eye’ of the program. One of the alleged instances of the program was installed in the office of the first deputy head of the department of internal policy of the presidential administration, Vyacheslav Volodin, RBC reports.”
RBC has the recent famous IRA article, so perhaps I can find whatever the source might be here (if real).
“The Russian Federal Protective Service (FSO) is asking software developers to design a system that automatically monitors the country’s news and social media, producing reports that study netizens’ political attitudes. The state is prepared to pay nearly one million dollars over two years to the company that wins the state tender, applications for which were due January 9, 2014.”
“Professionals, using specialized systems, will have to provide FSO with a personal compilation of messages from bloggers, which will allow daily monitoring of significant events on specific topics and regions. In addition, monitor negative or positive color of events. Information materials will be preliminarily processed, they will be grouped on specific topics: the president, the administration of the president’s administration, the prime minister, opposition protests, governors, negative events in the country, incidents, criticism of the authorities.”
This text from their corporate site seems to match pretty well the Prism NYT description at top:
“Blog monitoring and analysis reports
Medialogia offers regular blogosphere monitoring and analysis for companies. Monitoring sources: more than 40,000 social media, including LiveJournal, Twitter, VKontakte, [email protected], Ya.ru, industry blogs and forums.”
Is this a real company and product? Hard to really tell.
In Demidov’s words, Russia’s government has paid special attention to countering new “Twitter revolutions” similar to those that occurred in the Middle East at the beginning of the decade.
“The Arab Spring demonstrated that Facebook, Twitter and other instant messaging services allow a lot of content that threatens social and political stability. The main thing is that we don’t have an effective model for blocking such processes,” said Demidov.