“Senator Mark Warner, the top-ranking Democrat on the Senate Intelligence Committee, said Tuesday that the “million-dollar question” about the Facebook ads centered on how the Russians knew whom to target.”
Were they being fed statistical targeting information, and by whom? Or were they just guessing?
Maybe the Senate Intelligence Committee's investigation will uncover some evidence it can share with the public.
“Despite once saying that it was “crazy” to believe Russians influenced the 2016 election, Facebook knew about a possible operation as early as June, 2016, the Washington Post reports. It only started taking it seriously after President Obama met privately with CEO Mark Zuckerberg ahead of Trump’s inauguration. He warned that if the social network didn’t take action to mitigate fake news and political agitprop, it would get worse during the next election. Obama’s aides are said to regret not doing more to handle the problem.”
This Washington Post story from September 2017 appears to be the source of the above:
“These issues have forced Facebook and other Silicon Valley companies to weigh core values, including freedom of speech, against the problems created when malevolent actors use those same freedoms to pump messages of violence, hate and disinformation.”
… “Facebook’s efforts were aided in part by the relatively transparent ways in which the extremist group sought to build its global brand. Most of its propaganda messages on Facebook incorporated the Islamic State’s distinctive black flag — the kind of image that software programs can be trained to automatically detect.
In contrast, the Russian disinformation effort has proven far harder to track and combat because Russian operatives were taking advantage of Facebook’s core functions, connecting users with shared content and with targeted native ads to shape the political environment in an unusually contentious political season, say people familiar with Facebook’s response.”
… “The sophistication of the Russian tactics caught Facebook off-guard. Its highly regarded security team had erected formidable defenses against traditional cyber attacks but failed to anticipate that Facebook users — deploying easily available automated tools such as ad micro-targeting — pumped skillfully crafted propaganda through the social network without setting off any alarm bells.”
This is interesting:
“He described how the company had used a technique known as machine learning to build specialized data-mining software that can detect patterns of behavior — for example, the repeated posting of the same content — that malevolent actors might use.
The software tool was given a secret designation, and Facebook is now deploying it and others in the run-up to elections around the world. It was used in the French election in May, where it helped disable 30,000 fake accounts, the company said. It was put to the test again on Sunday when Germans went to the polls. Facebook declined to share the software tool’s code name.”
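For intuition, here is a minimal Python sketch of the kind of behavioral signal the quote mentions — repeated posting of the same content. The feature choice, the normalization, and the threshold are all my assumptions for illustration; Facebook's actual data-mining software and its signals are not public.

```python
from collections import defaultdict

def flag_repeated_content(posts, threshold=3):
    """Flag accounts that post essentially identical content
    `threshold` or more times.

    `posts` is a list of (account_id, text) pairs. This is a toy
    illustration of one behavioral pattern the article describes
    (repeated posting of the same content), not Facebook's method.
    """
    counts = defaultdict(int)
    for account, text in posts:
        # Normalize lightly so trivial whitespace/case changes still match.
        normalized = " ".join(text.split()).lower()
        counts[(account, normalized)] += 1
    return {account for (account, _), n in counts.items() if n >= threshold}

# Hypothetical accounts and posts:
posts = [
    ("acct_a", "Vote now!"), ("acct_a", "vote  now!"), ("acct_a", "VOTE NOW!"),
    ("acct_b", "My cat photo"), ("acct_b", "Election thoughts"),
]
flag_repeated_content(posts)  # -> {"acct_a"}
```

A production system would presumably learn many such signals jointly rather than hand-coding one, but the underlying idea — turn a suspicious behavior into a countable feature — is the same.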
… “Instead of searching through impossibly large batches of data, Facebook decided to focus on a subset of political ads.
Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm.
“They worked backwards,” a U.S. official said of the process at Facebook.”
“The problem appears to have been that Facebook’s spam- and fraud-tuned machine-learning systems could not see any differences between the “legitimate” speech of Americans discussing the election and the work of Russian operatives.”
Regarding the Washington Post quote above:
“I take this to mean that they identified known Internet Research Agency trolls, looked at the ads they posted, and then looked for similar ads being run, liked, or shared by other accounts.”
This is a promising line of conjecture, if you ask me:
“Regular digital agencies (and media companies) routinely use Facebook ad buys to test whether stories and their attached “packaging” will fly on the social network. You run a bunch of different variations and find the one that the most people share. If the Internet Research Agency is basically a small digital agency, it would be quite reasonable that there was a small testing budget to see what content the operatives should push. In this case, the buys wouldn’t be about direct distribution of content—they aren’t trying to drive clicks or page likes—but merely to learn about what messages work.”
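The testing workflow described there is easy to picture in code: run several creative variants on a small budget, measure which one gets shared most, and push that message. A toy sketch with made-up variant names and numbers — real creative testing would use proper significance testing rather than a raw rate comparison:

```python
def pick_winning_variant(results):
    """Pick the ad variant with the highest shares-per-impression rate.

    `results` maps a variant name to (impressions, shares). This is a
    toy model of the routine message testing the quote describes;
    the variant names and numbers below are invented.
    """
    return max(results, key=lambda v: results[v][1] / results[v][0])

# Hypothetical small test buy across three message framings:
results = {
    "angry_headline":   (1000, 80),
    "hopeful_headline": (1000, 25),
    "fearful_headline": (1000, 60),
}
pick_winning_variant(results)  # -> "angry_headline"
```

The point of the conjecture is that a buy like this produces value even with zero clicks: the purchase is the experiment, and the learned message is the product.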
“And the last possibility is that the Internet Research Agency wanted to make a buy that it knew would get Facebook in trouble with the government once it was revealed. Think of it as corporate kompromat. Surely the Internet Research Agency would know that buying Facebook ads would look bad for Facebook, not to mention sowing the discord that seems to have been the primary motivation for the information campaign.”
I’m sure the truth is some blend of all of the above, and we may not be privy to it any time soon.