Tim Boucher

Questionable content, possibly linked

Tag: facebook

Content ranking system broken

Gaming, verification, blockchain. TechCrunch, May 2017, re: Userfeeds, a Warsaw-based company:

The system of content ranking and discovery on the web using links, likes, upvotes etc. is broken because algorithms are gamed by third parties (such as bots and false news providers) to change what you see on social media. That has meant platforms like Facebook ranking fake news higher than real news because it is, for instance, more sensational than boring and complex reality. We all know where that led…

Data selfie

This looks pretty cool, though I am not a Facebook user for pretty much exactly this reason. Data Selfie: project description from The Next Web. Direct site link.

It makes a predictive personality model based on your observed FB browsing habits, and only stores it on your computer. Lets you export and delete it.

Nikita Podgorny – IRA employee?

Wikipedia Web brigades page, current to November 2017:

“In 2015 Lawrence Alexander disclosed a network of propaganda websites sharing the same Google Analytics identifier and domain registration details, allegedly run by Nikita Podgorny from Internet Research Agency. The websites were mostly meme repositories focused on attacking Ukraine, Euromaidan, Russian opposition and Western policies. Other websites from this cluster promoted president Putin and Russian nationalism, and spread alleged news from Syria presenting anti-Western viewpoints.[37]”

Footnote [37] above links out to Global Voices, July 2015:

“It took less than a minute of searching to link the e-mail address to a real identity. A group on Russian social networking site VKontakte [archive] lists it as belonging to one Nikita Podgorny.

Podgorny’s public Facebook profile shows he is a member of a group called Worldsochi—the exact same name as one of the websites linked by the two Google Analytics codes I examined.”

… Most notably, Podgorny is listed in the leaked employee list of St. Petersburg’s Internet Research Agency, the pro-Kremlin troll farm featured in numerous news reports and investigations, including RuNet Echo’s own reports.

Leaked employee list linked above. (In Russian, image)
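
Side note to self: the cross-linking technique Alexander describes (sites sharing the same Google Analytics identifier) is easy to picture in code. A minimal sketch in Python, assuming classic UA-style tracking IDs and using placeholder URLs rather than the actual sites he examined:

```python
# Sketch: cluster websites by shared Google Analytics tracking ID,
# roughly the technique Lawrence Alexander describes above.
# The URLs below are placeholders, not the sites investigated.
import re
import urllib.request
from collections import defaultdict

# Classic analytics.js / ga.js tracking IDs look like UA-12345678-1
GA_ID_PATTERN = re.compile(r"UA-\d{4,10}-\d{1,4}")

def extract_ga_ids(url: str) -> set[str]:
    """Fetch a page and return any classic Google Analytics IDs found in its HTML."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except OSError:
        return set()
    return set(GA_ID_PATTERN.findall(html))

def cluster_by_ga_id(urls: list[str]) -> dict[str, list[str]]:
    """Group sites that embed the same tracking ID; shared IDs suggest shared ownership."""
    clusters = defaultdict(list)
    for url in urls:
        for ga_id in extract_ga_ids(url):
            clusters[ga_id].append(url)
    return {ga_id: sites for ga_id, sites in clusters.items() if len(sites) > 1}

if __name__ == "__main__":
    sample_sites = ["https://example-meme-site.org", "https://example-news-site.net"]
    for ga_id, sites in cluster_by_ga_id(sample_sites).items():
        print(ga_id, "->", sites)
```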

 

CrowdTangle, viral content, and malicious actors

October 2017, Washington Post:

“The logic of CrowdTangle’s model is relatively simple (even if the underlying math and software code gets complicated). CrowdTangle tracks clusters of Facebook pages and specific keywords. It gathers historical data on how stories, posts and images tend to perform on these sites, and then highlights the stories, posts and images that are doing best against their own expected baseline performance rate. [The] company then packages this information into a daily email, alerting [its] clients to the content which is likely to perform best on a day-to-day basis.”
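
As I read that, the core of CrowdTangle's model is "flag posts beating their own expected baseline." A rough sketch of that logic in Python, where the baseline definition (the mean of a page's historical interaction counts) is my assumption, not theirs:

```python
# Rough sketch of "performing above expected baseline," as I read the
# WaPo description of CrowdTangle. The baseline here (mean historical
# interactions for a page) is my assumption, not their actual math.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Post:
    page: str
    url: str
    interactions: int  # likes + shares + comments so far

def expected_baseline(history: dict[str, list[int]], page: str) -> float:
    """Average historical interaction count for a page (assumed baseline)."""
    past = history.get(page, [])
    return mean(past) if past else 1.0

def overperformers(posts: list[Post], history: dict[str, list[int]],
                   threshold: float = 2.0) -> list[tuple[Post, float]]:
    """Return posts doing at least `threshold` times better than their baseline."""
    flagged = []
    for post in posts:
        ratio = post.interactions / expected_baseline(history, post.page)
        if ratio >= threshold:
            flagged.append((post, ratio))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    history = {"SomePage": [120, 90, 150]}
    posts = [Post("SomePage", "https://facebook.com/SomePage/posts/1", 900)]
    for post, ratio in overperformers(posts, history):
        print(f"{post.url} is at {ratio:.1f}x its usual performance")
```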

Jonathan Albright’s Medium account has some great material.

Facebook’s famous missing 470 banned Russian accounts or pages

September 2017, Alex Stamos, official Facebook post:

“In reviewing the ads buys, we have found approximately $100,000 in ad spending from June of 2015 to May of 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia.”

CNBC, October 2017, tries to link 200 Twitter accounts to those 470 Facebook accounts and Pages:

“Some of those same suspicious accounts on Facebook, however, also have ties to another 200 accounts on Twitter, a realization it shared with congressional investigators last week.”

Recode September 2017:

“Beyond publishing its findings, Facebook shared more granular details with its peers — standard practice for many tech giants, which generally band together to address online threats, such as hackers. With the aid of that information, Twitter discovered about 200 Kremlin-aligned accounts directly tied to some of the profiles Facebook previously identified. None of those suspicious Twitter accounts had purchased sponsored tweets, the company told lawmakers.”

So what are the full 470 items on FB’s suspended list? Twitter has already released its list of roughly 2,700.

Many outlets are reporting today, including this Bloomberg November 2017 post, that Facebook will allow some users to see if they directly followed malicious accounts linked to the Internet Research Agency:

“The tool will appear by the end of the year in Facebook’s online support center, the company said in a blog post Wednesday. It will answer the user question, “How can I see if I’ve liked or followed a Facebook page or Instagram account created by the Internet Research Agency?” That’s the Russian firm that created thousands of incendiary posts from fake accounts posing as U.S. citizens. People will see a list of the accounts they followed, if any, from January 2015 through August 2017.”

Sounds like that list may not yet be publicly available. I wrote to the Library of Congress to see if it has already been entered into the public record. Maybe they can help…

Facebook’s written testimony before the Senate Intelligence Committee

Entity: GAFA / GAFAM / BATX / NATU

This is basically a pre-figuration of the Four Providers, if you ask me. French Wikipedia’s web giants (géants du web) page:

“GAFA or GAFAM, an acronym made up of the best-known giants (Google, Apple, Facebook, Amazon, Microsoft); or else the Chinese ones, nicknamed BATX for Baidu, Alibaba, Tencent and Xiaomi; or the NATU (Netflix, Airbnb, Tesla, Uber).” (Translated from the French.)

Million dollar question – Facebook ad buys

September 2017 CNN reporting on BLM ads targeting Baltimore & Ferguson.

“Senator Mark Warner, the top-ranking Democrat on the Senate Intelligence Committee, said Tuesday that the “million-dollar question” about the Facebook ads centered on how the Russians knew whom to target.”

Were they being fed statistical targeting information, and by whom? Or were they just guessing?

Maybe that investigation will uncover some evidence it can share with the public.

Engadget September 2017 article claiming FB knew well in advance of the election what was happening with the ad buys.

“Despite once saying that it was “crazy” to believe Russians influenced the 2016 election, Facebook knew about a possible operation as early as June, 2016, the Washington Post reports. It only started taking it seriously after President Obama met privately with CEO Mark Zuckerberg ahead of Trump’s inauguration. He warned that if the social network didn’t take action to mitigate fake news and political agitprop, it would get worse during the next election. Obama’s aides are said to regret not doing more to handle the problem.”

The Washington Post, September 2017, appears to be the source of the above:

“These issues have forced Facebook and other Silicon Valley companies to weigh core values, including freedom of speech, against the problems created when malevolent actors use those same freedoms to pump messages of violence, hate and disinformation.”

… “Facebook’s efforts were aided in part by the relatively transparent ways in which the extremist group sought to build its global brand. Most of its propaganda messages on Facebook incorporated the Islamic State’s distinctive black flag — the kind of image that software programs can be trained to automatically detect.

In contrast, the Russian disinformation effort has proven far harder to track and combat because Russian operatives were taking advantage of Facebook’s core functions, connecting users with shared content and with targeted native ads to shape the political environment in an unusually contentious political season, say people familiar with Facebook’s response.”

… “The sophistication of the Russian tactics caught Facebook off-guard. Its highly regarded security team had erected formidable defenses against traditional cyber attacks but failed to anticipate that Facebook users — deploying easily available automated tools such as ad micro-targeting — pumped skillfully crafted propaganda through the social network without setting off any alarm bells.”

This is interesting:

“He described how the company had used a technique known as machine learning to build specialized data-mining software that can detect patterns of behavior — for example, the repeated posting of the same content — that malevolent actors might use.

The software tool was given a secret designation, and Facebook is now deploying it and others in the run-up to elections around the world. It was used in the French election in May, where it helped disable 30,000 fake accounts, the company said. It was put to the test again on Sunday when Germans went to the polls. Facebook declined to share the software tool’s code name. ”

… “Instead of searching through impossibly large batches of data, Facebook decided to focus on a subset of political ads.

Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm.

“They worked backwards,” a U.S. official said of the process at Facebook.”
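
The “repeated posting of the same content” signal mentioned a couple of quotes up is the easiest piece to picture. A toy sketch, assuming a simple fingerprint-and-count approach that is surely far cruder than whatever Facebook’s unnamed tool actually does:

```python
# Sketch of the "repeated posting of the same content" signal described
# above: fingerprint each post's text and flag content pushed by many
# distinct accounts. Facebook's actual (unnamed) tool is surely far more
# sophisticated; this is just the shape of the idea.
import hashlib
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Normalize and hash post text so identical copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def coordinated_posts(posts: list[tuple[str, str]], min_accounts: int = 5):
    """posts is a list of (account_id, text); return fingerprints shared by many accounts."""
    accounts_by_hash = defaultdict(set)
    for account_id, text in posts:
        accounts_by_hash[fingerprint(text)].add(account_id)
    return {h: accts for h, accts in accounts_by_hash.items()
            if len(accts) >= min_accounts}

if __name__ == "__main__":
    feed = [(f"account_{i}", "Share if you agree!!!") for i in range(8)]
    feed.append(("account_99", "An ordinary unrelated post."))
    for content_hash, accounts in coordinated_posts(feed).items():
        print(content_hash[:12], "posted by", len(accounts), "accounts")
```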

The Atlantic, September 2017.

“The problem appears to have been that Facebook’s spam- and fraud-tuned machine-learning systems could not see any differences between the “legitimate” speech of Americans discussing the election and the work of Russian operatives.”

Regarding the WP quote above:

“I take this to mean that they identified known Internet Research Agency trolls, looked at the ads they posted, and then looked for similar ads being run, liked, or shared by other accounts.”

This is a very good direction of conjecture, if you ask me:

“Regular digital agencies (and media companies) routinely use Facebook ad buys to test whether stories and their attached “packaging” will fly on the social network. You run a bunch of different variations and find the one that the most people share. If the Internet Research Agency is basically a small digital agency, it would be quite reasonable that there was a small testing budget to see what content the operatives should push. In this case, the buys wouldn’t be about direct distribution of content—they aren’t trying to drive clicks or page likes—but merely to learn about what messages work.”

And:

“And the last possibility is that the Internet Research Agency wanted to make a buy that it knew would get Facebook in trouble with the government once it was revealed. Think of it as corporate kompromat. Surely the Internet Research Agency would know that buying Facebook ads would look bad for Facebook, not to mention sowing the discord that seems to have been the primary motivation for the information campaign.”

I’m sure the truth is some blend of all of the above, and we may not be privy to it any time soon.
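
If the testing-budget theory above is right, the mechanics would be mundane: buy small runs of several ad variants and keep whichever gets shared most. A toy illustration with invented numbers:

```python
# Toy sketch of the message-testing idea floated above: run small buys
# of several ad variants, then keep whichever earns the highest share
# rate. All numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class AdVariant:
    headline: str
    impressions: int
    shares: int

    @property
    def share_rate(self) -> float:
        return self.shares / self.impressions if self.impressions else 0.0

def best_variant(variants: list[AdVariant]) -> AdVariant:
    """Pick the variant with the highest observed share rate."""
    return max(variants, key=lambda v: v.share_rate)

if __name__ == "__main__":
    test_run = [
        AdVariant("Variant A", impressions=10_000, shares=35),
        AdVariant("Variant B", impressions=10_000, shares=210),
        AdVariant("Variant C", impressions=10_000, shares=90),
    ]
    winner = best_variant(test_run)
    print(f"Push: {winner.headline} ({winner.share_rate:.2%} share rate)")
```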

Volodin’s Prism

Continuing a branch from the Internet Research Agency source reference sheet.

Chen, 2015, NYT article:

“Volodin, a lawyer who studied engineering in college, approached the problem as if it were a design flaw in a heating system. Forbes Russia reported that Volodin installed in his office a custom-designed computer terminal loaded with a system called Prism, which monitored public sentiment online using 60 million sources. According to the website of its manufacturer, Prism “actively tracks the social media activities that result in increased social tension, disorderly conduct, protest sentiments and extremism.” Or, as Forbes put it, “Prism sees social media as a battlefield.””

Difficult to find other sources on the subject of Volodin’s Prism. The NYT is plenty canonical for present purposes, but it seems like the Forbes source should be easier to trace.

I don’t trust 4chan as a source, but on /pol/ in May 2014 there is what may be an auto-translated paragraph, which reads:

“At present, the Russian special services have no control over these sites, however, conduct external monitoring events, and look for the “holes” in the protection of resources to deal with the political opposition, they can already. Note, some media reported earlier to establish a system to monitor social media developed by “Medialogia”. Program “Prism” supposedly allows you to track detached blog sites and social networks by scanning 60 million sources and tracking key statements users. Under the “eye” of the program were blogs users «LiveJournal», «Twitter», «YouTube», other portals. One of the alleged instances of the program installed in the office of the first deputy head of the department of internal policy of the presidential administration Vyacheslav Volodin, RBC reports”

RBC has the recent famous IRA article, so perhaps I can find whatever the source might be here (if real).

Medialogia is a new entity here.

Searching more turns up this January 2014 piece from globalvoices.org (not sure who/what that is).

“The Russian Federal Protective Service (FSO) is asking software developers to design a system that automatically monitors the country’s news and social media, producing reports that study netizens’ political attitudes. The state is prepared to pay nearly one million dollars over two years to the company that wins the state tender, applications for which were due January 9, 2014.”

Link to the site where the tender is listed. Name, auto-translated from Russian:

“Providing services for providing the results of automatic selection of media information, studying the information field, monitoring blogs and social media”

Organization: Special communication of the FSO of Russia

Mailing address: Russian Federation, 107031, Moscow, Bolshoy Kiselny lane, house 4

[…]

Contact person: Karygin Mikhail Yakovlevich

Globalvoices also links out to an iz.ru January 2014 article (auto-translated).

“Professionals, using specialized systems, will have to provide FSO with a personal compilation of messages from bloggers, which will allow daily monitoring of significant events on specific topics and regions. In addition, monitor negative or positive color of events. Information materials will be preliminarily processed, they will be grouped on specific topics: the president, the administration of the president’s administration, the prime minister, opposition protests, governors, negative events in the country, incidents, criticism of the authorities.”

Advox / Globalvoices (supported by the Ford Foundation), which I’m starting to agree with, also says, regarding the above iz.ru article:

“Izvestia’s coverage of the story bears all the hallmarks of Kremlin-friendly reportage, sandwiching comments by one critic of the FSO between two supporters of monitoring the Internet.”

Globalvoices links to this as the Medialogia website.

This text from their corporate site seems to match the NYT Prism description at top pretty well:

“Blog monitoring and analysis reports

Medialogia offers regular blogosphere monitoring and analysis for companies. Monitoring sources: more than 40,000 social media, including LiveJournal, Twitter, VKontakte, [email protected], Ya.ru, industry blogs and forums.”

Is this a real company and product? Hard to really tell.
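
Note to self: everything in this branch (the FSO tender, the iz.ru topic groupings, Medialogia’s blog monitoring) boils down to pulling posts from many sources and bucketing them by topic. A bare-bones sketch of that bucketing, with keyword lists that are my own rough placeholders:

```python
# Bare-bones sketch of the topic bucketing described in the iz.ru piece
# (president, prime minister, opposition protests, criticism of the
# authorities, etc.). The keyword lists are rough placeholders.
TOPIC_KEYWORDS = {
    "president": ["president"],
    "prime minister": ["prime minister"],
    "opposition protests": ["protest", "rally", "opposition"],
    "criticism of the authorities": ["criticism", "corruption", "authorities"],
}

def bucket_post(text: str) -> list[str]:
    """Return every topic whose keywords appear in the post text."""
    lowered = text.lower()
    return [topic for topic, keywords in TOPIC_KEYWORDS.items()
            if any(keyword in lowered for keyword in keywords)]

def daily_compilation(posts: list[str]) -> dict[str, list[str]]:
    """Group a day's posts by topic, roughly the compilation the tender asks for."""
    report: dict[str, list[str]] = {topic: [] for topic in TOPIC_KEYWORDS}
    for post in posts:
        for topic in bucket_post(post):
            report[topic].append(post)
    return report

if __name__ == "__main__":
    sample = ["Thousands joined the opposition rally downtown.",
              "The president met regional governors today."]
    for topic, matched in daily_compilation(sample).items():
        if matched:
            print(topic, "->", matched)
```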

Tacking this on here, though not strictly related – it came up in similar searches and seems worth saving: Russia Beyond, December 2016 on new Russian cyber-security doctrine.

In his words, Russia’s government has paid special attention to countering new “Twitter revolutions,” those similar to the ones that occurred in the Middle East in the beginning of the decade.

“The Arab Spring demonstrated that Facebook, Twitter and other instant messaging services allow a lot of content that threatens social and political stability. The main thing is that we don’t have an effective model for blocking such processes,” said Demidov.

 

 
