I’ve been following with interest the so-called #IBoR conversations on Twitter, aka the Internet Bill of Rights. As the White House petition ends today, having failed (as of this writing) to reach even a third of the required 100,000 signatures, I thought maybe people should start talking about why it failed. And maybe how we could make something better.
Or somebody. I’m not saying I’m volunteering. What the hell do I know anyway… I’m just an “internet researcher.”
For posterity, here is the failed draft, attributed to “A.M.” on the site:
Internet forums and social networks which provide free access to the public are a digital place of assembly, and individuals using such methods for public communication should not be subjected to censorship due to political beliefs or differing ideas. Conservative voices on many large public website platforms are being censored, based solely on a differing opinion. Some of these platforms further employ tracking mechanisms for monitoring an individual’s digital history, which can be used to censor the individual’s public communication through various censorship practices, sometimes without knowledge or awareness. These actions directly violate personal liberty and stand at contrast with the bill of rights.
We the people demand action to bring our digital future into the light.
Thought it might be useful for my own brain if no one else to break this down into claims and assumptions – pick it apart a little bit before putting it back together again. I’ll do it internet research-style…
Domain applicable: Internet forums and social networks.
Claim: “which provide free access to the public”
Question: How accurate is that claim? “Free access” makes it sound like there is no cost to the user of the system. That is, it costs nothing to join. While that may be true for the user, the provider assumes the cost of building, maintaining, and continuing to offer the infrastructure behind the service.
I would feel better with phrasing such as “which offer open access to the public.” “Offer” also puts us in the mood of an economic exchange, or transaction. One party offers something, and the other accepts or denies the offer based on the relative value perceived by each party.
“Open access” would then mean something like “anyone can join, because we don’t check you first.”
Not addressed:
- Who owns platforms & services?
- Are they privately owned? What rights does private ownership entail?
- What are the costs associated with offering “open access” to a service at no cost to users at scale?
- Do corporations, as a legal extension of the natural human persons of which their membership is composed, themselves have rights?
- Such as the right to stipulate acceptable usage policies (“Terms of Service”) in exchange for providing open access?
I’m not saying I have all the answers there. But these questions of private property, and of how natural persons and corporate persons exercise the rights attached to ownership, have not been addressed in this draft. And I think it’s one of the fundamental reasons it failed. A document like this, whose ultimate aim is to be an Expression of the Truth™, must be rooted in a clearer understanding of the problem and the facts.
Now, I’m not a factologist, but I know how to do Google searches. I found this thing on Investopedia (whose credibility I can’t vouch for) about something interesting in real estate called a “bundle of rights.”
“…a set of legal rights afforded to the real estate title holder. It can include the right of possession, the right of control, the right of exclusion, the right of enjoyment and the right of disposition.”
Hm, so that’s interesting. Right to control, right to exclusion.
I’m also not a lawyer, but control and exclusion, hmmm…. reminds me of something. Can’t think of what… (Sorry for being snarky)
It sounds like owners of property have the right to control the usage of that property and to exclude others from using it.
If we acknowledge that those fundamental rights of private property probably do apply to platforms (hint: they do), then we already have to start reconfiguring the rest of the document.
It’s not that the fundamental idea is false, or bad, or wrong. I actually very much agree with the main thrust of it. If we want to proceed and succeed, though, we have to reconfigure the approach, better ascertain what the problem is, and define more clearly what we’re asking.
Claim: “are a digital place of assembly”
Question: What related rights apply? Right to peaceful assembly. From the Library of Congress website:
The First Amendment to the United States Constitution prohibits the United States Congress from enacting legislation that would abridge the right of the people to assemble peaceably.[1] The Fourteenth Amendment to the United States Constitution makes this prohibition applicable to state governments.[2]
So the domain of applicability is, first of all, the U.S. Congress, and secondly state governments. Both of those are public actors, not private entities.
It’s an interesting argument, but again doesn’t altogether address the idea that owners of private property can control and exclude use. (Will also come back to this idea another time when I’m not so bleary eyed. There’s a recent tech law case that backs this up.)
For now, the claim about “digital places of assembly” is not completely verified. Though the impulse behind it is duly noted.
Statement: “individuals using such methods for public communication”
I don’t want to be (too) pedantic, but there’s a lot to unpack there.
What does “public communication” mean?
It could mean a lot of things.
- Communicating with the general public, or people in a general way. In other words, broadcasting or publishing. Both broadcasting and publishing are resource-intensive and infrastructure-dependent activities.
- Public communication could also conceivably mean talking with friends and family. (Though, is that really ‘the public?’) If two parties to a message or communication are in physical proximity, they could speak, or, at greater distances, shout. Beyond that, there is a cost associated with carrying a message from point or person A to point or person B. Again, it requires energy (as in radio waves), as well as resources and infrastructure (a transceiver, wires, an antenna, an electrical energy source). None of those things are free of charge, nor is sending letters by post, using the phone system, etc.
So I guess my main point again is there is always a cost associated with communication. Even if it’s only trying to put it into the right words…
Ask: “should not be subjected to censorship”
So I’m labeling that as an ask and not a claim because there’s a “should.” It’s a thing that is not now true.
I’m still not clear on when/where/at what point we can definitively say that “censorship” is happening. If, like the rest of the First Amendment, its domain of applicability is Congress and state governments, I still think we’re maybe barking up the wrong tree in saying that private companies offering services with an attached acceptable use policy are doing something wrong or bad within the greater system of the law by exercising their rights as owners of private property to control and exclude usage. It’s not a fundamentally strong argument in a system (society) more or less completely based on rules around how ownership of private property works. We might need to re-word this and cut down to a stronger root of the matter.
Claim: “Conservative voices on many large public website platforms are being censored, based solely on a differing opinion.”
While I understand the feeling people have that informs this, I think there is some strong confirmation bias going on here. People feel like their point of view is being silenced, when in the vast majority of cases it is actually their conduct, and not their belief or opinion. Conduct relates again to the property owner’s rights to control and exclude (never mind the right to “enjoy”), which are usually set forth in the acceptable use policy/terms of service/rules/community guidelines, etc.
Problem: Different sites/services/platforms have different rules of conduct, different teams of moderators, and different tolerances of risk. This can be confusing for users. There are few standards which unify definitions or examples of acceptable and unacceptable types of behavior across the various platforms. In saying that, I think this is a real problem that industry can work on. Perhaps embedded in a larger contextual framework, this could become a useful component of a more fleshed out IBoR draft.
Claim: “Some of these platforms further employ tracking mechanisms for monitoring an individual’s digital history…”
This much of that sentence, at least, is a true claim. Many, most, or perhaps even all(?) services monitor and track usage of their service. The primary use case of this, of course, is to be able to provide that service.
One thing I like about the upcoming EU GDPR rules for protection of personal information is that companies are required to have fair processing statements, which require them to more explicitly say what they collect, why, and what they do with it.
This is, in my opinion, a very healthy and interesting development in the tech industry that Europe is leading the way on. I know how much it hurts the American ego to believe that Europe is ahead on anything, but this regulation’s enforcement (it goes into effect May 25, 2018) will be interesting to watch, as it compels even foreign companies (each operating under their own national laws) to abide by its principles when offering services to EU or EEA (European Economic Area) residents. In effect, companies complying with this rule set may begin offering (at least EU) users a higher default data protection standard than they would normally operate under American law. So that’s a good thing, and it ironically might come to Americans by way of foreign governmental bodies instead of a #wethepeople internet petition.
Claim: “can be used to censor the individual’s public communication through various censorship practices, sometimes without knowledge or awareness.”
This claim goes back to tracking and monitoring, which GDPR talks about fairly extensively, even if it doesn’t give perfectly clear answers.
This claim also rests on a questionable assumption: that because a technology can be used to do something, it is being used to do that.
More broadly, if a company sees you’re not respecting its usage agreement by engaging in prohibited conduct on the platform, it doesn’t need to consider and for the most part isn’t interested in whatever your beliefs might be. It’s not important.
A question I’ve always wanted to know the answer to: if it’s (today) overwhelmingly self-identified “conservatives” (and I understand less and less what that term even means anymore) who feel they are being silenced by platforms, would it be fair to wonder whether that group is also more frequently engaging in conduct prohibited by the usage agreements?
Now, it might be a valid argument to say that a given platform usage agreement might lean in a certain direction politically. But if it’s private property, and they have the right to control, exclude, and enjoy, on what realistic legal grounds would we, should we, or could we urge them to change? What about platforms whose political biases we agree with? Should we also therefore reciprocally give other oppressed groups the same consideration? I know the answer isn’t clear (or maybe for you it is, and this is all hogwash), but I think the lack of a clear framing of the full question, the full need, and the desired solution is what has led to the failure of #IBOR.
Claim: “These actions directly violate personal liberty and stand at contrast with the bill of rights.”
What does “violate personal liberty” mean, specifically?
We believe that personal liberties are somehow linked to the Bill of Rights, which, again, applies to Congress and, to some extent, state governments. So there is a “contrast,” to be sure, but it is not what is implied in the above claim. It’s a different domain of applicability. The Bill of Rights limits what government can do.
The Internet Bill of Rights, should such a document arise (hint: actual real drafts of such a document do exist – written by, gasp! the UN – I’ll find and post links another day), would instead be asking government to limit what business can do. In effect, asking government to take away the “personal liberties” of corporations (composed of humans exercising their liberties) through legislation – which I always grew up understanding to be something conservatives were supposed to hate. So that’s confusing too.
Ask: “We the people demand action to bring our digital future into the light.”
I’m a bit disappointed this was the closing ask of the document.
“Demand action” is not a strong phrase. It doesn’t put the needs and rights of the person, the internet user – I feel – enough in the driver’s seat. It’s asking someone else to take action. But it’s a really frustratingly vague action. “Bring our digital future into the light.” What is this, a Rainbow Gathering? Did you bring your crystals? How will we know whether or not we’ve reached this magical place of love and light?
That’s not to bash on the author of this document (to whom I do apologize for this completely sincere but necessary line-by-line public take-down), so much as it is to hopefully 🔨 hammer home the necessity of polishing the underlying desire into a 💎 of greater ✨, which might have much greater significance and reach into all of our lives.
Thanks for reading. Remember to like me on Steemit. (jk)
Don’t stop #IBOR’ing.*
And while you’re doing that, go set up your own blog, on your own server. Learn the costs.
*Where and as I have time, I’ll try to do more to contribute to “the movement,” rather than sitting silently on the sidelines.
Russian Social Media Disruption Report
By Tim B.
On 24 November 2017
From Russia☭ With Love 💔
If you’ve participated at all in comments online over the past year, the certainty is near 100% that you’ve seen other people called, or been called yourself, a “troll,” a “shill,” or maybe even a <gasp> “Russian.”
Accusations like these are rampant online, as is the paranoia which fosters them, thanks in no small part to a cloud of sensationalist media coverage and our seemingly intrinsic need to find bad guys lurking around every corner…
Disrupting democracy
Showtime’s most recent season of Homeland — season 6, episode 9 (2017) — portrays a shadowy quasi-governmental, private tech startup called the Office of Policy Coordination. Located six floors underground in a nondescript office building outside Washington, DC, the company is found to be responsible for secretly running a massive army of phony sock-puppet accounts across social media, posing as ordinary people in order to advance a nefarious political agenda.
Here’s a two minute clip for reference:
Airing originally in March of this year, the subplot is obviously inspired by events which transpired in cyberspace around the 2016 U.S. presidential election (along with Brexit, and possibly others), where malicious state-sponsored actors allegedly attempted to disrupt the democratic process.
We know the real world analogue of Homeland’s fictional Office of Policy Coordination to be the now infamous Internet Research Agency, or as they’re sometimes called in the media, the ‘Trolls from Olgino.’
Given the confusing, conflicting, and convoluted information out there about this alleged Russian interference, I took it upon myself to do the only logical thing any normal person would do: make a Carrie Mathison-style “crazy wall” inside my shed next to my chicken coop to try and sort it all out.
Okay, sure, it’s not quite as crazy as Carrie’s bipolar-driven Abu Nazir wall, but it’s my first time exteriorizing my own inner crazy wall. So cut me some slack. I had to start somewhere. And I can definitely say: the process was not only extremely useful in developing my understanding, but also oddly very therapeutic.
Persona Management Software Systems
In the subsequent Homeland episode (s06e10), Carrie’s friend and accomplice Max (Maury Sterling) states: “I’ve heard rumors of social media boiler rooms like this in Russia and in China, but not here. And definitely not on this scale.”
I don’t want to tv-splain too much because I know this is just drama, but based on my research into the subject — using all open source, publicly available information, which I’ve documented with a near religious zeal over the past three weeks — Max’s statement overlooks some important facts which are likely to be known by those working IRL in the security and intelligence fields.
Namely, that in 2010, the U.S. Air Force posted a solicitation to build what amounts to exactly the type of sock-puppet app portrayed in Homeland. Or as they called it on the Federal Business Opportunities website, Persona Management Software (fbo.gov, reproduced on Archive.org, June 2010).
It is, essentially, a social media and propaganda battle-station. From the solicitation:
Through a combination of VPNs, untraceable IPs, and traffic routed through regional proxies, such a service would enable mass identity-spoofing, using persistent personas, each of which has a detailed personal and social media character history for complete verisimilitude.
Though another company was ultimately awarded the contract (Ntrepid), there was a very relevant document leak by Anonymous from a security contractor called HB Gary Federal in 2011, in which that company’s own vision for such a persona management system was fleshed out in detail.
Quoting from Daily Kos’s 2011 post on the subject, which quotes the HB Gary emails themselves (archived on Wikileaks):
Character levels
The proposal goes on to describe various “character levels” within their system, based on utility and level of content development:
We can assume with a high degree of certainty that if such advanced persona management software systems have been under development since at least 2010, they have advanced considerably in the seven years since. To say the least…
Are they at the level of what’s depicted in Homeland’s “Sock Puppets” episode?
Hard to say — without penetrating the secret offices alleged to be using them!
Government manipulation of social media
Whether or not our television fantasies here hew close to actual reality — and Americans have been or are currently being intentionally manipulated by secret factions in the United States (e.g., the “Deep State”) — a recent report by Freedom House, a US government-sponsored NGO, presented evidence that the governments of some 30 countries currently use astro-turfing techniques to manipulate opinion on social media.
For the most part, the operations of these covert cyber troops are said to have a domestic focus, with the notable exceptions of Russian interference in the 2016 United States presidential election, Brexit, likely also the French and German campaigns, and more recently the Spanish independence push in Catalonia.
But the story with regards to Russia goes deeper than that…
Much, much deeper.
Reports from inside the troll farm
Over the past several years, operational details from inside the Internet Research Agency have been provided by a series of leaks from former employees, infiltrations by journalists, and break-ins by hacktivists.
Most recently:
Nashi leaks of 2012
Though not specifically linked to the IRA, the Nashi youth movement leaks of 2012 (which appeared just before Putin’s challenging but successful 2012 re-election for a controversial third term) provide supplemental evidence of quasi-governmental youth organizations orchestrating prototypical astro-turfing and media manipulation campaigns, as well as pro-government counter-protests – exactly the techniques documented above in connection with the IRA, both on- and offline, engaged at the time in embryonic form against the Russian anti-election-fraud protests of 2011–2013 and events in Ukraine.
We see echoes in BBC reporting from March 2012 of the types of attacks which came to be commonplace years later during the U.S. presidential election:
(See also: IRA support for and infiltration of social movements linked to Calexit, Texas secession, Black Matters, and Native American groups)
The facts about the Internet Research Agency
Via the above sources, we can determine a few key facts which can be used to track and organize our data.
Short list of personnel named in the media allegedly involved with the IRA:
A leaked IRA employee list (in Russian) is reproduced here for reference (source I believe is Savchuk leak).
Moscow Information Technologies
Last but not least, as further proof the knowledge and technology to pull off these types of online campaigns is alive and well in Russia, we turn to the case of Moscow Information Technologies, an IT group which supports the Mayor of Moscow.
Anonymous International/Shaltai Boltai also in 2014 leaked some emails between media outlets and government-linked Moscow Information Technologies which worked with Mayor Sobyanin to manipulate public opinion about his administration. Among many other activities, Moscow Times reported in May 2017:
Fake news rings
Macedonia
The tactics described by ex-employees of the Internet Research Agency, combined with other leaks relating to Nashi, and those above by Moscow Information Technologies seem to paint a technical picture which just so happens to mesh handily with fake news endeavors around the world, particularly those famously run out of Macedonia.
Russian coordination?
The Guardian in July 2017 suggested Robert Mueller was looking into possible ties between these types of fake news sites, to Russian and far-right websites in the United States leading up to the election. Quoting from that article:
Breitbart
Rolling Stone reporting in November 2017 suggests that Macedonian fake news sites were often sourcing material from U.S. based website Breitbart:
Another NY Times article from September 2017 explains how Breitbart’s Stephen Bannon latched onto false news and rumor-mongering out of Twin Falls Idaho, the so-called Fawnbrook incident:
WorldNetDaily
As reported by The Intercept, November 2016:
At the end of the day, whether all of the above are somehow coordinated, or if it’s just a coincidence is a moot point since the end effect is largely the same.
Micro-targeting
CNN, in September 2017 asked an important question regarding Russia-linked IRA Facebook ad buys targeting Baltimore and Ferguson:
Speculations are of course rife regarding the nature and connections between the Trump campaign, which was obviously served by disinformation and trolling campaigns, and agents of the Russian government. Did the Russians know which voters in which states to concentrate their efforts on? And if so, how exactly did they get this data?
Cambridge Analytica
Though the link is for now tenuous, one avenue of official investigation has gone after the potential role of big data company Cambridge Analytica, which first worked on Ted Cruz’s campaign, later on Trump’s, and which may or may not have worked on Brexit. Incidentally, Breitbart’s Bannon was at one time VP of Cambridge Analytica, and held a stake in the company worth between $1 million and $5 million.
Here’s a video with a bit more info about CA’s methodology of micro-targeting individual voters based on psychological profile and tailoring campaign messaging directly to them:
Other likely suspects within the Trump administration appear to be, variously, Jared Kushner and Brad Parscale, who worked on the data operation for the campaign, as well as Michael Flynn, who worked in a brief advisory role for Cambridge Analytica.
(See also: Correct the Record, Hillary PAC which used astro-turfing techniques)
Internet monitoring in Russia
Of course, the Russians may not have needed any outside help when it comes to monitoring internet activity. Since 2011, the Russian government has cracked down hard on internet freedoms. For starters, all ISPs in Russia are required by the government to run a system called SORM (Wikipedia), which the Federal Security Service can use to access web traffic:
Though it is mysteriously unavailable at the time of this writing, we also have an interesting solicitation by the Russian government from 2014 for monitoring software partly entitled (auto-translation), “automatic selection of media information, studying the information field, monitoring blogs and social media.”
On this, iz.ru published in January 2014 a description:
Detecting signals of malicious actors
Facebook just announced that by the end of the year, they will offer a tool for users to see if they liked or followed accounts or pages linked to the Internet Research Agency. According to their written testimony before the Senate Select Intelligence Committee and an official blog post, Facebook said they have identified and suspended 470 accounts or pages. Twitter testified that, with the help of third-party information, it had identified and suspended some 2,752 accounts (full list).
Without having access to the technical data which those platforms must have, we can speculate with some confidence about what signals and indicators Facebook, Twitter and Google must be able to use to identify potentially malicious Russian accounts (with the disclaimer that each of these can be spoofed):
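To make the idea concrete, here is a toy sketch in Python of how several weak signals like these might be aggregated into a single review score. Every signal name, weight, and threshold below is invented for illustration; this is my guess at the general shape of such a system, not anything drawn from an actual platform’s detection pipeline:

```python
# Hypothetical signal weights. Since any single signal can be spoofed,
# a score aggregates several rather than trusting one in isolation.
SIGNAL_WEIGHTS = {
    "registered_with_suspicious_phone_or_email": 3.0,
    "activity_peaks_in_moscow_business_hours": 2.0,
    "logins_via_vpn_or_known_proxy": 1.5,
    "ads_paid_in_rubles": 3.0,
    "coordinated_posting_with_known_accounts": 4.0,
}

def suspicion_score(account_signals):
    """Sum the weights of whichever signals an account exhibits.

    `account_signals` maps signal name -> bool; unknown signals are ignored.
    """
    return sum(
        weight
        for signal, weight in SIGNAL_WEIGHTS.items()
        if account_signals.get(signal, False)
    )

def flag_for_review(account_signals, threshold=5.0):
    """Flag an account whose combined score crosses a (made-up) threshold."""
    return suspicion_score(account_signals) >= threshold

# Two strong signals together cross the threshold; one weak signal alone
# (e.g. merely using a VPN) does not.
suspect = {
    "ads_paid_in_rubles": True,
    "coordinated_posting_with_known_accounts": True,
}
benign = {"logins_via_vpn_or_known_proxy": True}
```

The point of the sketch is only that combining signals is what makes detection workable: each indicator on its own is weak and spoofable, but the odds of an ordinary account tripping several at once are much lower.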
Key takeaways
In conclusion:
The best conclusion I think we could draw from this investigation is one I’ll borrow from Kester Ratcliff’s article on open source intelligence for beginners:
Strictly speaking, this isn’t a “Russia issue” at all. Any malicious actor could weaponize these vectors. It’s an information issue. And it’s here to stay until we do something about the entire system, not just the symptoms.
Until then, I’ll keep working on my crazy wall.
I have a feeling we’re going to need it…