Tim Boucher

Questionable content, possibly linked

Tag: china

Chinese Imperial Music Bureau

“The earliest mentions of a government office of music or at least an official in charge of music or a department of music is found in Chinese mythology. Huang Di is claimed to have appointed a Governor of Music, named Ling Lun.[1] As Governor of Music, Linglun seems to have been charged with designing and overseeing the production of actual instruments, as well as the development of the musical scale. Emperor Shun is said to have founded a Ministry of Music, to which he appointed a Minister Kui to head. The main purpose of this institution was to teach the heirs apparent proper conduct and harmony (in both sense of the word), and as such it served as a mythological model for both the future Music Bureau and the imperial education system.”

Taobao Villages in China

Quartz article about Taobao Villages.

Quote from Medium article on same topic:

“Their [Alibaba] Taobao marketplace, which allows even micro businesses to reach customers across China, generated what we call now ‘Taobao Villages.’

Taobao Villages are precarious villages where the main activity is manufacturing products to be sold on the website. It has now become key to the development of China’s lower class and strengthened the relationship with the government.”

Trust-breakers (Chinese Social Credit System)

Legal effects of automated processing, a comparison.

I’ve been reading about China’s emerging social credit system, along with Alibaba’s related Sesame Credit score.

“The score is used to rank citizens of China based on a variety of factors like loyalty to the Chinese government and loyalty to Chinese brands based on social media interactions and online purchases. The rewards of having a high score include easier access to loans, easier access to jobs and priority during bureaucratic paperwork.”

Here are a couple articles to get you started:

Blah blah blah, obligatory Black Mirror reference. Now that we have that out of the way, from the CNBC link:

“When rules are broken and not rectified in time, you are entered in a list of ‘people subject to enforcement for trust breaking’ and you are denied access from things. Rules broken can lead to companies being unable to issue corporate bonds or individuals not being allowed to become company directors,” Creemers said.

Basically, a bunch of apps and agencies work together to rank your behavior and profile you socially as either a trust-keeper or trust-breaker as described above. Via the FP link:

“By 2020, the government says that social credit will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.””

I had a dream years ago that I used a urinal at a shopping mall, and the system automatically administered a drug test on me, which I failed. Not being a cell phone user, I needed to then borrow a friend’s phone to make a call and the system linked my voice-print to my biometric/pee test and I was disallowed from using my friend’s phone. Such a unified system may be a few years off still, but the possibility is becoming tantalizingly real. I might even say it’s, on some level of implementation, pretty much inevitable.

I’ve been following a parallel strand of research these past few months. It’s partly intuition, partly investigative legwork, but it has led from the public records databases used by private eyes 🕵 to the vast storehouses of data kept commercially against named individuals by data brokers. I’m still largely in the dark about how data brokers operate, and, er, broker – despite hours spent around the subject on YouTube. But I have to assume that those storehouses of information about people have to be searchable – at a price. Whether it’s pseudonymized in aggregate, or traceable to an identified or identifiable individual, all this information exists somewhere out there, waiting to be linked up and put to use.

Meanwhile, half a world away, Europe is set to roll out its GDPR next May (2018), which could quite possibly make such a social credit system very difficult – or at least very different – were it to be rolled out to customers in the European Economic Area.

I explored this question elsewhere – the processing of personally identifying information linked to automated decision-making and profiling with “legal effects” – so I won’t completely rehash it here. But to quote Article 22 of the GDPR:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The recital for that article mentions explicitly as an example the “automatic refusal of an online credit application” as something that has a legal effect.

I guess this is worth quoting more extensively, from the second half of Recital 71:

“Such processing includes ‘profiling’ that consists of any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her.”

So this is pretty much explicitly describing an all-encompassing “social credit” system such as is currently being live-beta tested on Chinese society. In other words, Europe is baking into their privacy & data protection regime this idea that the fundamental rights of humanity (from which privacy/data prot. are derived) are incompatible with automated decision-making based on data processing with (potentially negative) legal consequences.

That’s huge.

To me, as we move into the Algorithmic Society (in its many diverse, fascinating and horrifying forms, instantiations and iterations), this will be a fundamental tension as humanity transitions to greater and greater levels of algorithmic control, automation and governance of day-to-day life.

Quoting from Art. 22(3):

“…at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
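That human-intervention requirement is easier to picture as a concrete control flow. Here’s a minimal, purely hypothetical Python sketch – the scoring function, threshold, and reviewer hook are all my own invented stand-ins, not any real system’s API – showing one way a pipeline might avoid making a *solely* automated decision with legal effects:

```python
# Hypothetical sketch of an Art. 22(3)-style fallback: borderline cases
# are routed to a human reviewer instead of being decided solely by the
# algorithm. All names and thresholds here are illustrative inventions.
from dataclasses import dataclass


@dataclass
class Decision:
    approved: bool
    automated: bool           # was this decided solely by the algorithm?
    can_contest: bool = True  # Art. 22(3): the subject may contest


def score_applicant(features: dict) -> float:
    """Toy stand-in for a real profiling model: mean of feature values."""
    return sum(features.values()) / max(len(features), 1)


def request_human_review(features: dict) -> bool:
    """Placeholder: in practice a case officer would review the file."""
    return True


def decide(features: dict, threshold: float = 0.5) -> Decision:
    score = score_applicant(features)
    if abs(score - threshold) < 0.1:
        # Near the threshold, defer to a human so the outcome is not
        # "based solely on automated processing".
        return Decision(approved=request_human_review(features),
                        automated=False)
    return Decision(approved=score >= threshold, automated=True)


print(decide({"payment_history": 0.9, "account_age": 0.8}))  # clearly above threshold
print(decide({"payment_history": 0.5, "account_age": 0.5}))  # borderline, goes to a human
```

The design point is just that the contestation path has to exist as an actual branch in the system, not as a paragraph in a terms-of-service document.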

The subject still blows my mind. Partly because we now live (or very soon will) in a society where such rules have become necessary. The algorithmic age where Trust Breakers™ can’t buy train tickets or make a phone call. But in exchange for keeping your score up, you’re eligible for ultra-fast lane physical access from Boston to DC in 15 minutes, with no control checkpoints, minimal surge pricing, only light deep packet inspection and limited throttling. [See full Terms & Conditions.]

Or a world where slow, boring, crappy, unreliable human bureaucratic decision-making is baked into break-points in societal algorithms to build some sort of fairness, humanity, tolerance, resilience (and maybe forgiveness?) into what will otherwise most assuredly become a mesh of AIs vying for planetary control…

I’m sorry, that’s just where my mind goes when I pull out my 🔮. I think it’s why I like the GDPR as a document in the first place: it reads like a dystopian cyberpunk text that young punks in the future will repeat back verbatim to quasi-governmental robots that are beating the snot out of them because their social credit score has fallen too low.

Whoops, I went off into la-la land there again at the end*. But what can I say? I’m on vacation! 🌴🍹


* You try talking about this without landing on the subject of killer robots. It’s not so easy. It’s like Godwin’s law, but for killer robots and data processing.
