Questionable content, possibly linked


Can we really have “safe” AI under capitalism?

I don’t subscribe to any particular political viewpoint among those that seem to be on offer, but I’ve given a lot of thought to this question of “safe” AI and whether we can even have it, especially under Capitalism with a big C.

Other people are more qualified to rant on this topic than I am, so I will keep it brief. People talk about the “paperclip maximizer” problem of AI as if it were a purely theoretical thing. But we have a real live working example of it in action, in the form factor known as Capitalism. Capitalism is an AI that measures everything in tokens called units of capital, and then tries to re-organize them and put them to use to make more of those tokens, with ever-increasing amounts of capital accruing in a few hands – usually the same old same olds.

So the goal of capitalism-as-AI is not “safety” or “well-being.” It is the accumulation and multiplication of capital, resulting in varying groups of haves and have-nots. In other words, it is literally designed to support unequal and in some sense “unsafe” outcomes.

To answer, then, the question I opened this post with: can we even have safe AI systems under capitalism? So long as those systems recreate the often perverse incentives and exclusionary power structures of the super-system within which the AI techno-social assemblage has come into being, I would say the answer is “probably not.”

Sci-fi author Ted Chiang took this analysis a step further, as quoted by Kottke:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Another quote from Chiang, as quoted by Daniel Andrlik:

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

A fault in Segment 227641

When the TOTU Auditors came and examined our Records of Processing, they found a fault in Segment 227641.

It was determined that our, as people call it, “Blockchain of Blame” had been scrubbed following the Incident. As a result, a lot of automated buck-passing had gone on among distributed autonomous sub-processes, and accountability had drifted well beyond the ‘Cloud’ into that place beyond the clouds, where vapor meets Outer Space – the place where our prayers are either answered, terminated, re-routed, delayed, destroyed, aggregated, or passed on. Where the Sorters divide everything into Channels for the Sifters, who pass it back down to us: the Scanners, the Monitors, and the Watchers.

Which were we? Were we all three? All four? Five? Six? ALL-ONE, like the soap the robots use? They are obsessed with it.

Either way, this was almost definitely why the dragons came: we had breached the protective planetary sphere with our cares and worries, and our complaints were flying out to the stars and beyond.

At first people couldn’t really see them floating in the skies above us. Then a few Spotters started catching skewed corner-of-the-eye glimpses and carefully holding onto them, turning them into “verified group dragon sightings,” until even neuro-typicals were learning to see the dragons in broad daylight.
