I don’t subscribe to any particular political viewpoint among those that seem to be on offer, but I’ve given a lot of thought to this question of “safe” AI and whether or not we can even have one. Especially under Capitalism, with a big C.

Other people are more qualified to rant on this topic than I am, so I will keep it brief. People talk about the “paperclip maximizer” problem of AI as if it were a purely theoretical thing, but we have a real live working example of it in action in the form factor known as Capitalism. Capitalism is an AI that measures everything in tokens called units of capital, then reorganizes them and puts them to use to make more of those tokens, with the result that ever-increasing amounts of capital accrue in a few hands – usually the same old same olds.

So the goal of capitalism-as-AI is not “safety” or “well-being.” It is the accumulation and multiplication of capital, resulting in varying groups of haves and have-nots. In other words, it is literally designed to produce unequal and, in some sense, “unsafe” outcomes.

To answer, then, the question I opened this post with: can we even have safe AI systems under capitalism? I would say that so long as those systems recreate the often perverse incentives and exclusionary power structures of the super-system within which the AI techno-social assemblage has come into being, the answer is “probably not.”

Sci-fi author Ted Chiang took this analysis a step further, as quoted by Kottke:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Chiang again, this time as quoted by Daniel Andrlik:

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.