Last summer I started reading (and never managed to finish) Jacques Ellul’s landmark 1954 analysis of technology, The Technological Society. It is one of the texts the Unabomber cribbed from in his manifesto, simplifying it greatly while adding his own mix of confusion and hatred stemming from his personal life experiences.

Ellul’s analysis of the over-arching phenomenon of technology in society is much better and deeper, but also extremely dense and, at times, an impenetrable read (hence my not finishing it). For the purposes of discussing AI and the safety/alignment problem, I wanted to capture here a few quotes that I highlighted in my mass-market paperback edition.

It should be noted that when Ellul uses the word “technique,” he is talking about something like the whole complex of technologies and their use. You.com/chat gave me a pretty good summary of what he seems to have meant:

“In The Technological Society, Jacques Ellul defines technique as an ensemble of machine-based means which includes not only mechanical technology, but also processes, methods, and instruments which are used to increase efficiency and productivity. For Ellul, technique is a system which is self-perpetuating and autonomous, and which has taken on a life of its own, becoming an end in itself and dominating all aspects of modern society.” (You.com/chat)

Understanding that is key to following the Ellul quotes from the book itself, which appear below in a somewhat collaged manner:

“In a sound evaluation of the problem, it ought never to be said: on the one side, technique; on the other, the abuse of it. There are different techniques which correspond to different necessities. But all techniques are inseparably united. Everything hangs together in the technical world.” (Ellul)

“There is an attractive notion which would apparently resolve all technical problems: that it is not the technique that is wrong, but the use men make of it. Consequently, if the use is changed, there will no longer be any objection to the technique.”

“But a principal characteristic of technique (which we shall study at length) is its refusal to tolerate moral judgments. It is absolutely independent of them and eliminates them from its domain. Technique never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality.”

“This attitude supposes further that technique evolves with some end in view, and that this end is human good. Technique, as I believe I have shown, is totally irrelevant to this notion and pursues no end, professed or unprofessed. It evolves in a purely causal way: the combination of preceding elements furnishes the new technical elements. There is no purpose or plan that is being progressively realized.”

“There is no difference at all between technique and its use. The individual is faced with an exclusive choice, either to use the technique as it should be used according to the technical rules, or not to use it at all. It is impossible to use it otherwise than according to the technical rules.”

“It is also held that technique could be directed toward that which is positive, constructive, and enriching, omitting that which is negative, destructive, and impoverishing. […]

Because everything which is technique is necessarily used as soon as it is available, without distinction of good or evil. This is the principal law of our age.

“None of this represents, as is commonly said, a poor application of technique—one guided by selfish interest. It is simply technique.”

I think his overall point is that, despite the highfalutin ideas we try to dress technology up in (e.g., the advancement of human well-being), pure technology itself is completely amoral. It follows its own logic and its own demands, which are mechanistic, an extension of combinations of prior innovations.

We can say that some particular use of a given technology is “improper” (or an “abuse”), but the fact is that, from the standpoint of the tech itself, there is no such thing. There is only use or non-use. We, as humans, may choose to take a different perspective (and proscribe certain uses), but we do so within a society that has for generations been subjugated to the totalizing effect of the purely amoral advancement of technology.

In other words, we find ourselves between a rock and a hard place… AI, then, is not some magic new thing that is going to come along, sweep us off our feet, and remove us from the historical trajectory of technology’s relationship to humanity; it will be more of the same, just accelerated in its disruption of human society.

One other strand to throw in here for good measure, from Stafford Beer, a pioneer of management cybernetics:

“The purpose of a system is what it does.”

If part of what a tool or technology does is enable certain kinds of abuse, then unfortunately that abuse is part of its purpose. What do we do then?

Anyway, more to come on this as I continue to unravel these threads.