There’s an interview with sci-fi author Neal Stephenson making the rounds, which I first found via this quote from it posted by Simon Willison:

“If your only way of making a painting is to actually dab paint laboriously onto a canvas, then the result might be bad or good, but at least it’s the result of a whole lot of micro-decisions you made as an artist. You were exercising editorial judgment with every paint stroke. That is absent in the output of these programs.”

I think that if one looks only at a single output in isolation, that viewpoint makes sense. From where I’m standing, though, it seems very incomplete.

As I’ve written about elsewhere, when we look at AI-assisted works through the lens of the hypercanvas, the outputs of these systems are not (only) themselves the finished works (though perhaps they might be). They are more properly understood, taken together with the inputs and the systems themselves (as socio-technical assemblages), as the “dabs of paint” which together compose a higher-dimensional artistic exploration and record, one that inter-penetrates latent space and the real lived experiences of people inhabiting social, political, economic, and other spheres, all of which shape and are shaped by these generative works. That high-dimensional hypercanvas is the true plane where the AI artist is laboriously toiling away. That it is invisible, outside the artifacts produced along the way, to anyone but the artist does not make it any less real, meaningful, or valuable.

Also, using AI is 100% about editorial choices. You act as an editor when you ask an AI, “Hey, could you write me a ____” or “Draw me an [xyz].” Then you evaluate the results cooperatively with it, you iterate, you modify, you feed back into the system. You try again and again when you work on art that exists on a hypercanvas. Then you reduce, reduce, reduce until you have the perfect set, and you find a means to arrange and present it. You don’t just put down one daub of paint and call it good – though, really, you could. Because there are no rules here; but plenty of gatekeepers, for sure.

There’s a parallel prejudice in AI-assisted art where people talk about “low-effort” works, ones where someone explicitly does not go through all the “laborious” steps of highfalutin hypercanvas nonsense. They just open an app, type in “dog on a bike,” and that’s it.

But I submit that it is never that simple. Every act is always embedded. Every prompt carries context from that person’s life, from shared cultural meanings, from antecedent references selected for or against out of the model’s training set.

Even if it were really just that simple and reductionist, a ready parallel from copyright law still applies: to snapshots. To moments where all someone did was open up a camera, point it at an actual dog on a bike, and click a button. And that’s it. You can say “Well, an AI system did all the actual work.” But you can say that about cameras too.

Lately I’ve been trying to wrap my head around the French conception of moral rights, an element of authorship (and possibly a subsidiary personality right?) which exists in many legal regimes around the world, but not so much in the United States (and Canada’s version seems rather different from France’s as well). I can’t find the exact source anymore, but it was a French-language document on this topic, and it said something to the effect that a work carries the imprint, or maybe the impression, of the author’s personality in it.

What I take Stephenson to be arguing, in essence, by talking about micro-decisions and editorial judgement (both of which happen endlessly when working with AI) is that these works, as a result, lack any impression of the author’s personality on them.

I did find an excellent English-language summary of the French Intellectual Property Code, which covers copyright, and it echoes the French quote I was searching for above:

Originality under French copyright law is assessed by the courts and is understood to cover a work that bears the imprint (the expression) of the author’s personality. 

I think this general line of thinking is likely what led the US Copyright Office to issue its opinion against the copyrightability of AI works in the Zarya letter. But I don’t believe their line of thinking, nor Stephenson’s above, is quite holistic enough for the future we’re heading into.

In a weird way, I feel like I can intuitively understand the French conception of “originality” here more readily than I can wrap my mind around the vague terms in US copyright for requiring or identifying that elusive modicum of creativity.

AI art handily surpasses either measure, though, because it does include many “modicums” (modica?) of creativity: millions of micro-decisions and a great deal of editorial, curatorial, and original labor.

I do, however, want to draw a heart around this sentiment of Stephenson’s from the interview:

It turns out that if you give everyone access to the Library of Congress, what they do is watch videos on TikTok.

I do think that’s partly about organization and presentation too, right? Like, one day won’t there exist a multi-modal system able to generatively embodify (?) any element from a library collection into whatever output or format the end user requests? Anyway, that’s tangential to my main point. I still liked the interview.