Did I already post this? I’ve been in a haze on these subjects the last few days, but the US Copyright Office put out this follow-up guidance on AI-assisted art, firmly planting its flag in the “it all depends” territory – a sign of uncertainty as policy, if ever I saw one. These quotes are telling:
The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.
If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the "traditional elements of authorship" are determined and executed by the technology—not the human user.
I realize that "traditional elements of authorship" here primarily has a legal meaning, but I think it's important we – at some point – explode those notions, to show we're no longer living in that world of traditions. And that the goal of authors and artists ought not to be to conform to the legal definitions of things, but to seek the deeper truths – and change their manifestations in our world.
But before that, here’s the other USCO quote:
In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that "the resulting work as a whole constitutes an original work of authorship." Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are "independent of" and do "not affect" the copyright status of the AI-generated material itself…
In each case, what matters is the extent to which the human had creative control over the work's expression and "actually formed" the traditional elements of authorship.
I thought about this notion a lot while I was working on the generative fill Photoshop experiments last night – presumably my level of involvement there counts, given how expressly I determined the actual arrangement of elements within the images. It's a much higher degree of specificity than working in Midjourney – though again, I frankly don't give a shit what any bureaucrat thinks about my artwork or process. Their supposed hegemony does not impinge on my autonomy to make creative choices how I see fit.
I wanted to stick in this bit from Lawrence Lessig, who wrote on the matter of AI image copyrightability; I largely agree with him in general, if not in all the particulars:
In exchange for AI copyright protection, Congress could require that the AI technologies register the work in digital registries, tied to data that established provenance and ownership. These registries need not be the government’s, though the government should set standards for an approved copyright registry. If done right, AI creativity could engender the return to a system that made it easy to identify the owners of copyrighted work, and therefore easy to clear rights when that work is to be reused.
Interestingly, this type of provenance system is being pioneered to a certain degree in parallel via the C2PA standard, as well as Adobe’s implementation of it, Content Credentials. However, neither of those to my knowledge was designed with the express purpose of acting as a universal copyright registry or clearinghouse for ownership of IP.
C2PA’s about page says that it provides “a tool for creators to claim authorship while empowering consumers to make informed decisions about what to trust.”
Conceivably, though, this kind of system could be adapted into something like what Lessig describes above. From an Adobe help doc on their Content Credentials initiative:
When enabled, Content Credentials gathers details such as edits, activity, and producer name then binds the information to the image as tamper-evident attribution and history data (called Content Credentials) when creators export their final content….
It creates an open format for sharing information about the producer’s identity and the ingredients and tools used to make the content. These ultimately provide useful attribution information for audiences once the producer shares or publishes the image.
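The "tamper-evident" part of that description boils down to a signature computed over both the content and its provenance record, so that editing either one invalidates the attribution. A simplified illustration – this is not the actual C2PA format (which uses embedded manifests and X.509 certificate chains, not a shared key), and all names are hypothetical:

```python
# Simplified sketch of tamper-evident attribution: bind a provenance
# record to content by signing over both. NOT the real C2PA/Content
# Credentials format; a shared HMAC key stands in for certificates.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # real systems use asymmetric certificates

def bind_credentials(content: bytes, record: dict) -> dict:
    """Attach a provenance record whose signature covers the content too."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = (content_hash + json.dumps(record, sort_keys=True)).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "content_hash": content_hash, "signature": tag}

def verify(content: bytes, credentials: dict) -> bool:
    """Any edit to the content or to the record invalidates the signature."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = (content_hash + json.dumps(credentials["record"], sort_keys=True)).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credentials["signature"])

image = b"...image bytes..."
creds = bind_credentials(image, {"producer": "Jane Artist", "tool": "Photoshop (generative fill)"})
print(verify(image, creds))                     # True
print(verify(image + b"one more edit", creds))  # False
```

Note what this does and doesn't give you: it proves the record hasn't been altered since signing, not that the record was truthful in the first place.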
I find it questionable that this information will ultimately prove to be that useful to audiences who are doomscrolling on their cell phone while on the toilet. But I suppose tied to a licensing system, this might be very powerful for (some) rights holders, or at least those who can afford to pursue the enforcement of their rights in court.
My knowledge of this might be a bit out of date (and I’m not sure if or when this is in force), but this also seems to plug handily into the EU’s direction in terms of copyright, where content is supposed to be scanned at time of upload (Wired, 2019).
This EU Commission document (2021) on the matter states:
Article 17 provides that online content-sharing service providers need to obtain an authorisation from rightholders for the content uploaded on their website. If no authorisation is granted, they need to take steps to avoid unauthorised uploads.
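Mechanically, that obligation amounts to screening every upload against rightsholder claims before it goes live. A toy sketch of that flow – real deployments (YouTube's Content ID, for instance) use perceptual fingerprints rather than exact hashes, and every name here is invented:

```python
# Toy sketch of an Article 17-style upload screen: match uploads against
# rightsholder fingerprints and block unlicensed matches. Hypothetical
# names; real systems use perceptual matching, not exact hashes.
import hashlib

# fingerprint -> (rightsholder, platform already has authorisation?)
rights_db = {
    hashlib.sha256(b"hit song master").hexdigest(): ("Label X", False),
}

def screen_upload(data: bytes) -> str:
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in rights_db:
        return "allow"  # no known rightsholder claim on this content
    holder, licensed = rights_db[fingerprint]
    return "allow" if licensed else f"block (authorisation needed from {holder})"

print(screen_upload(b"home video"))       # allow
print(screen_upload(b"hit song master"))  # block (authorisation needed from Label X)
```

Even this toy version shows where the friction lives: anything matching a claim is blocked by default, with no way for the filter to recognize quotation, parody, or other fair-use-like reuse.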
This all sounds incredibly restrictive, even if in theory it is intended to be maximally protective of copyright. It attempts to protect copyright by limiting other rights around fair use and free expression, which is especially problematic on the internet, where so much of our personal expression consists of re-packaging and re-publishing the expression of others.
There must be a better way than all of this, but it doesn’t seem like we’ve found it yet.