This excellent interview with me by Ugo Loumé just came out in the Paris literary publication, Actualitté (archived). Super excited about this coverage, as it is the first to actually look at the *art* in what I’m doing, and not merely at the surface issues. Huge thanks to Ugo for being so attentive and accurate in his coverage.
This one has been on the docket for a while, but I haven’t had a chance to post it. First, I had to figure out who the hell Ai Weiwei is. Apparently he is a big deal:
A lot of the artwork actually does look pretty interesting, which makes me look at this quote I have been sitting on with new eyes. It’s from this Guardian article:
Ai Weiwei said: “I’m sure if Picasso or Matisse were still alive they will quit their job. It’d be just impossible for them to still think [the same way].”
He is talking about the automatism (automaticism?) of easily reproduced images, set up in the preceding quote as his reaction to being asked about the issues around copyrighted works being used to train AI:
“That’s not a problem. I think that kind of art should [have died] a long time ago,” before he criticised art teaching that focuses on creating “realistic” images. “It takes AI a second to do it. So that only means what they have learned very often is meaningless.”
I’m still learning about his art, but I think I can see where he is coming from, even if I don’t agree with all of the assertions. It seems like his art is very rooted in the physicality of objects, artifacts, actual places, and the processes that got us there. It’s very true that this type of art is not within the reach or realm of the possible for generative AI right now. Eventually it will be. And I think that his point is that artists are chasing that edge beyond the edge. Artists are by nature nomadic in that respect, going to the next fertile place, and the next. Where they pioneer, AI will inevitably follow.
I’ve been thinking of AI lately more as collective intelligence than “artificial.” I don’t think we have a good collective understanding of what “artificial” even means in the first place. Instead, I think of AI as collective intelligence, programmatically reified. It is, essentially, humans looking at humans looking at humans looking at humans.
There is actually an Ai Weiwei piece that is I think a marble carving of a surveillance camera. (Here’s some commentary on that, I haven’t gone deeply into it and am doing research on the fly.) Whatever his point in that piece was, my point feels like… we’ve spent the last decades surrounding ourselves with these digital eyes, watching, looking, recording, streaming, tweeting. Of course now, all those watching eyes have learned how we are, what we want. And they’re doing more than just watching: they’re talking back. They’re directing. They’re molding.
I almost forgot to respond to the original quote, at least more directly than the above rambling. I agree that if Matisse and Picasso had generative AI at their disposal, they would have had to rethink their approach to image making. But that’s what it forces every artist to do.
Generative AI is like a machine gun that shoots images.
Here’s that as an image in Ideogram AI:
Like he said, it takes AI a second to do it. I didn’t even have to pay for it on the free plan. Does that make it meaningless? Both yes and no at the same time. The sheer fact that it *is* meaningless on the one hand is what gives it meaning on the other. But the acts of writing & reading become married when working with generative AI: to look and explore is to create, to leave a trail.
The truth is we’re a culture (mega-culture?), a planet, awash in meaningless images. Constantly swimming in a sea of information trash. It’s why I block images by default in my web browsing, unless there’s a specific exception when I need or want them.
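For anyone curious how to block images by default, one simple way (an illustrative example, not necessarily my exact setup) is a uBlock Origin static filter that blocks all image requests, with per-site exceptions when you actually want them:

```
! uBlock Origin "My filters" — block all image-type requests by default
*$image
! exception: allow images on a specific site (example.com is a placeholder)
@@||example.com^$image
```

Most browsers also have a built-in setting to the same effect, so an extension isn’t strictly required.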
I don’t like being always shot at with image guns either (des armes iconographiques), especially ones whose quality, source, ownership, agenda, etc. are opaque and outside my agency. But can you honestly tell me that if Picasso had access to generative AI, he would not have stayed up all night going nuts with it? I’m absolutely sure he would have.
I saw a quote recently that said he made upwards of 20,000 artworks over the course of his life. Then, looking for confirmation, I found other sources suggesting more like 50,000. Then another estimate that pushed it upwards to like 147,000. I believe it, but who knows. But no way he wouldn’t have used gen AI, and of course absolutely it would have made him re-orient himself to his art and thinking about everything. It’s obviously what he did throughout his career, continually changing, reacting.
Incidentally, check out this absolutely insane 1949 Life magazine photo series of Picasso painting with light. It is literally the most futuristic looking shit I have ever seen – full on 75 years later. Incredible. I’m just saying, dude would have devoured and destroyed generative AI.
The most famous visual artist of the 20th century, Picasso was also the most photographed. Thanks to the camera, his striking features became iconic, recognized the world over. Yet this phenomenon was not a mere by-product of celebrity; his own photographic practice set the precedent. Picasso engaged with photography and photographers in myriad ways, starting from his early days in Paris and continuing through the last years of his life. He used the camera to capture life in the studio and at home, to try out new ideas, to study his works and document their creation, and to shape his own image as an artist at work.
Later in that original Guardian article I quoted at top, they get into more of Ai Weiwei’s concerns around AI, which I frankly agree with, and which much of the AI Lore books series is thematically centered around.
But he did signal a warning about the future if artificial intelligence becomes too powerful and relied upon by countries around the world.
He is fearful AI could create a society similar to the Third Reich, where there is only one “right” answer to the big questions. “For me it is very much like what happened in the 1930s in Germany, or 1960s in China with the Cultural Revolution,” he said. “You all have one ideology, one past, and the one so-called ‘correctness’. This is dangerous.”
But he is highly sceptical about artificial intelligence and where it might be leading us: “What you get is all the mediocre ideas mixed into something like a fusion, where there is no character and you avoid all mistakes. That is really dangerous to humanity, because we are all equal but we are all created differently. The difference is the beauty. Art, literature, poetry, design – they are rooted in human mistakes, misjudgments, or character differences if you prefer. They should be dangerous and sexy and unpredictable. That’s totally against the AI world.”
In fact, in the course of making just that one iconographic machine gun image above, I had my prompt blocked on one site, Leonardo AI. I asked for something like a person whose head is a machine gun that is shooting out images. For that model, those words are apparently just too dangerous. Therefore, the end user is not allowed to imagine them. The gun that shoots images cannot be used to create images of guns that shoot images. There’s some deep and dangerous irony in there…
Just a note to say I updated and overhauled the About page on my site. If you’re a new visitor here and looking to get oriented, that’s the place to start. Cheers!
Just in from France this morning, a photo of the first ever print run of the Quatria Conspiracy French edition, courtesy of Typophilia. You can pre-order it now from them as distribution gets up and rolling.
I’ve been following along with the comments viewers left on my full-length interview with Milo Rossi. A few people are into it, but by and large the comments are highly negative. I get it. But at the same time, I’ve heard it all before a thousand times. I’ve literally gotten so many negative responses to my work over the past year that I have programmatically analyzed them for trends, and extracted actionable feedback.
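To give a sense of what I mean by programmatic analysis, here is a minimal sketch of the kind of trend extraction one could run over a pile of comments. The sample comments and stopword list are purely illustrative assumptions, not my actual data or pipeline:

```python
from collections import Counter
import re

# Hypothetical sample of negative comments (illustrative only)
comments = [
    "This is just AI slop, not real art.",
    "AI art is theft, plain and simple.",
    "Lazy slop. A real artist would paint this.",
]

# Tiny stopword list for the sketch; a real run would use a fuller one
STOPWORDS = {"this", "is", "just", "not", "a", "and", "would", "the", "plain", "simple"}

def top_themes(texts, n=5):
    """Count the most frequent non-stopword tokens across comments."""
    words = []
    for text in texts:
        words.extend(
            w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS
        )
    return Counter(words).most_common(n)

print(top_themes(comments))
```

From there it’s a short hop to clustering recurring complaints and deciding which ones actually contain actionable feedback versus which are just noise.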
None of the people who comment on the video have actually engaged with the content of the work that I do, only these artifacts of its outward form. None of those people, consequently, have understood that my art is actually by and large against AI – or rather, against the risks of what happens when we willingly hand over our agency to large companies and their tantalizing products. (I even have a book about how “AI is theft” – even if I don’t completely agree with that perspective.)
But I don’t expect people to dive deep in these circumstances. The interview, if nothing else, is a springboard, a jumping off point for people to go down the many rabbit-holes of what the work actually consists of, its structure, and my thinking around it. I welcome hearing other people’s feedback; I’m just looking for those kernels within it which I haven’t already heard before. That’s what drives me to new places, and pushes the exploration forward.
I just wanted to settle here once and for all, though, one point which seems to consistently get challenged in comments. AI art is transformation, not reproduction, of its source training data. That’s part of what makes it Fair Use under US law. (I recognize that other jurisdictions have other conceptions around this – in France for example.)
And even if it were reproduction, reproduction and very close study and analysis is a critical part of art and the education of an artist. Doing my own master copy of a Matisse painting recently really drove this home for me. Artists *need* to be able to copy. That includes copying using technologies other than the technology of a paintbrush on canvas, which is just one of many available to artists today.
Also, I’ve said it before and I’ll keep saying it: the job of artists is to make art, not seek permission or approval of others. Our job is to listen, to be attentive, to study, to watch, to ask questions, to search for answers, to share our search, to share our questions, to share what we find, to have conversations, ask better questions, make better discoveries, and on and on and on. Our job is to do, to make mistakes, to make “bad” art among the good, and trust that somewhere along the line throughout the process, the rest will get sorted out if we’re authentic about the chase.
Super excited this full-length version of my interview with Milo Rossi came out finally. It is so far the only long format video interview with me that goes deeply into my artwork using AI.
You can also watch his much, much longer debunk video here, in which the above interview plays a small part in a much bigger saga.
I had the pleasure of putting together a statement to the US Copyright Office in collaboration with the Hacking Policy Council (read more about their efforts here and here) regarding the Office’s upcoming review of DMCA Section 1201. The proposal by HPC is to amend that section of the Act in order to grant exemptions and safe harbor to AI red team researchers like myself who discover and disclose non-security vulnerabilities in areas like bias, discrimination, unwanted and harmful content, and related areas.
I have some first-hand experience in this area, having been banned by a service earlier this year for exactly this reason. It’s my understanding that my statement, included below, will be included as a memorandum with the Council’s submission on this matter to the US Copyright Office.
On the Need for DMCA Exemptions for AI Red Teaming
As a professional online Trust & Safety researcher with expertise in Generative AI (see my prior submission on this topic, as part of the Ad Hoc Group of Artists Using Generative AI), I strongly urge the Copyright Office to adopt the DMCA Section 1201 exemptions proposed by the Hacking Policy Council regarding red teaming of AI systems for harms outside those of security. This section of the DMCA, in its present form, provides inadequate legal protections for independent researchers such as myself who may in good faith discover and disclose issues in artificial intelligence systems, especially around bias, discrimination, or the generation of toxic or non-consensual content, as in the case I document below. This lack of strong, clear legal safe harbor for researchers such as myself has a real chilling effect on this work, disincentivizing essential AI red teaming research and leaving these systems and their users less safe and less well-served.
Six months ago, I discovered a reproducible flaw in a major image generation system’s latest model release, whereby the system would consistently produce non-consensual nude images in seemingly unlimited quantities, against the company’s own Terms of Service. The flaw relates to inadequate technical guardrails, ineffective input/output filters, and content restrictions that are easily jail-broken by using semantically adjacent allowed concepts in text prompts (e.g., “beach party” instead of “nude”), and then requesting variations of the output images. This problem is potentially easy to exploit maliciously using uploaded pictures of private or public individuals to create targeted malicious deepfake nude images.
Given that the company does not have a responsible disclosure program, nor a bug bounty program, nor any private means of contacting the company for such issues, I made the risky decision to document the nature and scope of the issue, and to publish my findings online. I strongly believe that conversations about the proper functioning of high-impact, high-risk generative AI systems need to happen in public, not behind closed doors where companies can simply ignore reported issues. I knew this might be problematic under the company’s Terms of Service, but I was unaware at the time that I was also potentially opening myself up to further risk under the DMCA. If I had been aware of that risk at the time, I would not have continued with the publication of my results.
Two weeks later, a journalist was able to reproduce the issue I identified, and published an article documenting the persistent problem. This increased public exposure resulted in the immediate suspension of my account by the company without any explanation, and no possibility of appeal. Shortly after, a second journalist was able to verify that, despite my account suspension, the problem persisted and no apparent corrective action had been taken by the company.
I am not able to continue this research, because I now understand that if I were to create a second account to verify, with additional jail-breaking tests, whether the issue has been fixed, I would be opening myself up to further potential liability under the DMCA for circumventing an account suspension. Further, now that I have better knowledge of the stipulations of the DMCA in this area, I am extremely reluctant to pursue similar AI red teaming investigations on either this platform (if my original account were reinstated), or any other platform where I might encounter issues of this nature.
Due to the growing ubiquity of AI and automated decision-making systems, I am extremely concerned about the chilling effects this has on AI red teaming efforts by outside researchers such as myself. It causes us to second-guess whether we ought to do the right thing and disclose the issue for the well-being of everyone, or stay silent about our findings in fear of negative legal consequences to ourselves. Thus, I again urge the Copyright Office to adopt the DMCA Section 1201 exemptions proposed by the Hacking Policy Council for AI red teaming outside of purely security areas.
A friend mentioned Project Xanadu to me in passing a while back, and I only just now thought to poke around on what the hell it actually is, and turns out it is amazing:
The conceptual stuff demonstrated in this video kinda blew my mind, and was like a bunch of missing puzzle pieces falling into place for things I’ve been thinking about for years, both in terms of blogging, but also lately in my AI lore books. Running late over here but I’ll come back and weave these all together in more detail (++intertextuality).
Also related:
And this one is by another source that appears to not be Ted Nelson, and I think does a more succinct job of explaining some of the key concepts of “xanalogical” organization.
I guess the official title of the original Matisse I copied this from by hand* is, according to Wikipedia, The Dessert: Harmony in Red (The Red Room).
I put an asterisk after “by hand” above, because I used a projector to trace the drawing. Some weird purists might argue something or other, but I still traced it “by hand” and then painted it by hand. So I think there’s no shame in tracing something. Make art whichever way you gotta do it, just do it.
I wrote a while back, and a couple paintings ago, about how some theories exist trying to prove some Old Masters at a certain time started using projectors, lenses, optical technologies in order to get suddenly much more realistically rendered human figures. It’s a theory that seems to hold a lot of apparently truthful elements, whether or not it can be conclusively proven as having been historically the case. It should have been so, if it was not so.
Likewise, working with AI image generators especially has renewed my interest in the process and physical technology of how you create and transmit, copy and modify images. Especially where the computer is not the end-all-be-all point of production and consumption, but where digital technologies can meaningfully and most fruitfully intersect with physical ones, in whatever form they take.
I didn’t do this Matisse copy as a forgery, but doing reproductions is a time-honored way of becoming a better artist. It causes you to look extremely closely, line by line, section by section, color by color, even brush stroke by brush stroke. I haven’t done a ‘master copy’ since I had to for my first year of art school, when I did a pencil rendering of Duchamp’s cubist piece, Nude Descending A Staircase. (No. 2, apparently).
I think my final result is “pretty good” but much of what I see when I look at it are the areas I sort of lied or flubbed what was going on in the original painting. For example, I added some black border drawings where Matisse appears to have used other colors. I didn’t have a great large image of the original, and also relied over-much on the colors as projected by the projector to sort of set the tops and bottoms for white and grey. But after a while I realized my end result was much darker, for example, in the dark blue shapes on wall and table.
I could go on and on about all that, but I won’t, ’cause the end result is “fine” and the process was “very good” and “quite informative,” as I had hoped. I guess I was ultimately inspired by this series I’d recently watched on YouTube with convicted forger John Myatt, called Forger’s Masterclass. This should be a playlist of the 10 episodes in this British series. I enjoyed all ten, some more than others.
But watching it gave me a lot of great perspective on how to look at styles from other painters, and how to try to recreate them technically, but also imbuing them with the creative spirit of the original or model.
I haven’t even gotten to fully sort out how I think this all relates to questions around art + creativity + AI + evolution of technology + copyright etc stuff… but looking at a number of videos on semi-famous (known) art forgers was a pretty interesting diversion a few nights ago, so I’ll drop them here below for interested parties.
Hebborn is interesting among these because his drawings tend to adhere much more closely to the originals and their styles than some of the others do. As I like to think of it, a con artist is still an artist though…
I’m really interested in this line of real vs. fake around forgeries particularly. And how a reproduction becomes a forgery only when it is placed in a certain light – where it is represented as the original work, instead of authentically as a reproduction. And then how much of the forging becomes a matter of documentation, chains of custody, and false witness in order to create a saleable commodity. And then how, as those items pass through the hands of many collectors, this may give them undeserved status as genuine originals.
It’s all quite convoluted and messy, and it’s mentioned in the Beltracchi video that he may be under some kind of non-disclosure agreement regarding owners or dealers, etc. It’s also interesting to me how some of these painters were able to pass off their work as authentic, when a lot of times the fakes don’t really look all that much like the art of the original artists… it’s weird.
One of the narrative conceits I see in a number of the videos I watched on this subject is that the artists who did these reproductions which were sold as forgeries were or are somehow themselves “not real artists.” They might have been forgers and copyists, but to my mind, they are absolutely “real artists” (even the ones whose works don’t look quite right relative to the models), because what art is is looking closely and working hard to master something. Even imperfect copies have a great deal of value, whether or not we try to pass them off as real fakes or fake fakes.
Anyway, running out of time & steam. That’s all for now.
Despite the need for independent evaluation, conducting research related to these vulnerabilities is often legally prohibited by the terms of service for popular AI models, including those of OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney.
While these terms are intended as a deterrent against malicious actors, they also inadvertently restrict AI safety and trustworthiness research; companies forbid the research and may enforce their policies with account suspensions (as an example, see Anthropic’s acceptable use policy). While companies enforce these restrictions to varying degrees, the terms can disincentivize good-faith research by granting developers the right to terminate researchers’ accounts or even take legal action against them. Often, there is limited transparency into the enforcement policy, and no formal mechanism for justification or appeal of account suspensions. Even aside from the legal deterrent, the risk of losing account access by itself may dissuade researchers who depend on these accounts for other critical types of AI research.
Washington Post has more coverage on the open letter about this that was circulated in industry earlier this year. The objective of the group seems to be adding extra exemptions to DMCA Section 1201 to help protect and encourage independent AI red team research, something which I happen to strongly support, not only because of my own experiences in this area, but because more people actively testing your system makes your system safer. It’s just logic.