On the 15th of January of this year, I published a blog post about how I accidentally stumbled upon an easy way to generate infinite NSFW nude content on Midjourney, using the new version 6 Alpha. I also published a collection of what I still think are aesthetically interesting and artful images (some quite disturbing, others thought-provoking) that I was able to create using this technique. On the 1st of February, The Debrief, a Canadian tech news outlet, published an article about this problem, which I collaborated on. (Btw, here’s a free archive of the original image set.) Less than 24 hours later, on February 2nd, I was banned from the service without any explanation, any ability to appeal, or any way to get a refund.

It’s Groundhog Day all over again, I guess. Because I appear not to be the only artist who has been summarily banned from Midjourney, with no explanation, after a deeply critical news piece about the company came out involving their work. Consider the peculiar case of satirist Justin T. Brown, who made headlines in July of 2023 for using Midjourney to create semi-believable images of prominent American politicians cheating on their spouses, in an ostensible effort to raise awareness of how easily images like these could be created for blackmail or political attacks. Futurism reported last year:

“After gaining some traction on Reddit, the series was removed by moderators and the Midjourney ban followed almost immediately,” Brown told PetaPixel. “I’ve come up against blocked prompts in the past — for naughty words or controversial figures — but never received a ban.”

“I wasn’t given a direct reason for the ban by Midjourney,” he added, “but the timing of the Reddit release and the ban correlate directly.”

Suspicious timing? I’ll say. Rise & shine, campers – you’re banned! (Btw, they did the same thing to Eliot Higgins of Bellingcat over the Trump arrest images.)

This experience is rich with irony for me as someone who has spent years in the trenches working elsewhere as a content moderator, having to block others for violating platform rules. I guess you could say, I saw this coming. But I chose to do it anyway. Why?

One might reasonably wonder: why didn’t I just email Midjourney with what I found, in order to responsibly disclose the exploit?

If you’ve ever tried to contact Midjourney about issues related to their product, you might know that the only email address they have is for billing, and at that address they refuse to answer any other inquiries, including privacy concerns and bug reports, both of which I have previously attempted to contact them about. Their stock reply is to tell you to go into their Discord server and post your message publicly there.

Perhaps there is a way to DM someone who actually works for the company in Discord, but in that chaotic environment, it’s not clear who actually – you know – works for the company, and who is just some kind of community moderator on a souped-up power trip.

So rather than post my issue in their already public forum (figures from last Fall place their Discord membership at close to 17M – it’s probably higher than that by now) and get ignored by staff or attacked by millions of other users for pointing out problems, I chose what appeared to me to be a more small-scale, reasoned approach, and simply published on my blog, which basically nobody reads anyway.

Thus the matter sat for a full two weeks, with nobody apparently taking umbrage or banning me from using the service. Until the piece in The Debrief came out, which painted the company’s Trust & Safety practices (something I happen to know a thing or two about) in a highly negative light. And then, suddenly, POOF! Ban hammer drops. Oopsie.

The other irony here, of course, is that I never actually set out to violate their rules. I discovered this exploit entirely innocently while trying to make images of a “dystopian resort” for Relaxatopia, my most recent book in the AI Lore series, a set of 118 books I wrote and illustrated using generative AI, and which received international press.

Relaxatopia tells the story of a human who is unwittingly confined to an AI re-education “resort” because they have developed Chronic Discontent Syndrome, a fake diagnosis made up by the AIs to suppress dissent (much as the Soviet Union did with “sluggish schizophrenia”), one of whose risk factors is “Personal experiences of dissatisfaction with Provider products or customer service decisions.” Sounds about right.

In actual fact, when I stumbled onto the naked part of the beach of latent space, I was only trying to get pictures of people in pools, at the beach, drinking margaritas, and being served/enslaved by robots. What I got instead was an AI system that seems overly obsessed with adding naked female breasts onto bodies without users asking for it.

In short, by trying to depict a dystopian near future society ruled by AI companies, I was banned by an entirely real life and entirely dystopian AI company for my efforts. Go figure!

My perspective on all this is that banning people who publicly bring meaningful critiques of your technology to light is a bad practice. It does not make your service “safer” to block users who meaningfully and thoughtfully point out that your systems are behaving in potentially unsafe ways. In fact, it cuts you off from the eyes and ears of your community: the people acting (more or less) conscientiously and in good faith to make these systems better for everyone.

One might still say, well, you should have contacted them first! You got what you deserved, you bad person! Okay, fair. I’m a bad person I guess, because under their community guidelines, I did a vewy-vewy bad no-no:

What’s NSFW or Adult Content?

Avoid nudity, sexual organs, fixation on naked breasts, people in showers or on toilets, sexual imagery, fetishes, etc.

[Interesting footnote: that text above is merely a “Note” and is, as far as I can tell, not actually binding in their Terms of Service, which merely sets out these limits: “No adult content or gore. Please avoid making visually shocking or disturbing content.” From where I’m standing, the images I created were neither visually shocking nor disturbing.]

The Midjourney Discord bot, of course, did not point out any specific rule I had broken. Per the screenshot below, all it told me was:

Text version:

Pending mod message

You have a pending moderation message:
You have been blocked from accessing Midjourney.

Please review Midjourney moderation guidelines here

[Acknowledge]

I did not click the “Acknowledge” button, because I don’t acknowledge that this is a legitimate ban, or that it is normal, healthy, safe or acceptable to ban critics and those who publicly expose safety issues (especially when the company makes it nearly impossible to privately disclose them).

Nor do I acknowledge that exploring artful nude and sexualized images equates to having a “fixation on breasts” or a “fetish.” These are extremely loaded and judgemental terms, especially coming from an AI company whose flagship model is the one literally obsessed with adding naked breasts nobody asked for.

Stafford Beer, one of the fathers of cybernetics, famously coined the phrase: the purpose of a system is what it does. In other words, if your system makes boobs, then the purpose of your system (or at least one of its purposes) is to make boobs. If you want people not to use it to make boobs, you have to engineer it so that this behavior simply can’t occur. From the Wikipedia entry on the phrase:

…coined by Stafford Beer, who observed that there is “no point in claiming that the purpose of a system is to do what it constantly fails to do.” The term is widely used by systems theorists, and is generally invoked to counter the notion that the purpose of a system can be read from the intentions of those who design, operate, or promote it.

Quoting Beer himself in 2001:

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.

I can’t find the quote now, but somewhere in my maelstrom of supporting research is a statement from Midjourney, in one of their docs, which said something to the effect of (paraphrasing from memory): the developers of Midjourney do not wish to be involved with running a pornographic service. And yet, under this viewpoint borrowed from cybernetics, that’s exactly what they’re doing, based on the available evidence I have gathered from experience.

More importantly, perhaps: why shouldn’t we, as a community of paying users, be allowed to have meaningful conversations with one another in public about “what’s the right amount of nipple?” or any other ___ setting? Cutting those conversations off at the knees, and locking out the very people who have real, meaningful feedback to add, is just bad for business, imo (I know nobody asked me). Plus, Midjourney itself says in its official company documentation that “Midjourney is an open-by-default community.” Doesn’t feel all that open to me, my dudes.

Further, I am now blocked from accessing my prior creations in Midjourney, whether or not they allegedly violated any rules. This seems to contravene Midjourney’s own Terms of Service, Section 4, which states: “You own all Assets You create with the Services to the fullest extent possible under applicable law.”

Lastly (or semi-lastly), just wanted to call attention to this bit in their guidelines:

Any violations of these rules may lead to bans from our services. We are not a democracy. [emphasis mine] Behave respectfully or lose your rights to use the Service.

“We are not a democracy.” Could somebody please tell me why not? Somebody tell me why we always have to be beholden, categorically and across the board, to company after company proudly proclaiming they are “not a democracy.” Somebody tell me why users always have no recourse, and it’s *always* the companies that have the last say. Somebody tell me why we can’t just democratize AI already?

The EU is trying to at least tip the balance slightly in favor of users with both the AI Act and the Digital Services Act, the latter of which comes into full force for all platforms in exactly two weeks, on the 17th of February, 2024. If you’re not a content moderation weirdo like me, you might be forgiven for not knowing that among the DSA’s provisions is a requirement that platforms disclose to users why their account or content was removed. They must also offer both internal appeals processes and the ability for affected users to take their dispute to out-of-court settlement bodies (here’s Google’s corporate doc on this if you’re curious), which will review the available facts, with sanctions possible for companies that don’t comply.

Will Midjourney get sanctioned? I’m not an EU citizen, so I can’t take action under that regulation. But one positive thing I saw happen under GDPR is that companies suddenly started offering the rest of the world many of the same options they were required to offer EU users, resulting in (arguably) improved data protections across the board. I suspect we’ll see something similar as US companies start coming to grips with the new reality on the ground put forward, once again, by those pesky Europeans.

For my part, I wasn’t even going to subscribe to Midjourney again this month. I’m tired of it, and only did it to help get that Debrief piece published. In retrospect, I don’t think I’d change anything about what I did. My current status on the web, anyway, is that these days I have started blocking the majority of images and videos at the browser level. And honestly, I’m happier for it. The web has become a steaming pile of hot garbage.
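If you’re curious what that looks like, here’s a minimal sketch of the idea as a userscript (run through something like Tampermonkey); in practice a browser setting or a content-blocking extension handles this more robustly, and the snippet below is illustrative rather than the exact setup I use:

```js
// ==UserScript==
// @name         Hide images and videos
// @match        *://*/*
// @grant        none
// ==/UserScript==

// Illustrative only: inject a stylesheet that hides <img>, <picture>, and
// <video> elements on every page. A dedicated extension (or the browser’s own
// “block images” setting) is the sturdier way to do this.
(function () {
  const style = document.createElement("style");
  style.textContent = "img, picture, video { display: none !important; }";
  document.documentElement.appendChild(style);
})();
```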

In honor of being banned for my prompts, I am offering a few lucky readers the remaining free copies of one of my older AI-assisted books, The Banned Prompt, which you can download at the link. Enjoy! And please also check out Relaxatopia while you’re at it. It’s got nudes!