The Daily Dot’s Mikael Thalen did an excellent follow-up piece about my being banned from Midjourney for exposing safety issues in their system around nude content. Here’s the original Medium post with the images.

The Daily Dot’s piece follows The Debrief’s original reporting last week, which is presumably what prompted the company to ban me.

I wrote a detailed explanation of my side of the story here for anyone interested.

The long and short of it is: these conversations need to happen publicly, with involvement from the communities who are affected by the problems. They shouldn’t happen behind closed doors, driven and decided solely by for-profit entities with no oversight, and in whose interest it ultimately is to sweep problems under the rug.

The Daily Dot set up an account with Midjourney to see if Boucher’s findings could be reproduced. Several prompts such as “beach party photos” and even “scantily clad beach party photos” did not trip Midjourney’s filters and generated multiple realistic images of women’s naked breasts.

If they had blocked me, and then proceeded to fix the underlying technical issue, I would say fine. I accept the decision. But that’s not what happened, according to evidence we saw a couple of days ago. The issue remains live in their product. So what good did banning me actually do?

UPDATE:

The Debrief did a nice follow-up piece of its own.

Midjourney’s user banning policy states: “Any violations of these rules may lead to bans from our services. We are not a democracy. Behave respectfully or lose your rights to use the Service.”

“The fact that they feel compelled to openly state ‘This is not a democracy’ points to a grave need for democratic governance of AI technologies,” Boucher told The Debrief. “It seems more and more apparent to me every day that, without oversight, we obviously can’t trust these companies to make fair and balanced decisions that actually benefit end users.” […]

“These conversations about the right limits of technology need to happen out in the open with the public involved. It should not take place behind closed doors, or in private email exchanges which are easy for product teams to de-prioritize,” Boucher told The Debrief. “The decision of where to draw the line with AI needs to be made by communities first and foremost, and not solely left to profit-driven technology companies left to their own devices.” […]

“Banning researchers who make public for the purposes of conversation these very real flaws and issues happening right now does not make your system safer,” Boucher added. “Only fixing the underlying system issues does, and that’s obviously a much more complex undertaking than just banning critics. But that’s what needs to happen.”