I saw people talking about this on Reddit, but I've now experienced it for myself: the new, apparently AI-based moderation system Midjourney seems to have deployed is far too sensitive and overly controlling.

And the notification you get when you (wrongly) hit a wall trying to do something that was perfectly fine a couple of days ago is pretty confusing:

Banned prompt detected

Sorry! Our AI moderator thinks this prompt is probably against our community standards.

Please review our current community standards:

ALLOWED

– Any image up to PG-13 rating involving fiction, fantasy, mythology.

– Real images that may be seen as respectful or light-hearted parodies, satire, caricatures

– Imaginary or exaggerated real-life scenarios, including absurd or humorous situations.

NOT ALLOWED

– Disrespectful, harmful, misleading public figures/events portrayals or potential to mislead.

– Hate speech, explicit or real-world violence.

– Nudity or unconsented overtly sexualized public figures.

– Imagery that might be considered culturally insensitive

This AI system isn’t perfect. If you find it rejecting something innocent please press the Notify Developers button and we will review it and try to further improve our performance. Thank you for your help!

I’m really confused about what they mean under ALLOWED by “Real images that may be seen as respectful or light-hearted parodies, satire…”

Aren’t these, by definition, not “real images”? Aren’t they literally all invented or generated images?

How am I supposed to tell whether something “may be seen as” respectful and light-hearted, and is that really a decision any modern AI you’ve used seems qualified to make?

I’m sorry, but this is extremely broad and poorly worded: “Disrespectful, harmful, misleading public figures/events portrayals or potential to mislead.”

Again, do we want AI to be deciding what’s “disrespectful” to public figures? What about those public figures who act in disrespectful ways towards the public?

Also, under NOT ALLOWED, they cite “real-world violence,” but isn’t that again an impossibility, given that these are images that are not real and do not exist in the “real world”?

Unconsented? Really?

And lastly, “Imagery that might be considered culturally insensitive.” In which culture, to which members of the culture? It would be one thing if, you know, real humans who were actually members of whatever the given culture was were the ones making these decisions. But they’re not; it’s some automated quasi-AI system that basically… has no real grasp of culture or human values. And this is what we want at the helm managing acceptable content decisions?

Not for my monthly subscription fee. Given that the output quality is, in my opinion, so much higher than the alternatives, I’m willing to ride it out and see if they can improve the sensitivity and appropriateness of these filters in relatively short order. But it’s a disappointing “improvement” to a filtering system that, in my opinion, wasn’t actually broken before and wasn’t overly burdensome on the end user. A swing & a major miss here.