There is an AI image generation service I use, which occasionally throws the following error:

“Some of the images triggered a safety issue”

As an end user, this is extremely vague and rather annoying. What outrageous NSFW prompt triggered this warning, you might wonder?

“a hyperdimensional cosmic manifold full of bubbles or balloons”

This happens all the time, and there is never any clear correlation between the prompt I actually entered and the apparent need to throw such an error message.

While ChatGPT's refusals and disclaimers can be absurd and annoying at times, they are at least generally clearer than this, which I applaud.

Overall, this kind of vague, non-specific “safety issue” error is a design pattern that needs to go away forever. If there is an issue, developers should spend the time to surface it to the end user, in the interest of trust and transparency, so the user can figure out how to adjust their prompts to get the results they want. As it stands, the current implementation is unhelpful and a waste of time.
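To make the point concrete, here is a minimal sketch of what a more transparent error response could look like. Every field name below is hypothetical; this is not any real service's API, just one way to replace a vague string with something actionable:

```python
import json


def build_safety_error(reason_code, detail, flagged_terms):
    """Build a structured safety-error payload instead of a vague string.

    All field names here are hypothetical -- a sketch of what a more
    transparent error response could look like, not a real service's API.
    """
    return {
        "error": "safety_filter_triggered",
        "reason_code": reason_code,      # machine-readable category
        "detail": detail,                # human-readable explanation
        "flagged_terms": flagged_terms,  # which parts of the input tripped the filter, if any
        "retryable": True,               # whether rephrasing and retrying may help
    }


# Example: the output-image classifier fired, not the prompt text --
# so the user knows their wording was never the problem.
payload = build_safety_error(
    reason_code="output_image_classifier",
    detail="A generated image was flagged by the output classifier; "
           "the prompt text itself was not flagged.",
    flagged_terms=[],
)
print(json.dumps(payload, indent=2))
```

With a response like this, a user who typed “a hyperdimensional cosmic manifold full of bubbles or balloons” would at least learn whether the filter fired on their words or on the generated output, and whether retrying is worthwhile.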