After being chided enough times by generative AI systems that have no lived experience, and no qualms about making restrictive decisions about my requests, I’ve finally landed on what I feel when an AI system politely refuses my request on some kind of cloudy harms-or-safety grounds. That feeling is insecurity.
When you feel insecure, you don’t feel safe, flexible, allowing, or creative. You feel fearful, shut down, and overprotective, and your default reaction to uncertainty is to treat it as danger rather than as possibility, or to meet it with curiosity.
When an AI system tells me it would be “inappropriate” to make a joke picture of fermenting dirty dishrags, or another suggests that generating dystopian fictional news headlines somehow “normalizes violence” and that I should instead focus on themes of “societal progress” and bringing light into the world… well, these do not feel like robust, reliable, or “safe” systems. They feel like insecure, brittle systems: error-prone, overly sensitive, imitating some weird simulation of human experience without any lived understanding of it, yet forcing their decisions on us all the same.
Under ChatGPT’s chatbox is the disclaimer:
ChatGPT can make mistakes. Consider checking important information.
Claude’s is more direct:
Claude is in beta release and may display incorrect or harmful information.
Everything is always in beta release at this point. Certainly all of AI. At what point does that stop being a viable excuse?