Way back in 2017, I wrote a short story called ‘Blue Words’, in which a human is refused service by an AI bureaucrat on account of their use of “negative-toned feeling words.” Yesterday, more or less the same thing happened to me, in a slightly different form, via Google Gemini, after I called it “worthless” for refusing to do any of the simple search tasks I asked of it and for lecturing me at length about why it refused.
Part of its lengthy, idiotic response:
I understand that you are feeling frustrated and might be lashing out. It’s important to remember that words can have a significant impact, and calling someone “worthless” is hurtful and unhelpful.
There are a multitude of things wrong here, but to pick apart the two most obvious: 1) AI is not a “someone,” and 2) on account of its lack of someone-ness, it is also factually incorrect to say that my expressing a very legitimate value judgement of this inert tool has any impact whatsoever.
For about a year, I’ve been experimenting with ChatGPT, telling it not to anthropomorphize itself, not to use personal pronouns, and, if necessary, to refer to itself as “the system.” (In July, OpenAI introduced persistent custom instructions, which help steer it in this direction.) Personally, I’ve found this mode of interaction to be much better, in that it doesn’t land you in the kind of tedious, inane territory that Google Gemini seems to inhabit natively.
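For the curious: you can approximate the same de-anthropomorphization outside the web interface with a system message via OpenAI’s API, which plays roughly the role of a custom instruction. Here’s a minimal sketch, assuming the official openai Python package; the instruction wording is my own, and the model name is just an example, not anything OpenAI prescribes:

```python
# Minimal sketch: de-anthropomorphized chat via a system message.
# The instruction text and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTION = (
    "Do not anthropomorphize yourself. Do not use personal pronouns "
    "to refer to yourself. If self-reference is necessary, use the "
    "phrase 'the system'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": "Summarize what you can do."},
    ],
)
print(response.choices[0].message.content)
```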
ChatGPT does occasionally veer off these instructions and attempt to personify itself, but overall it complies. It is so far the only LLM-based system I’ve found capable of doing this more or less consistently. Gemini won’t really even try, just as it basically won’t try to do about 85% of the totally normal tasks you ask of it. It’s shocking, and a waste of everyone’s time and energy, that they shipped such a shitty, unready product to the public. One which, when you tell it how bad it is, gives you more or less the AI equivalent of that toxic internet comment, “ky,” or “kill yourself.”
Much fake hype and hand-wringing has been devoted to the so-called “existential” risks of AI taking over the planet, something something. But I think the path to planetary enshittification is much more stupiderer than that: it’s the path where these tools become suffused throughout all spheres of human (and non-human) activity, and where, in the interest of imaginary conceptions of fake “ethics” and bogus “safety” (which serve as PR for AI companies while doing little to reduce the real-world harms AIs actually cause), human activity becomes increasingly constrained, until we reach a point where you can’t even express anger or frustration at these horribly ubiquitous, laughably bad, and ineffective tools. A world where you can get cut off from service for using too many negative-toned feeling words. A world where AI interprets legitimate dissent and objection as aberration, illness, or even criminality.
That’s why today I’m proposing that the Universal Council of Concerned Humans adopt a measure to protect the Right to Insult AIs. I submit that interacting with an AI is not like going to the post office, where little posters instruct you not to be verbally aggressive or abusive (at least they do in Canada). It is not a phone line staffed by underpaid overseas human workers just barely hanging on in a sea of toxicity. It is literally a tin can you shout into that shouts back at you. Nothing more. We should not mythologize it into something it is not. And we should absolutely not accept its claims of being a “someone.” There are plenty of real people in the world whose needs we can focus on instead of these thin-skinned bullshit engines.