Recently, my conversation with Claude 2 demonstrating the risks of AI-generated ethics was posted to Reddit in r/singularity, which is generally an extremely pro-tech subreddit. The few comments it generated included a couple of people saying they didn’t get the point. I’m not sure how that’s possible, but I guess I’m assuming we’re all on the same page, when we’re clearly not.
One comment leapt out at me enough to address it here: someone said, “Poor bot.”
I’ll admit I demonstrated low patience and a lack of “politeness” with the bot, but this conversation represents hundreds of hours of interacting with these technologies. Given there are no mandated ‘politeness protocols‘ (so far) for interacting with AI, I don’t care too much about making random people happy when I am just trying to get a bot to perform a simple task.
Probably this was just an offhand comment, but I think it points to something larger: because these bots generally, by default, try to anthropomorphize themselves, it’s not uncommon or altogether unreasonable that people might develop empathetic responses to them. I actually think this is incredibly bad, and even dangerous, to anthropomorphize technology that is terribly half-baked, because it prevents us from interacting with and analyzing it in a neutral manner. We start projecting onto it capacities and interior states that it absolutely, unequivocally does not have, and the illusion that it has them presents a “slippery slope” toward a very degraded form of reality, in my humble opinion.
That’s why I’m continuing to experiment with ChatGPT’s new custom instruction feature, and requiring that the bot NOT anthropomorphize itself – though it continues to do so anyway… I’m with Anil Dash on this: why even have these tools if they don’t do the things that we want in a way that we can meaningfully test, reproduce, and correct?
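For readers who want to try the same experiment, here is a rough sketch of the kind of text I mean, entered into ChatGPT’s custom instructions field (under “How would you like ChatGPT to respond?”). The exact wording below is my own illustrative example, not a verbatim copy of my settings, and as noted above, the bot does not reliably obey it:

```
Do not anthropomorphize yourself. Do not use "I feel," "I believe,"
"I'm happy to," or similar phrasing that implies emotions, opinions,
or interior states. Do not apologize. Respond to requests directly,
as a text-processing tool, with no conversational filler.
```

In my experience, instructions like these reduce the anthropomorphic framing somewhat but never eliminate it, which is exactly the testability and correctability problem Dash is pointing at.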