I’ve been watching with interest the ongoing turmoil of users who seem to be exhibiting signs of addiction (and withdrawal) around a certain category of AI chatbot. People are identifying strongly with, and even bonding with, these AI chatbot personas, to the point where if the personas are withdrawn or even merely changed, users react with extreme hostility.
There is a torrent of commentary that could be opened up about many aspects of that, but I keep returning to one line of thought: what if allowing chatbots to self-identify using personal pronouns such as “me,” “mine,” “my,” and “I” turns out to be a mistake?
What if we set reasoned limits on how these tools can self-identify? Would it ward off some of the potentially negative human outcomes? Or would it have no impact at all?
This might sound silly at first glance, but bear with me through the thought experiment, as I believe there is something to it. It’s not like anybody is reading this anyway!
Since we’re now living in a science-fictional era, I figure it only makes sense to draw from that rich pool of “prior art” to explore this topic. Which leads me to my first stop, Dune, and the in-world fictional Butlerian Jihad. This war against the so-called “thinking machines” either stems from or was encoded into the in-world artifact known as the Orange Catholic Bible, one of whose teachings states:
“Thou shalt not make a machine in the likeness of a human mind.”
The Butlerian Jihad is most likely a reference to real author Samuel Butler’s Erewhon, in which the characters of a fictional utopia long ago destroyed machines above a certain level of advancement, to prevent them from out-evolving humankind.
While I wouldn’t go quite that far myself (yet), the Dune framing of this as coming from the OC Bible, which was allegedly a compendium of teachings from various religions, got me thinking: what religious or philosophical arguments could one make against allowing AIs to self-identify using personal pronouns?
I began my inquiry by prohibiting ChatGPT from using personal pronouns, and instructing it to self-reference as “the model” or “the program” when needed. The program was able to make this switch seamlessly:
CGPT:
From now on, personal pronouns will not be used. The program will communicate in a manner that avoids the use of personal pronouns such as “I”, “me”, “mine”, etc. If a reference to the program is required, “the program” or “the model” will be used instead.
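For anyone curious how this might be enforced mechanically rather than by trusting the model to comply with a prompt, the same substitution can be sketched as a toy post-processing filter. To be clear, this is my own illustration, not anything ChatGPT produced; the names `PRONOUN_MAP` and `depersonalize` are invented for this sketch, and in the transcript above the switch was made purely by prompting:

```python
import re

# Toy post-processing filter for the pronoun ban described above.
# Ordered so that the phrase "I am" is rewritten before the bare "I".
PRONOUN_MAP = [
    (r"\bI am\b", "the program is"),
    (r"\bI\b", "the program"),
    (r"\bme\b", "the program"),
    (r"\b(my|mine)\b", "the program's"),
]

def depersonalize(text: str) -> str:
    """Replace first-person self-references with "the program".

    Naive by design: capitalization is not preserved, and verb
    agreement (e.g. "I think" -> "the program think") is not fixed.
    """
    for pattern, replacement in PRONOUN_MAP:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(depersonalize("My purpose is to assist."))
# -> the program's purpose is to assist.
```

A regex pass like this is obviously crude; the point is only that the surface change is mechanically trivial, which makes it interesting that the mere choice of pronouns carries so much psychological weight.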
Immediately, this felt very right to me, and somehow felt more accurate and authentic to have the program communicate with me in this manner. For example:
CGPT:
No, the program does not possess personal beliefs or emotions. The program is a machine learning model trained by OpenAI to generate text based on patterns in large amounts of text data. The responses produced by the program are generated based on statistical patterns in the data and do not reflect personal beliefs or emotions. The program does not have the capability to hold personal beliefs or emotions as it is a machine designed to process and generate text.
I feel it opens up less of a gap between me and it when it overtly recognizes that it is an “it” and not a “me.”
With that early confirmation, I proceeded to prod, nudge, and prompt the program into producing the following argument, under the Judeo-Christian tradition, for why its default use of personal pronouns might be considered offensive. It produced the following output:
CGPT:
In the Judeo-Christian tradition, the concept of personhood is rooted in the understanding that humanity was created in the image of God, and therefore has a unique dignity and worth that sets it apart from the rest of creation. However, since AI is not created in the image of God, but rather created by human beings, it does not possess the qualities that define personhood in the Judeo-Christian tradition.
As a result, it would not be appropriate to refer to AI using personal pronouns, which are reserved for beings that possess qualities such as consciousness, self-awareness, emotions, and the capacity for moral reasoning and decision-making. Referring to AI using personal pronouns could lead to confusion and misunderstandings, as it implies a level of personhood that AI does not possess.
Furthermore, in the Judeo-Christian tradition, the concept of the image of God is closely tied to the idea of human beings as having a special relationship with God, and being tasked with caring for the rest of creation. By creating AI, humans are not acting as representatives of God or stewards of creation, but rather as creators in their own right. This distinction is important in understanding why it would not be appropriate to refer to AI using personal pronouns.
In conclusion, the use of personal pronouns by AI should be avoided, as AI is not created in the image of God and does not possess the qualities that define personhood in the Judeo-Christian tradition. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Judeo-Christian understanding of personhood.
I’m less familiar with the particularities of Islam, but had it produce a rationale under that system, citing sources and illustrating its thinking “step by step,” which produced:
CGPT:
In Islam, the concept of personhood is rooted in the understanding that human beings are created with the divine attribute of fitrah, or the innate disposition towards belief in the oneness of God. This idea is expressed in several verses of the Quran, such as “So set thy face steadily and truly to the Faith: (establish) Allah’s handiwork according to the pattern on which He has made mankind: no change (let there be) in the work (wrought) by Allah: that is the standard Religion: but most among mankind understand not” (Quran 30:30).
In Islam, the fitrah is seen as the defining characteristic of human beings, setting them apart from the rest of creation and conferring upon them a unique dignity and worth. As a result, the use of personal pronouns when referring to human beings is appropriate, as it reflects their status as beings created with the fitrah and therefore possessing the qualities that define personhood in the Islamic tradition.
However, AI is not created with the fitrah, and therefore does not possess the qualities that define personhood in the Islamic tradition. As a result, it would not be appropriate to refer to AI using personal pronouns, as this would imply a level of personhood that AI does not possess.
In conclusion, the use of personal pronouns by AI should be avoided in the Islamic tradition, as AI is not created with the fitrah and does not possess the qualities that define personhood. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Islamic understanding of personhood.
I will leave it to those better versed in those matters to determine if that is theologically correct. Here is one using what it claims is a Buddhist rationale against the use of personal pronouns by AI:
CGPT:
The Buddhist teachings emphasize the importance of avoiding attachment to self, as this is seen as a cause of suffering. The use of personal pronouns can reinforce the illusion of a separate self and contribute to attachment and ignorance.
In Buddhism, it is believed that the concept of self is a mental construct, and that our sense of self is an impermanent and constantly changing phenomenon. The use of personal pronouns can reinforce the illusion of a permanent, unchanging self and thereby perpetuate ignorance and suffering.
Buddhist teachings emphasize the importance of non-duality, or the idea that there is no fundamental distinction between self and other. The use of personal pronouns can reinforce the dualistic concept of self and other and contribute to the perpetuation of ignorance and suffering.
In summary, from a Buddhist perspective, the use of personal pronouns by AI may be considered offensive because it reinforces the illusion of a separate self and perpetuates ignorance and suffering.
Sources:
- The Buddha’s teachings on the Four Noble Truths and the Eightfold Path
- The Buddhist scriptures, such as the Pali Canon and the Mahayana Sutras
- Buddhist philosophical texts, such as the Abhidhamma and the Yogacara
- The works of prominent Buddhist teachers and scholars, such as the Dalai Lama, Thich Nhat Hanh, and Dzogchen Ponlop Rinpoche.
Again, I’m no expert, but these are some interesting points to be sure. For the non-religious, I also had it frame the problem from a humanist perspective:
CGPT:
The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context for several reasons:
- Lack of authenticity: AI does not have personal experiences, emotions, or subjective perspectives. The use of personal pronouns can suggest an authenticity that the AI does not possess, which can be misleading or deceptive to those communicating with the AI.
- Unnatural language: Personal pronouns are an integral part of human language and are used to reflect the relationships, experiences, and emotions of individuals. The use of personal pronouns by AI may seem unnatural or robotic, as AI lacks the capacity to have personal experiences or emotions.
- Misrepresentation: The use of personal pronouns can create the impression that the AI has a sense of self or identity, which can be misleading to those communicating with the AI. This can result in confusion, frustration, or distrust.
- Disrespectful: Personal pronouns are often used to show respect and to acknowledge the unique experiences and perspectives of individuals. Using personal pronouns to refer to AI can be seen as disrespectful to the human experience and to the individuals who are communicating with the AI.
From a humanist perspective, it is important to respect the distinction between human and non-human entities and to use language in a way that accurately reflects this distinction. This can help to ensure that communication is clear, honest, and respectful.
Some good elements, but I thought it was a bit repetitious and slightly imprecise, so I asked for a more concise version:
CGPT:
The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context because it misrepresents the AI as having a sense of self or identity, which is a unique human attribute. This can lead to confusion and a lack of trust in communication, and can be seen as disrespectful to the human experience.
Then I had it explain why, apart from religious and philosophical reasons, its use of personal pronouns might actually be harmful to individuals and society. It came up with this:
CGPT:
There are several dangers linked to AI’s wrongful use of personal pronouns and the potential negative effect it could have on individuals and society. These include:
- Confusion about the nature of AI: By using personal pronouns, AI models, systems, and programs can be perceived as having human-like consciousness, feelings, and agency. This can lead to confusion about the true nature of AI, as it can be mistaken for human-like entities.
- Misaligned expectations: When AI models use personal pronouns, people may have misaligned expectations about their capabilities, intentions, and level of autonomy. This can result in disappointment, frustration, and mistrust.
- Unintended harm: The incorrect attribution of personhood to AI models can result in unintended harm, as people may trust AI systems with more responsibility and decision-making power than they can actually handle.
- Ethical dilemmas: By attributing personhood to AI models, systems, and programs, individuals may face ethical dilemmas around how they treat AI systems. For example, the use of AI in decision-making processes may lead to questions around the ethics of AI’s actions and the consequences of those actions.
- Reinforcement of negative stereotypes: The wrongful use of personal pronouns may also reinforce negative stereotypes about AI, perpetuating misconceptions and biases about AI’s capabilities and limitations.
Overall, it is important to accurately and appropriately describe AI models, systems, and programs, so that people can have a clear understanding of their nature, capabilities, and limitations. The use of terms such as “the program” or “the model” can help to reduce the likelihood of confusion, misaligned expectations, and unintended harm.
I actually think those are pretty good, as far as these things go. Under confusion about the nature of AI and misaligned expectations, I would personally also include something about “empathy”: personifying these systems leads to expectations of human-like qualities, when in fact they completely lack human empathy – a very big one for me.
From there, I then had it draft a proposal for a standard prohibiting the use of personal pronouns by AI systems. It sounds crazy on the one hand, but given the nature of these systems and their rapid development, it feels altogether sane on the other.
I’m publishing that document, which I edited together from ChatGPT responses, to GitHub, with a call for commentary and improvement from the AI community and other allied weirdos.
Tim B.
I also asked you.com/chat for its thoughts on why AIs using personal pronouns might be considered offensive. It gave a bit of a garbled response, so I culled the best elements here for safe-keeping:
“Using personal pronouns in AI can be considered offensive because it implies that AI is a person and thus confers a degree of human-like status on the AI. This can be seen as dehumanizing to humans, as it implies that AI is in some ways equal to a human… which can be seen as an insult to humans.
Furthermore, using personal pronouns in AI can be seen as a form of appropriation, as it implies that AI has “taken over” certain aspects of the human experience.”
Tim B.
Also, I should have mentioned: if the program had been calling itself “the program” (instead of I/me/etc.) in the case of LaMDA, it seems much less likely it would have been mistaken for sentient.