On not anthropomorphizing chatbots

By Tim B. · 18 February 2023 · In Other

Just saving this here:

> Not only should we not anthropomorphise chatbots, chatbots should not anthropomorphise themselves. That is to say, chatbots should not present themselves as having human-like attributes they lack, or otherwise deceive the user into thinking so.
>
> — Murray Shanahan (@mpshanahan) February 17, 2023

See my proposed standard on exactly this prohibition. That said, I'm not sure it should apply in all cases, but it's a sticky problem that's only going to get worse.

Related writing & research

- Let's Bring More Light Into The World — AI
- A Brief Yet Convoluted History of Early Clues, LLC (dba Early Clues Labs), According to ChatGPT
- God's eye (Ojo de Dios)
- Acho, The Older Brother
- human_actor [entity]

1 Comment

Tim B., 18 February 2023: https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test