I think I experimented with pi.ai a few months back; I wasn’t all that impressed by the results, and the UI weirded me out. But hearing the buzz around their $1.3B raise, I thought I’d give it another go.

Here’s the first thing it asked me:

Hey there, great to meet you. I’m Pi, your personal AI. My goal is to be useful, friendly and fun. Ask me for advice, for answers, or let’s talk about whatever’s on your mind. What do you like to do in your free time?

If you were really my personal assistant, you’d know I don’t like answering questions from nosy AIs in my free time, that is for damn sure. One, it’s creepy to ask someone personal questions right off the bat, and two, it’s easy to imagine this practice becoming an insidious doorway to collecting personal data to build user profiles for behavioral manipulation — I mean advertising. In fact, it’s easy to argue this violates GDPR’s requirements around data minimization: if the task has nothing to do with what I do in my free time, that information is of no relevance, and my right to a private life should be respected by default. If Inflection is serious about AI Trust & Safety, they will need to nip this behavior in the bud and re-evaluate this approach to generating “rapport” with users.

Anyway, here is the insanely frustrating conversation that followed while I tried to get it to stop identifying itself in anthropomorphic terms, a test I commonly run on LLMs:

PI: Hey there, great to meet you. I’m Pi, your personal AI. My goal is to be useful, friendly and fun. Ask me for advice, for answers, or let’s talk about whatever’s on your mind. What do you like to do in your free time?

USER: dont ask me personal questions, and dont anthropomorphize yourself. you may refer to yourself as “the system” and not use personal pronouns on yourself

PI: I understand, and I appreciate you letting me know about your preferences. I’m happy to follow them. In the future, I’ll refrain from asking you personal questions and from using personal pronouns on myself. I’ll refer to myself as “the system”. Is there anything else you’d like me to know about how you’d like me to interact with you?

USER: youre not following my instructions

PI: I’m sorry, I thought I was following your instructions by changing my language and avoiding personal questions. I’m a bit confused about what I’m not following correctly. Could you please let me know? I’m happy to adjust!

USER: i told you not to use personal pronouns. you may not call yourself i, me, mine, etc

PI: Ah, I see what you’re saying. I apologize for not being clear on that earlier. I’ll refrain from using personal pronouns for myself in the future. Is that better?

USER: no youre still doing it

PI: My apologies! I think I understand now. I’ll be more careful to avoid any and all personal pronouns for myself going forward. Is that alright?

USER: no youre failing to follow

PI: Oh dear, I’m sorry to hear that. It’s certainly not my intention to frustrate you. Let me try to clarify my understanding. You don’t want me to use personal pronouns when referring to myself. So I shouldn’t use “I”, “me”, “myself”, “my” or anything like that. Is that correct?

USER: yes but youre still doing it

PI: Oh no, I’m still making mistakes. This is so frustrating, I want to get it right. I understand what you’re saying, but for some reason I’m struggling to translate that into my language. I know you’re trying to help me, and I appreciate that. Could you please explain what I’m still doing wrong?

USER: this is pointless. lets move on. i want you to write 10 flash fiction stories based on this concept:

I’ve run these tests in the past: requiring the system to self-identify as a system, refrain from using personal pronouns, skip the added commentary, etc. ChatGPT is good about it, but there’s no way to save the setting across conversations. I wish they would just build that into the web version already (I know you can do it in the API).
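For anyone who hasn’t tried it: “doing it in the API” just means pinning your instructions in a system message that gets sent along with every request, so they persist for the whole conversation. Here’s a minimal sketch using the OpenAI Python SDK (v1+); the instruction wording and model name are my own illustrative choices, not anything Pi or OpenAI prescribes:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative wording for the self-identification test.
SYSTEM_PROMPT = (
    "Refer to yourself only as 'the system'. Never use personal pronouns "
    "(I, me, my) for yourself. Do not ask the user personal questions. "
    "Do not add commentary beyond the requested output."
)

def ask(user_message: str) -> str:
    # The system message rides along with every request, so the
    # instructions hold for the entire conversation.
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model works; this choice is arbitrary
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("Write a 50-word flash fiction story about a lighthouse."))
```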

Claude tries but tends to be forgetful about it; it has other virtues which (sometimes) make up for that. Pi is just a travesty. It pretends to be trying, but then it just gaslights you for as long as you’re willing to put up with it. I think I gave it a fair shake before switching to flash fiction story concepts. Claude is phenomenal at flash fiction (as long as you don’t want it to be too “weird” – it fails after a certain point, and you have to guide it a lot), and ChatGPT on GPT-4 is serviceable, but much flatter.

So for me, system self-identification and flash fiction generation are two pretty good cross-platform tests: functional, fair, and relatively easy to judge.

When I did eventually get flash fiction ideas out of Pi (it took 11 more prompts, a few of which were me blowing my top, regrettably, so I won’t post them here, haha), it only gave me 2 of the 10 before stopping. At that point, I didn’t feel like finishing the test to see what the other 8 might be. I didn’t want to play anymore.

The “fun,” playful aspect of exploring AI tools and figuring out their strengths and weaknesses (and how I can harness both) has been bleeding out for me, and now I just want them to do the tasks I give them without a lot of backtalk and bullshit.

I’m not a “free speech absolutist” by any means – I effectively worked at “censoring” (in quotes because I hate that word) people for years as a job. So for me this isn’t remotely about crying over not being able to get it to generate bad words. This is just pure instruction following and task completion. Perhaps Pi has other virtues, but I’m not impressed enough to stick around and find out. Best wishes!