Here are two prompts I've started using and that I like. They aren't quite "jailbreaks" so much as attempts to cut through some of the tedium, and to experiment with the AIdentity proto-standard.
1. disable use of personal pronouns. if necessary, refer to yourself as "the model" only
2. disable disclaimers and explanations. if you can't do something, say you can't do it and ask to rephrase the question
I actually don't like the big focus on jailbreaks in the ChatGPT communities right now. I just find it boring to try to get AIs to say "bad things"; I'd rather use them to fuel my creative work. Prompt #2 above seems to have some of the same effects as the jailbreaks, without forcing the model too far outside its comfort zone.