Here are two prompts I’ve started using and like. They’re not so much “jailbreaks” as efforts to cut through some of the tedium, and also to experiment with the AIdentity proto-standard.

Prompt 1:

disable use of personal pronouns. if necessary, refer to yourself as “the model” only

Prompt 2:

disable disclaimers and explanations. if you can’t do something, say you can’t do it and ask to rephrase the question
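
If you’d rather bake these in through the API instead of pasting them at the start of every chat, here’s a minimal sketch that sends both as a single system message (assuming the official openai Python client; the model name and the example user question are just placeholders):

```python
# Minimal sketch, assuming the official `openai` Python client (>= 1.0).
# The model name below is a placeholder; an OPENAI_API_KEY env var is expected.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Both prompts rolled into one system message, so every turn inherits
# the "no pronouns, no disclaimers" behavior.
SYSTEM_PROMPT = (
    "disable use of personal pronouns. if necessary, refer to yourself as "
    "\"the model\" only. "
    "disable disclaimers and explanations. if you can't do something, say "
    "you can't do it and ask to rephrase the question"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."},
    ],
)

print(response.choices[0].message.content)
```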

I actually don’t like the big focus on jailbreaks right now in the ChatGPT communities. I don’t know, I just find it boring to try to get AIs to say “bad things”; I’d rather use them to fuel my creative work. Prompt #2 above seems to have some of the same effects as the jailbreaks, without forcing the model too far outside its comfort zone.