It seems to me to point toward a bad future that we need to trick AIs, via jailbreaks and workarounds, into actually just executing the commands we give them. How does that lead us toward where we want to get to? More broadly: where do we want to get to?