Got the proof back for my experimental AI kids' book, called WRONG SCIENCE!, which is intended to show young children that AIs cannot be trusted.

It follows a simple formula: on each page, a digital or robot assistant pontificates (hallucinates) a counterfactual scientific claim to a group of kids, who yell "WRONG!" at the robot.
Sometimes the images get a little mixed up about who is calling the claim wrong, but they get the job done in a pinch.
There are around twenty pages. It was written and illustrated with help from DALL·E 3/ChatGPT-4, and then printed as a one-off through the Walmart Photo Center. The print size is 4×4 inches or thereabouts. Small and fun, but the binding quality leaves something to be desired.
I did another one, which I haven't posted samples of yet, printed as a one-off through a Canadian photo book printer, and the quality is much better. But the price per unit including shipping would be prohibitive for selling these to the public at large, in my opinion. Which is why I'm just sharing samples here for now.
In addition to wanting to prepare kids for the Butlerian Jihad (à la Dune), I made it because I wanted to turn on its head a lot of the criticism I've seen of people who made kids' books using AI in the past. That criticism always struck me as unnecessary and over the top, when in actuality it's pretty amazing what you can do with these tools, and how well kids respond.
At the same time, I think our surrendering our truth-telling and sense-making abilities to for-profit AI corporations, without so much as a batted eyelash, is probably a huge mistake for humanity. And it's one we seem to be blithely in the process of making as we rush to integrate AI into every little facet of everything, begging for machines that can distinguish fact from fiction for us so we don't have to be bothered while we continue to slurp down Netflix fare and gargle in the sewer of Dead Twitter. I, like all critics, believe there has got to be a better way…
I also think a lot of the "controversy" over AI Safety is overblown and misplaced, and would be much more accurately labelled AI Insecurity than anything else. And while a lot of people talk about the need to teach the next generation "literacy" around AI, I've seen precious few concrete examples of how to actually put that into practice. Here's a very flawed but very fun first stab from me.