A friend sent me this piece by Substacker Robert Evans, gloomily entitled “AI Is Coming for Your Children: Con-men are flooding kindle with AI children’s books. They could do permanent damage to childhood literacy.”
> I don’t have much respect for Tim’s project, but its potential for harm is fairly minimal. All of his ‘works’ are geared towards adults. But there’s something more sinister lurking behind the florid headlines predicting AI doom or salvation. The robots are coming for your children.
It’s strange to me that we now live in a world where we have to worry that works of fiction and experimental art could have harmful consequences, but here we are. We’re all Tipper Gore now. (I went into this in somewhat more detail in my interview on This AI Life, btw.)
Just for fun:
First, before I react to any other specifics here, I think it would behoove everyone to slow down before letting our fears about AI bloom out of proportion to the current and near-future state of the technology. Yes, it changes things. Does it end civilization? Probably not (we’re already doing a great job on that without even taking AI into account).
Most of the rest of the article bashes a few different people who had the audacity to experiment with using AI to generate and sell children’s books. How dare they! Children’s books are a common punching bag in coverage of AI art and writing, because they combine the usual “AIs are stealing our jobs” fear with the double-plus-bad “won’t someone think of the children.”
> There’s only one problem: getting ChatGPT to write an entire proper novel (50k+ words) is basically impossible right now. That’s why all of Tim Boucher’s books were just a couple thousand words long.
If the author of this piece had actually reached out to me for comment – a courtesy they seem to have extended to just about every other “con-man” covered in the article – or if they had simply read a little of my blog, they would have learned that the books are this length because this is the length I want them to be. They are art books: focused first on the images, and secondarily on text threading those images together into an exploration of a neighborhood in latent space. They are not conventional books, and they are not for conventional people with conventional thinking on these topics.
In fact, their image and word counts most closely parallel comic books, another art form that was considered “trash” for the first half of its life and now sits at the commanding heights of pop culture. So let’s have this conversation again in 2 years, 5 years, 10 years, and see what holds up and what falls apart.
(!remind in 2,5,10 years) As Terence McKenna said, the truth can take care of itself.
> …text generated by these AI programs is really just chopped and screwed together pieces of actual stories written by people…
I’ve gone into why AI art/text is *not* like sampling here. I think the analogy is unworkable because it’s not how the training actually works: models learn by comparing measurements along many dimensions across vast numbers of examples, not by clipping bits and bobs from many different sources.
Regarding AI kids books (which I don’t make), the author writes:
> But these books could be quite damaging to little kids. […] It’s that AI books are so incomplete and broken they might fundamentally damage the way young children acquire reading skills.
Having a little experience in this area, this sounds to me like a mistaken conception of how children learn from books and reading. Kids are not so fragile that poorly drawn T-Rexes and generic-sounding ChatGPT text will ruin their brains. Have you ever seen kids’ drawings? Technically, they “suck,” but the human mind has an incredible ability to invest meaning and story into things, no matter what.
If you want to pick on something that will (imo) probably damage childhood development and socialization, look instead at TikTok, or at keeping kids on phones and tablets 24/7, training them to exist in a world of algorithms controlled solely by corporations hellbent on pitting us against one another in competitive, low-reward status games. Compared to that, simply picking up ANY book (and especially reading it with someone else), no matter how shitty, is gonna be a godsend.
> Since new AIs will be trained on this content there’s a high chance of creating a feedback loop which can lead to what’s called model collapse.
I see this argument trotted out a lot, but it seems very “who cares” to me. I’m not the least bit concerned about it. The whole point of my AI art experiments has been to show the AIs as they are today, flaws and all. The flaws are part of the story of the technology and its impact on humanity, not an aberration or some terrible monster to be feared. (Feature, not bug.) As the models change, so too will my art change with them, to reflect (and reflect on) whatever the current state is.
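For readers wondering what “model collapse” even refers to, here’s a toy sketch of the feedback-loop idea (my own illustration, not from the article): repeatedly re-fit a simple model to samples drawn from the previous generation’s model, and the distribution tends to narrow over time.

```python
import random
import statistics

# Toy illustration of "model collapse": each generation, fit a Gaussian
# to a small batch of samples drawn from the PREVIOUS generation's fit.
# Small-sample estimation error compounds, and the fitted spread tends
# to drift toward zero over many generations.
random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(10)]  # tiny training set
    mu = statistics.mean(samples)    # refit the model on its own output
    sigma = statistics.stdev(samples)

# The spread typically ends up a small fraction of the original 1.0.
print(f"spread after 200 generations: {sigma:.6f}")
```

Whether real-world training pipelines actually spiral this way is contested; curation and fresh human data break the loop, which is part of why I find the worry overblown.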
Anyway, best wishes!