It’s fine if that author didn’t like the 1 out of 97 (now 98) books that they gave a “quick read.” But I think it’s wrong to say they all suck just because they didn’t like the one they skimmed. It’s the equivalent of reading one or two paragraphs of a 1,000-page book and deciding the whole thing is bad.
The volume they didn’t like, Inside the Hypogeum, is actually one of my favorites. Among the many discoveries I have made along the way, that book was a turning point, because it proved that I could indeed effectively use AI to write good-quality lore that fits quite neatly into the canon of my universe (never mind all the awesome art in that book and all the others). And it is of course very dear to me because of the centrality of the Hypogeum to the greater Quatrian mythos. If you don’t already know Helmoquinth, Anthuor!, though, I can see that the whole thing is probably confusing and strange.
They also included in the Futurism piece a predictable criticism of that book as being, “a meandering explainer of the fictional locale and legends — but no discernible plot or developed characters to sustain reader interest.” Obviously, they missed the bullet point that this is world-building; that’s literally how this genre works. It is, as I like to say, lorecore.
Leveling this criticism at literally any of these nearly 100 AI Lore books is like saying “Appendix F of The Lord of the Rings is really boring.” And you know what, without any other context, or without reading the work it comments on and fills out (in my case, that would be The Lost Direction), Appendix F on its own may very well be boring. But when you take it as it was meant to be taken, among all of Tolkien’s other voluminous writings, it takes on a numinous quality. (And actually, when I look it up, LOTR Appendix F really is awesome!)
Realistically, I guess you could say that my main innovation here is that all my books are essentially appendices of one another. And that is by design. It is not for everybody, but the people who like it REALLY like it.
Originally, in the Futurism article’s first published version, they included the blatantly wrong line:
Boucher claims he’s had repeat readers who loved his books, but there’s little evidence of that.
I wrote to them asking for a correction, and showed them proof of sales, with buyer identities redacted.
Without responding to my email, they have since pulled that line and added the cryptic qualifier: “Updated to remove speculation about the existence of repeat buyers.”
For future reference, I eyeballed a sales spreadsheet and came up with this breakdown of repeat buyers:
- 27 people bought 2 titles
- 4 people bought 3
- 2 people bought 4
- 2 people bought 6
- 2 bought 7
- 4 bought 8
- 1 person each bought 10, 12, 14, 15, 16 & 20 titles
- 1 person bought 39!!
My math might be wrong, but my back-of-the-envelope calculation suggests that about 8-9% of buyers bought multiple books. And those repeat buyers account for about 40% of my total unit sales. Amazing!
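For anyone who wants to sanity-check the tally, here is a minimal script that sums the breakdown above. Note this only reproduces the repeat-buyer side; the overall buyer and unit totals aren’t listed in this post, so the 8-9% and ~40% figures can’t be recomputed from this alone.

```python
# Repeat-buyer breakdown from the sales spreadsheet,
# as {titles_bought: number_of_buyers}.
repeat_buyers = {
    2: 27, 3: 4, 4: 2, 6: 2, 7: 2, 8: 4,
    10: 1, 12: 1, 14: 1, 15: 1, 16: 1, 20: 1,
    39: 1,
}

# People who bought two or more titles.
num_repeat_buyers = sum(repeat_buyers.values())

# Total units sold to those repeat buyers.
units_from_repeats = sum(titles * buyers for titles, buyers in repeat_buyers.items())

print(num_repeat_buyers)   # 48 repeat buyers
print(units_from_repeats)  # 258 units to repeat buyers
```

So the list works out to 48 repeat buyers accounting for 258 units between them.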
That’s a pretty significant proportion of all my buyers being repeat buyers who know what the content is, know what they’re getting into, and come back for more again and again – because they love it! All with no advertising, no social media promotion, and no overhead…
The thing is though, Futurism never reached out to me for comment or additional information before or after publishing. I could have easily given them this information.
I wasn’t able to find any policies regarding editorial ethics on Futurism’s site – so I can’t speak to what standards they believe themselves to be bound by – but I do happen to know that the Society of Professional Journalists Code of Ethics does include a variation on the well-established journalistic practice of a right of reply. SPJ’s code states that ethical reporting should:
Diligently seek subjects of news coverage to allow them to respond to criticism or allegations of wrongdoing.
I have no idea what Futurism’s official policy is, but I can say that they did not follow this guideline in the case of their reporting on my books.
(Nerd Tangent Incoming: This notion of a right of reply seems to have roots in Roman law, by way of the Latin legal maxim, Audi alteram partem – “let the other side be heard as well.”)
Anyway, putting all that aside, and putting aside the fact that they didn’t like the 1 out of 100 books they looked at (to each their own), the criticism contained in the piece seems to follow the common shape I saw in comments elsewhere on the Newsweek post:
- AI writing is easy
- Because it’s “easy” the quality must be bad
- Because it’s so easy, he should be making more money off it
But none of those actually hold up to scrutiny.
If it’s so easy, why doesn’t everyone have hundreds of AI books they published? Hint: because it’s not actually that easy to get consistent results of a decent quality. Doing what I’ve done took a hell of a lot of work, and a hell of a lot of trial, error, and discovery along the way.
If the quality is so bad, why did one person alone buy 39 different titles, and dozens of other people bought multiple copies?
If AI writing is so easy and so bad, wouldn’t it more logically follow that I shouldn’t be able to make any money off it at all? Why then does the implied or express expectation of this kind of commentary always seem to point to some notion that I should actually be making more off this?
Shaming me for not making enough money in the short term is lame. Especially since this is a long game. This is literally long-tail book sales. One person in comments somewhere suggested that, with the time I’ve put in, I’m only making somewhere in the ballpark of $3 an hour (btw, this is not my full-time job). But that ignores how these products will compound over time. I can potentially sell these for the rest of my life, and beyond. What will my sales figures look like after I reach 500 interconnected books – or 1K – and how big will this get if I keep getting MSM press coverage (even occasional bad coverage like this)?
It’s also funny that Futurism chose to conclude their article by mocking my good reviews on a book I wrote prior to using AI (but the subject of which is entirely about AI, and how it will control our lives), Conspiratopia. I’m not sure what they think that proves. If they did their homework, then they’d see that it got even better reviews on Goodreads!
In the end, my sales figures seem to challenge the assumption that purely human-generated content is somehow preferable to audiences. Can it really be worse than a human author who hasn’t done their due diligence in composing an article?
Also, the subtitle of their article, “Human writers probably shouldn’t be too concerned… yet,” misses something important: I am an early mover in this space. Yeah, my moves are many and imperfect – but I don’t hide that. While other people are still busy debating the validity of using the technology, I have generated 100 books with it, gained a ton of experience, developed a dedicated fan base & sales channel that I control with little outside interference. As the progress of the tech continues to explode, this is going to put me extremely far ahead of the pack.
Oh, and by the way, just for fun, I ran Futurism’s article through ZeroGPT, a supposed AI-content checker, and it suggested their piece was 12.87% AI written. True or false? Unfortunately, there’s no way to be sure.