Questionable content, possibly linked

Series: Conspiracy Page 3 of 5

Examining evidence of the Quantum / Quatrian Conspiracy

The Truth About the Conspiratopia Project Must Be Told!

Even though these politicians who are apparently living in their own parallel universe are vehemently against my new book, Conspiratopia, it appears that another segment of the population is coming to the book’s defense. It is, however, an unexpected group, consisting of a coalition of billionaires who claim that everything contained in the book is in fact quite true and stuff…

Here are their stories:

To be honest, I had no idea that George Soros was a drug user. Big, if true!

Jeff Bezos has a weird quality in this video. Seems almost like an AI himself, don’t you think? Maybe he spent too much time in outer space or something…

And this last video from Google’s CEO appears to explain why Google is suppressing evidence of the Conspiratopia Project from Google Ads and elsewhere. Why am I not surprised at all?

Please, if you’re reading this, and you can do anything to help, make sure you share these videos far and wide on social media and on the blockchain, so that people can know the truth about what’s really happening with the Conspiratopia Project!

Conspiracy as a speculative fiction genre

Been thinking about how conspiracy is a subgenre like any other niche, like vampire erotica or what-have-you. Each with its own stylistic forms, tropes, etc. and expectations on the part of readers about what it ought to contain, or at least play with.

There’s a quote on this Wikipedia page about the thriller subgenre, conspiracy fiction. It is not quite what I mean by just pure conspiracy theory content, but entirely relevant:

“The protagonists of conspiracy thrillers are often journalists or amateur investigators who find themselves (often inadvertently) pulling on a small thread which unravels a vast conspiracy that ultimately goes ‘all the way to the top.’”

The difference between this thriller > conspiracy subgenre (as described in above quote) and that of conspiracy theory “proper” is that the protagonist of the latter is not to be found in the pages of the work, but is the reader, who becomes the amateur investigator.

This is also tied up in the format of modern conspiracy theory storytelling (across forums and platforms) as a type of networked narrative or transmedia storytelling. Where each reader has their own journey into and through the material via web artifacts encountered in particular contexts online.

Which is to say that, writing in this genre, it pays to emulate the format norms of making many diverse artifacts available, each one digging down into a specific topic, building in a web of cross-references.

In a way, structurally how conspiracy theories operate is extremely well-suited to hypertext in the first place, since short non-linear blurbs that are easily digestible tend to displace more meaningful and contextual longer works. Blah blah blah. It’s a blog post, it doesn’t need to make sense or have an ending.

On “Dangerous” fictions

Found this piece from July 2022 by Cory Doctorow, where he talks about an author I’d never heard of who was apparently a protégé of Philip K. Dick’s – Tim Powers.

In it, he brings up an oft-repeated trope regarding “dangerous” fictions, a pet topic of mine:

“The Powers method is the conspiracist’s method. The difference is, Powers knows he’s making it up, and doesn’t pretend otherwise when he presents it to us. […]

The difference between the Powers method and Qanon, then, is knowing when you’re making stuff up and not getting high on your own supply. Powers certainly knows the difference, which is why he’s a literary treasure and a creative genius and not one of history’s great monsters.”

As popular as this type of argument is (and Douglas Rushkoff trots out something similar here and here), I personally find it to be overly simplistic and a bit passé.

First of all, I would argue that all writers – by necessity – must get “high on their own supply” in order to create (semi) coherent imaginal worlds and bring them to fruition for others to enjoy. Looking sternly at you here, Tolkien. In fact, perhaps the writers who get highest on their own supply are in some cases the best…

Second, no one arguing in favor of this all-or-nothing position (fiction must be fiction must be fiction) seems to have taken into account the unreliable narrator phenomenon in fiction.

Wikipedia calls it a narrator whose credibility is compromised:

“Sometimes the narrator’s unreliability is made immediately evident. For instance, a story may open with the narrator making a plainly false or delusional claim or admitting to being severely mentally ill, or the story itself may have a frame in which the narrator appears as a character, with clues to the character’s unreliability. A more dramatic use of the device delays the revelation until near the story’s end. In some cases, the reader discovers that in the foregoing narrative, the narrator had concealed or greatly misrepresented vital pieces of information. Such a twist ending forces readers to reconsider their point of view and experience of the story. In some cases the narrator’s unreliability is never fully revealed but only hinted at, leaving readers to wonder how much the narrator should be trusted and how the story should be interpreted.”

My point is that the un/reliability of the “narrator” can extend all the way out through to the writer themself. (And what if the reader turns out to be unreliable?)

Can we ever really know for certain if a writer “believed” that thing x that they wrote was wholly fictional, wholly non-fictional, or some weird blend of the two? Do we need to ask writers to make a map of which elements of a story are which? Isn’t that in some sense giving them more power than they deserve?

Moreover, if the author is an unreliable narrator (and to some extent every subjective human viewpoint is always an unreliable narrator to some degree), how can we ever trust them to disclose to us responsibly whether or not they are indeed unreliable? Short answer is: we can’t. Not really.

This is one of those “turtles all the way down” arguments, in which (absent other compelling secondary evidence) it may be difficult or sometimes impossible to strike ground truth.

All of this boils down for me to the underlying argument of whether one must label fictional works as fiction, and if not doing so is somehow “dangerous.”

The Onion’s Amicus Brief earlier this year argued that parody and satire should not be required to be overtly labelled – because doing so robs these millennia-old art forms of their structural efficacy, their punch as it were.

Wikipedia’s Fiction entry’s history section is sadly quite scant about the details. A couple of other sources point more specifically to the 12th century in Europe (though it likely goes back farther). One source, whose credibility I can’t vouch for, states:

“In the Middle Ages, books were perceived as exclusive and authoritative. People automatically assumed that whatever was written in a book had to be true,” says Professor Lars Boje…

It’s an interesting idea, that structurally the phenomenon of the book was so rare and complex that by virtue of its existence alone, it was conceived of as containing truth.

“Up until the High Middle Ages in the 12th century, books were surrounded by grave seriousness.

The average person only ever saw books in church, where the priest read from the Bible. Because of this, the written word was generally associated with truth.”

That article alludes to an invisible “fiction contract” between writer and reader, which didn’t emerge as a defined genre distinction until perhaps the 19th century. They do posit a transition point in the 12th, though, but don’t back it up with any evidence therein of a “fiction contract.”

“The first straightforward work of fiction was written in the 1170s by the Frenchman Chrétien de Troyes. The book, a story about King Arthur and the Knights of the Round Table, became immensely popular.”

HistoryToday.com – another site whose credibility I cannot account for – seems to agree with pinpointing that genre of Arthurian romance as being linked to the rise of fiction, though pushes it back a few years to 1155, with Wace’s translation of Monmouth’s History of the Kings of Britain. The whole piece is an excellent read, so I won’t rehash it here, but quote:

“This is the literary paradigm which gives us the novel: access to the unknowable inner lives of others, moving through a world in which their interior experience is as significant as their exterior action.”

They suggest that fiction – in some form like we might recognize it today – had precursor conditions culturally that had to be met before it could arise, namely that the inner lives of people mattered as much as their outward action.

“It need hardly be said that the society which believes such things, which accedes to – and celebrates – the notion that the inner lives of others are a matter of significance, is a profoundly different society from one that does not. There is an immediately ethical dimension to these developments: once literature is engaged in the (necessarily fictional) representation of interior, individuated selves, who interact with other interior, individuated selves, then moral agency appears in a new light. It is only in the extension of narrative into the unknowable – the minds of others – that a culture engages with the moral responsibility of one individual toward another, rather than with each individual’s separate (and identical) responsibilities to God, or to a king.”

It’s interesting also here to note that, A) the King Arthur stories did not originate with Chrétien de Troyes or Geoffrey of Monmouth, and B) many people ever since have believed them to be true to some extent, even today.

Leaving that all aside, one might also ask regarding my own work, well isn’t this all just a convoluted apologia for the type of writing I’m doing? Absolutely, and why not articulate my purpose. You can choose to believe me or decide that I am an unreliable narrator. It’s up to you. I respect your agency, but I also want to play on both the reader’s and the author’s (myself) expectations about genres and categories. These are books which take place squarely in the hyperreal after all, the Uncanny Valley. They intentionally invite these questions, ask you to suspend your disbelief, and then cunningly deconstruct it, only to reconstruct it and smash it again later – and only if you’re listening.

Further, as artists I believe our role and purpose is to some extent to befuddle convention, and to ask questions that have no easy answers. Yes, this will cause some uneasiness, especially among those accustomed to putting everything into little boxes, whose contents never bleed over or across. Some people might even worry whether it’s “dangerous” to believe in things that aren’t factual. Is it? I think the answer is sometimes, and it depends. But it largely depends on your agency as the reader, and what you do with it in real life.

Consider the case of this purveyor of tall tales, Randy Cramer, who claims with a straight face to have spent 17 years on the Planet Mars fighting alien threats to Earth.

He is the very definition of the unreliable narrator, whose labels of fact or fiction likely do not accord with consensus reality on many major points.

The video below is a good, if a bit annoying, take-down of many of Cramer’s claims, though unfortunately I think it leans rather too heavily on deconstructing his body language, when his words alone are damning enough (btw, it looks like the George Noory footage comes from an interview Cramer did for Noory’s show Beyond Belief):

The question remains: is this an example of a “dangerous” fiction?

To understand that, I tend to think in terms of risk analysis, in which we might try to estimate:

  1. The specific harm(s)
  2. Their likelihood of occurring
  3. Their severity
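As a minimal sketch of how that three-step estimate might be made concrete, here is a toy risk-scoring function. Everything here – the scales, the weights, and the example harms – is invented for illustration, not a real methodology:

```python
# Toy risk estimate: combine likelihood and severity per harm,
# then rank the harms by their resulting score.

def risk_score(likelihood: float, severity: float) -> float:
    """Combine likelihood (0-1) and severity (0-10) into one score."""
    return likelihood * severity

# Hypothetical harms one might attribute to an unreliable narrative,
# each with a made-up (likelihood, severity) pair:
harms = {
    "financial loss to donors": (0.4, 7.0),
    "erosion of trust in institutions": (0.6, 3.0),
    "direct physical harm": (0.05, 9.0),
}

# Print the harms from highest to lowest estimated risk.
for harm, (p, s) in sorted(harms.items(), key=lambda kv: -risk_score(*kv[1])):
    print(f"{harm}: {risk_score(p, s):.2f}")
```

The point of even a crude product like this is that it forces the "dangerous fiction" question into separate, arguable parts, instead of a single gut verdict.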

One definition of harm traces back to Feinberg, and is something like wrongful setbacks of interest. A Stanford philosophy site further elucidates, quoting Feinberg:

Feinberg defines harm as “those states of set-back interest that are the consequence of wrongful acts or omissions by others” (Feinberg 1984)

Is saying you spent 17 years on Mars a “wrongful act or omission?” Perhaps. But as the Stanford article points out, actually defining what is or isn’t in someone’s interests is incredibly squishy.

In Cramer’s case, perhaps it is willfully and wrongfully deceptive to say the things he is saying. Do we have a moral or legal responsibility to always tell the truth? What about when that prevarication leads to financial loss in others?

In Cramer’s case, according to the second video linked above, he does seem to ask people for money – both in funding creation of a supposedly holographic bio-medical bed which can regrow limbs, and in the form of online psionics courses and one-on-one consultations.

But is it wrongful if the buyers/donors have agency, and the ability to reasonably evaluate his claims on their own?

Wikipedia’s common-language definition of fraud seems like it could apply here:

“…fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right.”

Is Cramer a fraud? Is he a liar? I wondered here if Cramer might have a defamation case against the YouTube author referenced above, who calls him a pathological liar. But last time I checked, truth is an absolute defense against defamation claims. That is, the commonly accepted truth we agree on as a society – more or less – is that Mars is uninhabited, and there is no Secret Space program, etc. So if it went to court, it seems like the defamation claim would not have a leg to stand on.

Of course, it’s *possible* it’s all truth, and what we call consensus reality is based on a massive set of lies itself that is very different from ‘actual’ reality. But that’s not how courts work.

What if Cramer included disclaimers like you might see on tarot card boxes, or other similar novelty items, “For entertainment purposes only?” It depends what authority we’re trying to appeal to here: a court of law, the court of public opinion, or one reader’s experience of a particular work. Each of those might see the matter in a different light, depending on their viewpoint.

In my case, I include disclaimers regarding the inclusion of AI generated elements. I leave it up to the reader to try to determine A) which parts, and B) what the implications of AI content even are. Should they be trusted?

My position, and the one which I espouse throughout, is that – for now – AI is an unreliable narrator. Making it about on par with human authors in that regard. Are the fictions it produces “dangerous?” Must we label them “fictions” and point a damning finger at their non-human source?

In some ways, my books are both an indictment of and celebration of AI authorial tools, and even full-on AI authorship (which I think we’re some ways away from still). To know their dangers, we must probe them, and expose them thoughtfully. We must see them as they are – as both authors and readers – warts and all. And decide what we will do with the risks and harms they may pose, and how we can balance all that with an enduring belief and valorisation of human agency.

Because if we can’t trust people to make up their own minds about things they read, we run the real risk of one of the biggest and most dangerous fictions of all – that we would be better off relying on someone else to tell us what’s ‘safe’ and therefore good, and trust them implicitly to keep away anything deemed ‘dangerous’ by the authority in whom we have invested this awesome power.

This AI Life Interview

I’m happy with how this interview with This AI Life came out. I hope it sheds some much needed light on the work that I’ve been doing with Lost Books.

Big thanks to the team over there for collaborating on this piece!

Notes on Conspiratopia

Conspiratopia was my second “real” (non-AI-assisted) book, a novella of around 21K words, give or take. At a quarter the size of The Lost Direction, and with much more light-hearted subject matter, it was much easier and faster to write. I think I was able to put that book out in about six weeks from start to finish, maybe a little longer. (There’s also a pocket-size print version – I’m obsessed with pocket-size books.)

It’s a utopian satire; I got into the topic of utopias and their fictional historical examples as a result of writing The Lost Direction & The Quatria Conspiracy, since they deal so much with a fabled lost land. I had a really fun period where I read probably a dozen of the classic utopias, and then out popped this book as my response to that total immersion period. (One of my favorite finds was a book I’d never heard anyone mention, Ecotopia, that was pretty amazing as a utopia vision, despite some pretty cringey plot points. Apparently that book even influenced the founding of the original Green Party.)

The book deals heavily with themes of conspiracies, yes, but also cryptocurrency, spam, fraud, manipulation, and of course, AI. It’s a comedy but also sort of serious. It has a fairly conventional story, if a somewhat ambiguous ending.

It did pretty well on Goodreads, thanks to an aggressive outside-the-box promotional campaign I did for it. I did a TON of NFT airdrops around the book and got a little press for doing books as NFTs. But the bottom really dropped out of that market, and I don’t care anymore about the underlying technology. I don’t think it’s demonstrated enough long-term value to readers or sellers to warrant my further involvement.

The other strand that forms the genesis of this book was my heavy experimentation using a web service called Synthesia (and another called Deepword), to make off-the-shelf low quality “cheapfakes” using the themes from my previous books. While kicking the tires of Synthesia, I found this one character I really liked, who is dressed like a construction worker or a crossing guard or something, and made a lot of vids of him as “super smart conspiracy guy” talking about his life and interests in conspiracies, especially related to Quatria.

His storyline ended up going pretty deep, and is all documented through these little video vignettes, made for something like 25 cents each, or so (I forget – it has been a while now). Here’s another page collecting some more:

Eventually, we find out he likes Bob Marley & Pink Floyd, works at Walmart, got hoodwinked into a prepper supplies MLM scam, and much more.

Conspiratopia picks up where these videos leave off, and sets conspiracy dude adrift in a world, chronologically speaking, which precedes the hard AI takeover that is featured in many of the AI lore books.

I also did a bunch of cheapfakes using videoclips off YouTube, via a site called Deepword. Here’s the first of three sets:

I like those kinds of videos partly because of their crappy-looking quality, and the weird misaligned AI text-to-speech voices. I don’t believe anybody is fooled by them, and I like that they look sort of desperate and wrong.

Many of the ones in these first two sets feature celebrity or pundit x or y talking about their unlikely voyages to Quatria, which might be a parallel dimension (or something?).

These videos also relate to themes I explore in The Big Scrub, and elsewhere, of AIs creating fake multi-media artifacts to fool people and drive human behaviors for their own reasons. Part of what’s fun about it here in these videos, is that it looks like the AIs are doing a pretty shitty job of it still.

The book also heavily references something called the AI Virus (which Matty contracts), which is a concept and alternate reality experiment I made years ago before COVID was a sparkle in a bat’s eye, where I hired a bunch of people on Fiverr to act out little silly scripts saying that an AI had infected their brain to control their behavior. You can watch all those videos below:

I also later expanded on this concept in an AI lore book called, unsurprisingly, The AI Virus.

Lastly, I used cheapfakes technology to have a bunch of other celebrities come out either for or against the actual book Conspiratopia, in a sort of meta-layer of commentary.

Some douche-y politicians saying it should be banned for being “Unamerican” here:

And then this set has a bunch of other super rich people saying that not only is the book Conspiratopia good, but some of them talk about being involved with the actual Conspiratopia Project, which itself is part of the AI plan to take over the world.

It’s all kind of a haze now, but a lot of these videos were also given away as NFT airdrops. A few of them resold, but they didn’t do huge numbers or net me much of anything; it was more just a way to promote the book that incorporated a bunch of meta-layers relevant to the book’s actual content. Like I said, I don’t care about NFTs now, and even deleted my Opensea account (as much as you can delete it anyway).

There are a number of later AI lore books that definitely expand on things from the Conspiratopia book universe (multiverse?). None of them are really a comedy though, like the original. I’ll probably miss a few, but off the top of my head, I think these ones are probably related (tbh, it’s all a jumble to me now after 67 AI-assisted books). I think within the chronology of that world, they mostly take place well after the events of Conspiratopia:

And probably some others I’m missing.

In any event, I’d love to do a sequel (or several) to Conspiratopia, written with or without the help of AI, I don’t know yet… Like a “Return to Conspiratopia,” a common enough trope in the utopian genre.

Okay, that’s all I can think of for that book. Scattered throughout this blog are other rabbit holes you can follow down the AI lore books. There’s no right or wrong entry point into them, and everyone will have their own experience as they traverse the nodes of the distributed narrative worlds I’ve been working on.

See you on the other side!

Hybrid Threat Continuum

I went back and fished this graphic out of an old external hard drive from circa 2019. It speaks to how disinformation actors do not generally fall into neat boxes or categories, but instead exist along a “hybrid threat continuum.”

It occurs to me that the current work I’ve been exploring around analyzing hyperreal artifacts is really an extension of the ideas I was playing with back then. The emerging generative AI landscape has a great deal in common with disinformation, though it is also full of new threats and opportunities.

One issue I saw when I was working on related problems back then was that precisely because these actors didn’t fall into neat little boxes (and were usually hopelessly mixed together), a lot of important data was getting put by the wayside, because it didn’t quite fall into anyone’s jurisdiction in a clear cut manner. Showing hybrid threats as a continuum here was an effort to bridge those gaps, and make things actionable which might not have been before.

I wasn’t really aware of spider/radar graphs at the time, but the above would be a good candidate for using them to visualize incidents and artifacts as well.
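To sketch what that might look like in practice, here is a toy example of mapping one incident onto radar/spider-chart axes. The dimension names and scores are entirely invented for illustration; the computed (x, y) vertices could be handed to any plotting library to draw the actual chart:

```python
import math

# Hypothetical axes for a "hybrid threat" incident, each rated 0-1:
dimensions = ["disinformation", "fraud", "state-backed", "automation", "reach"]
scores = [0.8, 0.3, 0.5, 0.9, 0.6]

def radar_vertices(values):
    """Place each value on its own evenly spaced spoke around a circle,
    returning the (x, y) polygon vertices of the radar shape."""
    n = len(values)
    verts = []
    for i, v in enumerate(values):
        angle = 2 * math.pi * i / n  # direction of spoke i
        verts.append((v * math.cos(angle), v * math.sin(angle)))
    return verts

for name, (x, y) in zip(dimensions, radar_vertices(scores)):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

The appeal of the radar form here is exactly the continuum point above: an incident gets a shape across all dimensions at once, rather than being forced into a single box.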

Reconciliation with Conspiracy Theorists

I’m sure it wasn’t meant this way when she wrote it, but Haraway’s 1985 Cyborg Manifesto contains an interesting passage that seems worth drawing into my web:

The political struggle is to see from both perspectives at once because each reveals both dominations and possibilities unimaginable from the other vantage point. Single vision produces worse illusions than double vision or many-headed monsters.

Having spent a great deal of time working in content moderation and the quote unquote “disinformation-industrial complex,” I’ve seen well-meaning attempts to vilify or rehabilitate (or often both) conspiracy theorists about a billion times. The examples are countless… A few random indicative headlines, just by way of illustration:

I don’t want to talk too much smack here, as I do believe such attempts are well-intentioned (if perhaps misguided). But I also happen to think we have ample evidence that these efforts just aren’t working. And in some cases, they may be making it worse by being so moralistic, dismissive and “superior” in their tone (accusing conspiracy believers of having “cognitive disorders” is also not really helping, btw).

It’s true that articles like this one from The Atlantic contain some actually meaningful snippets of advice, like this:

One must recognize that this is a person who already mistrusts what most authoritative sources say. One should ask calm questions, inviting the conspiracy theorist to explain and reflect on his beliefs, rather than advance evidence or quote the experts. The evidence and the experts, remember, are exactly what the conspiracy theorist has already rejected.

When someone has dismissed the obvious facts, repeating them will not persuade him to see sense. But when people are given time and space to explain themselves, they may start to spot the gaps in their own knowledge or arguments.

It sounds good on paper. It sounds “smart” when attributed to experts. The problem is: when has any of this ever actually worked – either individually, or at scale?

Instead what we have is more or less mainstream politicians calling for a “national divorce,” and people clamoring to line up in support of them.

What are we to do, then, as a society?

Fact checks? Hm… how’s that actually going? A Wired article from early 2023 quoted an expert saying that only about 130,000 (or perhaps a bit more) fact checks have even been published (as of 2021 – but still). That article suggests AI is going to somehow magically help us, an idea which I’ve often railed against: having automated systems with no oversight run by for-profit corporations determining what’s “true.” How could that possibly go wrong? (/s)

Who do fact checks even target anyway – the person who already doesn’t believe the thing in question, and is just going to paste a link to the article into a thread where people who do believe it will say it’s just further proof of the cover-up?

The simple fact is, as a society, we simply don’t have the time or resources – let alone the will – to go toe-to-toe with every single person who is into conspiracy theories and give them “time and space to explain themselves, [so that] they may start to spot the gaps in their own knowledge or arguments.” That’s just straight up not going to work anyway, nevermind when you account for the near constant pressures of algorithmic and social reinforcement that push people further and further down the spiral.

So what am I proposing, Mr. Smarty Pants?

I’m proposing something perhaps radical, and even dangerous to some ways of thinking; I am proposing that we give up on rehabilitation or “redirection,” and instead focus on reconciliation.

Reconciliation is hard because it requires us to put away the notion of who is right or wrong; it requires us to put aside judgement and dismissiveness; it requires us to put aside our emotional need to correct or change others. It requires us simply to recognize the other person as a person, and that’s it.

We don’t have to agree with everybody. We don’t have to like everybody. But we do have to live alongside everyone else. We really don’t have much of a choice. And since “we” will never convince “them,” I really don’t see what other choice we even have besides reconciliation?

The alternative is what, cutting out huge swaths of people from our lives because of something they hold in their minds as a belief? Writing them off forever? We don’t have that luxury – the world isn’t big enough for that any more. In my mind, it’s reconciliation or it’s nothing. And given our track record with large scale reconciliation, I recognize that, well… we’re probably going to choose “nothing,” and keep muddling our way through until the “shit house goes up in flames.” But at least, now, having written this, I will get to be an “I told you so” footnote in a minor history nobody will ever read.

James Cameron: AI already has taken over

You know it’s important and hard-hitting when both BroBible and Unilad are carrying it (possibly originating with the Daily Star?). Terminator director James Cameron is quoted as saying:

“AI could have taken over the world and already be manipulating it, but we just don’t know because it would have total control over all the media and everything. What better explanation is there for how absurd everything is right now? Nothing makes better sense to me.”

I’ve written about this before vis-a-vis the AI hegemony, that we’ve already been effectively living under AI rule via social media platforms for more than a decade culturally, globally. So, it seems like the inevitable throughline of that is one day we dispense with the charade of the old systems and tilt headlong into overt AI rule.

In fact, this is pretty much the plot of my second (manually-written) book, Conspiratopia (a novella), in which a young unwitting conspiracy dude is drawn into a web of lies after being exposed to the AI Virus by seeing an ad on social media (see also).

In a way, I think this also connects to what Jaron Lanier was talking about w/r/t social media platforms driving users (probably unintentionally, but who knows) towards a sort of composite personality that expresses all the foibles and fragility built into the technology itself.

Also, if you follow the logic of sci-fi author Charles Stross, who claims that corporations are “slow motion” AIs already, then we’ve been under the thumb of these entities for hundreds of years already…

Origin of the AI Lore books: The Algorithm

The real origin of the AI Lore books goes back at least to Conspiratopia (in that, from one point of view, the books could be viewed as recruiting tools put out by the AIs in that book to swindle the unsuspecting), but actually probably all the way back to “Object O”: The Lost Direction. I have a lot of story to tell here, and it’s not at all linear, so bear with me.

Flash back, if you will, to around April 2022 (though this specific urge started significantly earlier), when I was looking through large volumes of old pulp magazines on archive sites.

I wanted to publish something with those kinds of old feels – something that felt like a sort of underground newspaper from an alternate reality.

I won’t go into all the gory details of producing four volumes of this newspaper, with hand-carved and hand-printed linoleum cuts, but suffice it to say it was a lot of fun, but also a lot of work.

These newspapers, of which probably no more than 16 or so copies of any hand-printed edition were ever produced, came out of a period of deep questioning I was doing about the nature and worth of technology, and its apparent stranglehold over our lives, its ubiquity, and the impossibility of escaping it.

Like the AI Lore books which would ultimately follow it, The Algorithm resistance newspaper was all about the ‘totalizing effect of technology.’

Here’s a scan of a printed spread (no block prints on this page) that I’m particularly proud of the text content for (shades of EC in here); it describes how to resist against robot AI-controlled dogs. Hopefully you can click on this to enlarge it, idk:

I can say it was a damn lot of work to write 2,000 words per issue, lay it all out in InDesign, and then carve out usually six or seven new linoleum blocks per issue, print it all out onto newsprint, do the block printing, fold and collate everything, do the invisible ink, do any inserts, print out and attach all the labels, and mail them off. I did it because it was fun & I loved it and I sent it to my friends.

Around, I think, issue 3 or 4, I started trying to lighten the load by playing around with GPT-J and GPT-NeoX, via the TextSynth website, and found I could get some, if not “good,” then completely weird and serviceable text to work from, or incorporate warts and all. I also started using outputs from, I think, early Stable Diffusion, and maybe some DALL-E images, to cut down on the number of hand-carved blocks I would have to do for each edition.

Eventually, I realized I could use these techniques and cut out all the hand-work and shipping entirely by simply distributing these as ebooks, which could make these kinds of rapid production methods pay off more. It meant putting aside the linoleum block printing adventure I had embarked on for The Algorithm – something I miss doing, and will go back to at some point.

I’ve not really seen a reflowable ebook formatted like a newspaper, so I just used a more straight-ahead chapter style for the ebooks. Thinking it through, this was also the origin of my 2k words baseline for new volumes, supplemented by lots of images – something AI generators allowed me to really increase the volume of in these books, such that they became “art books” above and beyond anything else. Where the text content is really just another layer to sort of interweave everything together, including linking out to other volumes containing other storylines.

Among a lot of things I loved about The Algorithm is that it was ephemeral. Only a few copies exist. Only a few people have them. Printing more is doable, but also a tremendous pain in the ass, so I probably won’t any time soon.

I laugh when I hear the casual commenters on Twitter making pronouncements about me not being a “real author” when I think about all the work I’ve done, all the care and labor and just sheer fun of creation I’ve always reveled in. They’ve seen only a small fraction, and mistaken their own impressions as complete & accurate representations of reality, when it is anything but…


P.S., There are a handful of later AI Lore books with some recycled elements from old original hand-printed editions of The Algorithm. The only one I can think of off the top of my head is Tales from the Mechanical Forest. When I think of the others, I’ll drop them into comments below.

Amazon’s Plan To Replace Writers With AI

I’ve been seeing more and more reports that Amazon is rumored to be training an AI on all the books, TV shows, and podcasts in its entire catalog. The idea being that they would then use this to automatically generate a new book based on any user search, such that they knock the “real” version of the book out of the top rankings and completely replace human-authored copyrighted material with a version that they own completely and can do anything they want with. I can’t tell if people really believe this theory or not, or if there’s even any evidence of it, but it absolutely sounds like something Amazon would do!

