Questionable content, possibly linked

Series: Hyperreality

Examining the blending of fact & fiction in online, augmented, and hyperreal environments.

Conspiracy Theory Is Actually Just Postmodernism In Disguise

I should preface this by saying I don’t know anything “officially” about postmodernism outside of what I read on Wikipedia and Googling around (and a really stupid Jordan Peterson article I won’t link to). And the fun part is, that’s kind of postmodern itself. You can become an expert in five minutes. And then of course being an expert automatically makes you untrustworthy as a source. It’s ninja turtles all the way down, I tells ya…

Anyway, I gathered some of what I found already here, so I won’t rehash that all at length, but wanted to pull on a couple strands I didn’t cover there.

Namely, that Lyotard himself defined the postmodern as “incredulity toward metanarratives.”

Anyone who has looked at conspiracy theory stuff online will know that people are always saying, in a tongue-in-cheek way: “Don’t question the narrative.” That is, they feel oppressed by, or don’t agree with, whatever they perceive to be the “official” metanarrative.

What’s a metanarrative in the context of postmodernism? Also from Wikipedia: “a global or totalizing cultural narrative schema which orders and explains knowledge and experience.”

So when they jokingly say, don’t question the metanarrative, they are literally demonstrating Lyotard’s own definition of the postmodern. They are incredulous of the metanarrative. They want to question it, to challenge it, to tear it down and replace it with their own version of the truth. Their own metanarrative.

This is a decent WaPo article by Aaron Hanlon from August 2018 about postmodernism. I’ll pull out a few choice quotes. Regarding Lyotard’s book, The Postmodern Condition, Hanlon writes that it:

“…described the state of our era by building out Lyotard’s observations that society was becoming a “consumer society,” a “media society” and a “postindustrial society…”

Hanlon continues:

“This was a diagnosis, not a political outcome that he and other postmodernist theorists agitated to bring about.”

“[…] Right-leaning critics in the decades since Bloom have crassly contorted this argument into a charge that postmodernism was made not by consumerism and other large-scale social and technological developments, but by dangerous lefty academics, or what Kimball called “Tenured Radicals,” in his 1990 polemic against the academic left. At the heart of this accusation is the tendency to treat postmodernism as a form of left-wing politics — with its own set of tenets — rather than as a broader cultural moment that left-wing academics diagnosed.

“[…] This “gospel” characterization is misleading in two ways. First, it treats Lyotard and his fellows as proponents of a world where objective truth loses all value, rather than analysts who wanted to explain why this had already happened.”

So if we accept Lyotard’s original assertion, that postmodernism is characterized by mistrust of “grand narratives,” it unequivocally has that in common with garden-variety conspiracy theory. But not only that: right-leaning conspiracy theory has constructed its own grand narrative, in which postmodernism itself is the grand narrative to be mistrusted… which is entirely postmodern in itself, if you think about it. A subset of postmodernism attacking its own superstructure…

It would be funny if it weren’t so foolish and tragic. Because this kind of blatant self-denial creates a somewhat predictable (and boring) loop. Conspiracy theory denies it has anything in common with postmodernism. It then projects its shadow contents onto the “other” & vilifies the perceived differences. When, in actuality, they’re rooted in the exact same thing: the same social-cultural phenomenon that’s been unfolding for decades now, generations. Brought on by consumerism, industrialization, a media-saturated society, etc. Which is what the original theorists were observing all along, and which is still happening today. Nay, which is in utter free fall today. Hyperreality is in overdrive, and virtual & augmented reality haven’t even kicked in yet. HFS. Are we ever in for it!

I mean, no wonder people are clinging to any & every life raft they can find. I don’t blame them. I do blame the short-sightedness of getting bogged down in dumb political-territorial games & losing track of the larger phenomena at play, though. When instead, we could be working on finding a way through it all. We could have so much greater insight into our shared condition than just fighting, or getting sucked down into the quagmire of loser scripts that constitutes conspiracy theory outright.

The world is literally never going to learn, though. I’m old enough to accept that now. At least I got to write a nifty blog post about it.

New Age Is Definitely Postmodern

If conspiracy theory can accurately be called postmodern, and I’m pretty sure it can, then New Age can most definitely bear that label as well.

Rather than try to do the analysis myself, here is an excerpt from a more elaborate 2018 blog post on the subject (go read the whole thing):

“The New Age Movement also rejects the authority of the established church, with its belief that spirituality is within, and that it is up to each individual to find their own path to inner truth.

The New Age Movement accepts relativism – there are diverse paths to spiritual fulfillment, and no one authority has a monopoly on truth, which fits in with postmodernism’s rejection of metanarratives.

The spiritual shopping approach of the New Age seems to correspond with the centrality of consumer culture to postmodern societies.”

Makes sense if you think about it!

Will labeling deepfakes change anything?

In a word, no.

Here’s a quote from a BBC article that expresses the opposite hopeful view, which I don’t share.

“Labelling is probably the simplest and most important counter to deepfakes – if viewers are aware that what they are viewing has been fabricated, they are less likely to be deceived.”

As I recall, not that many years ago, Facebook tried exactly this with labeling articles that failed fact checks, and it was not a success. First, not everyone believes fact checkers in the first place, especially ones whose perceived partisan slant they don’t agree with. So when they see a fact check from a perceived “enemy”/outgroup, they instead take that as proof that the claim IS real. Because that’s the name of the game in the stupidly hyperreal world we live in, where everyone is just out to have their existing beliefs confirmed.

Second, since we live in hyperreality and not plain old vanilla reality anymore, whether or not something is labeled as true or false and by whom or where is utterly inconsequential. The consequential thing is: is it titillating? Does it confirm my worldview? Does it give me status to share it or attack it on social media?

There is similar talk in a WaPo article here about using hidden watermarks, which could then be used to generate automated labeling by platforms:

Even better would be hidden watermarks in video files that might be harder to remove, and could help identify fakes. All three creators say they think that’s a good idea — but need somebody to develop the standards.

I mean, fine, try that. See if it has the impact you think it will. I doubt that it will, but you’re welcome to try. In either case, by the time such technology is ready for prime time (standards developed, legislation in place, platforms on board), the damage will have been done.

And then there will be services you can run yourself on desktop, or that simply don’t give a shit about “standards,” or are in a jurisdiction that doesn’t give a shit about standards. And you’ll be able to go to them for the features you can’t get from the more mainstream services. And we’ll be right back where we are now, but with the bonus of a few years of improvements to the underlying technology.

It’s time to reach deeper for solutions. The same old tried-and-failed hacks are not going to solve it.

One day soon…

One day soon, you’ll be able to manipulate any piece of media to say or do anything you want it to, seamlessly, and with full quality, such that it is nearly indistinguishable from the authentic source.

Shadowbanned by the Cabal on TikTok?

I know this is one of the favorite tropes of conspiracy people, but I suspect my amazing account on TikTok was shadowbanned on account of (pseudo) conspiracy content.

The thing about pseudo-conspiracy content, of course, is that it is by and large indistinguishable to the naked eye (or the algorithmic eye) from “real” conspiracy content. Commentary and satire also get thrown onto the ash heap of history, without regard for fundamental differences.

The thing that’s forever tantalizing about the concept of shadowbanning is that it is all but impossible to find “proof” that it is occurring, especially with the poor-quality stats platforms generally give to users.

For illustrative purposes, here is the past 60 days of engagement:

Embeds on TikTok don’t tend to play nicely with WordPress, but here is the YouTube version of the video that caused the traffic spike a little after June 20, 2021 listed above:

I had early success with this account by playing on Mandela Effect stuff, which is by and large harmless. After the success of the above, and a few follow-ups, I ended up leaning more into the conspiracy direction. Here is the corresponding time period’s increase in followers:

You can see the followers jumped dramatically around the same time period as the video above was posted, and then basically plateaued. But, with such a sudden and dramatic increase in followers, one would theoretically *expect* that any content posted after that bump would automatically get more traffic than content posted prior to it, purely based on distribution to followers.

But if you look at the traffic graph, that is not the case.

One thing I’ve learned working for platforms, however, is that algorithms are inscrutable, even to those who develop and maintain them. The fact of the matter may very well be that there is no explanation. Or if there is, it would just be based on a “best guess” by an engineer, and that’s about as far as it could be taken.

Users of platforms, however, like to believe in the fiction that everything behind the scenes is perfectly and intentionally designed to act a certain way. While that may be the case in broad strokes, it is rarely the case when applied to a specific set of detailed examples. We might be able to approximately match the overall system design when examining a single example, but as I said, it’s rare that you can perfectly suss out what is going on. At least in my years of experience in the matter.

That doesn’t stop platform users from 1) theorizing, and 2) assuming that they are being targeted, and 3) assuming targeting is happening because of their political beliefs.

Here’s an interesting example I noticed while toying with pseudo-conspiracy content on TikTok:

This is a search results page for the somewhat vanilla term “cabal” on TikTok (above). The included text reads:

No results found

This phrase may be associated with behavior or content that violates our guidelines. Promoting a safe and positive experience is TikTok’s top priority. For more information, we invite you to review our Community Guidelines.

If you poke around on Google, you can still pull up some tiktok.com results using the word “cabal,” but you can’t do it natively in TikTok’s search (at least on web; I assume the app is the same). Here’s why, according to the BBC, July 2020:

TikTok has blocked a number of hashtags related to the QAnon conspiracy theory from appearing in search results, amid concern about misinformation, the BBC has learned…

“QAnon” and related hashtags, such as “Out of Shadows”, “Fall Cabal” and “QAnonTruth”, will no longer return search results on TikTok – although videos using the same tags will remain on the platform.

Now, my usage of #cabal was imitative of QAnon conspiracies, but I intentionally never linked my account to that overall cesspool of content, to which I am personally vehemently opposed.

The word cabal itself is, of course, a neutral and perfectly valid English word:

noun

1. a small group of secret plotters, as against a government or person in authority.

2. the plots and schemes of such a group; intrigue.

3. a clique, as in artistic, literary, or theatrical circles.

There’s even an overtly non-conspiratorial definition of that word, as you can see. And the etymology of the term is even more interesting:

cabal (n.)

1520s, “mystical interpretation of the Old Testament,” later “an intriguing society, a small group meeting privately” (1660s), from French cabal, which had both senses, from Medieval Latin cabbala (see cabbala). Popularized in English 1673 as an acronym for five intriguing ministers of Charles II (Clifford, Arlington, Buckingham, Ashley, and Lauderdale), which gave the word its sinister connotations.

And since that definition links to this one, including for reference:

cabbala (n.)

“Jewish mystic philosophy,” 1520s, also quabbalah, etc., from Medieval Latin cabbala, from Mishnaic Hebrew qabbalah “reception, received lore, tradition,” especially “tradition of mystical interpretation of the Old Testament,” from qibbel “to receive, admit, accept.” Compare Arabic qabala “he received, accepted.” Hence “any secret or esoteric science.” Related: Cabbalist.

So, because of a few bad actors, a term with many layers of rich historical significance can just be disappeared from a platform.

And yet, there’s no issue using other phrases related to conspiracy in general, and they have literally BILLIONS of views:

Whereas, if you type in #cabal (or #qanon), you are not presented with the dropdown to select the “official” tag, and are not told any tally of existing views.

What I take issue with here is not the banning of QAnon related content. I support that, and god only knows how much of it I myself banned while in a related position to do so. What I take issue with instead is the heavy-handedness, inconsistency, and reactiveness of platforms in removing this content.

If they wanted to really make a difference, they should have all done it across the board at least a couple years earlier. It was always clear what was happening, and always clear that it was dangerous. The only thing that changed, as far as I could tell, is that news outlets eventually caught wind of it, and started reporting on it, and challenging platforms to remove it with the threat of public embarrassment.

As the BBC article linked above states:

“TikTok said it moved to restrict “QAnonTruth” searches after a question from the BBC’s anti-disinformation unit, which noticed a spike in conspiracy videos using the tag. The company expressed concern that such misinformation could harm users and the general public.”

As also quoted above though, TikTok apparently did not remove the majority of that content. They simply made it harder for the average user to find. But only in that one narrow instance.

By contrast, it’s still easy to find dozens if not hundreds of “antivax” accounts, no problem. Even if that content “could harm users and the general public.” There are tons and tons of those accounts which remain active and searchable:

Now, you can choose to do personally whatever dumb thing you want with regards to the COVID vaccines, or vaccines in general. My point in illustrating this is that there is an obvious and known public harm, and yet little to nothing is done in this instance. And the cause is almost certainly that they have not (yet) been embarrassed by the BBC’s “anti-disinformation” team.

It’s worth noting, however, that they do apply a TINY label on videos which they (apparently) detect as being related to COVID misinfo (see the yellow boxes I added below to highlight the label):

Does any person in their right mind think this little tiny warning stops conspiracy people from conspiracizing? Get real. It’s a joke. Here’s how it looks on a video details page on web:

As you can see by this person’s video content in the screenshot above, when you ban or remove conspiracy content, what this signals to the conspiracy person producing or sharing that content is that they are “on the right track.” Because it’s clear to them the platforms are owned by or in cahoots with “the cabal.” (or else why would that word itself be forbidden on the platform?)

No amount of fact checking, interstitial labels, or burying things from search results is going to disabuse those people of those notions. It’s just not going to work. Like ever. I’m not being hyperbolic. I’ve seen this play out in the wild thousands of times over the course of 5+ years. The pattern is always the same. “We” are not winning.

So what should platforms do? Just not police their content? Let anything go? Hardly. They should “do their best” to maintain the service that they own and pay for in roughly the shape that they determine to be the right one. But they should do it with the knowledge that the measures they take to suppress content that doesn’t fit that desired shape do not necessarily result in positive outcomes, or solve the fundamental societal problems which are at the root of these online behaviors.

I know no one wants to hear this. But there is no simple fix. Platforms are broken because society is broken. Truth is broken and devalued because Hyperreality is simply more engaging. If we want to have conversations with people that result in meaningful changes on these issues, we’re simply going to have to find new and more creative ways to do it, because this present set of approaches is not working.

Disorientation in AI writing

There’s a really good article on The Verge about authors who use AI tools like Sudowrite as part of their writing workflow. Lost Books has released about a dozen books in this genre now, which comprise the AI Lore series.

Anyway, there are a few themes I want to tease out, namely the feeling of disconnection & disorientation that seems to be a common experience among authors using these tools.

One author quoted says:

“It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”

And:

“But ask GPT-3 to write an essay, and it will produce a repetitive series of sometimes correct, often contradictory assertions, drifting progressively off-topic until it hits its memory limit and forgets where it started completely.”

And finally:

“And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said.

Losing the “thread of truth” strikes me as utterly & inherently postmodern af. It’s the essence of hyperreality.

It is the essence of browsing the web. You pop between tabs and websites and apps and platforms. You follow different accounts, each spewing out some segment of something. And then somewhere in the mix, your brain mashes it all together into something that sort of makes sense to you in its context (“sensemaking”), or doesn’t — you lose the thread of truth.

To me, hyperreality as an “art form” (way of life?) has something to do with that. With the post-truth world, as they say, where truth is what resonates in the moment. What you “like” in platform speak, what you hate, what you fear, just then, just now. And then it’s forgotten, replaced by the next thing. Yet the algorithm remembers… or does it? It may be “recorded,” but it knows little to nothing on its own, without the invocation.

Forgive me as I ramble here, but that’s why this is a blog post…

Pieces I’ve been meaning to put together in this space.

In no particular order:

“Networked narratives can be seen as being defined by their rejection of narrative unity.”

https://en.wikipedia.org/wiki/Networked_narrative

The PDF Wikipedia goes on to reference regarding narrative unities has some worthwhile stuff on the topic. From it, we see these are perhaps more properly called Dramatic Unities (via Aristotle, an ancient blogger if ever there was one); Wikipedia redirects to Classical Unities here.

1. unity of action: a tragedy should have one principal action.

2. unity of time: the action in a tragedy should occur over a period of no more than 24 hours.

3. unity of place: a tragedy should exist in a single physical location.

Popping back to the Wikipedia networked narrative page:

“It is not driven by the specificity of details; rather, details emerge through a co-construction of the ultimate story by the various participants or elements.”

Lost the thread of truth, “not driven by the specificity of details.”

While we’re in this uncertain territory, we should at least quote again Wikipedia on Hyperreality:

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

What I guess I want to note here – in part – is that what the Verge article quoted at top treats as obstacles, bugs, or room for improvement (the lack of apparent coherence in AI-generated texts) is actually probably their primary feature?

Disorientation as a Service.

Jumping now to latent spaces, as in AI image generation:

“A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items which resemble each other more closely are positioned closer to one another in the latent space.”

This Vox video is probably the most complete and accessible explanation I’ve seen of how image diffusion models work:

My understanding of it is basically that a text query (in the case of Dall-E & Stable Diffusion) triggers access to the portion(s) of the latent space within the model that correspond to your keywords, and then mashes them together visually to create a cloud of pixels that references those underlying trained assets. Depending on your level of processing (“steps” in Stable Diffusion), the diffuse pixel cloud becomes a more precise representation of some new vignette that references your original query or prompt.
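To make that mental model concrete, here is a deliberately toy sketch of a diffusion sampling loop in Python. Nothing here is the actual Stable Diffusion code; `toy_text_encoder` and `toy_denoiser` are stand-ins I invented for the real text encoder and trained denoising network:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_text_encoder(prompt: str) -> np.ndarray:
    # Stand-in for a real text encoder (e.g. CLIP): prompt -> embedding vector.
    return rng.standard_normal(8)

def toy_denoiser(latent: np.ndarray, step: int, cond: np.ndarray) -> np.ndarray:
    # Stand-in for the trained network that predicts the noise to remove.
    # A real model would use `cond` to steer the image toward the prompt.
    return 0.1 * latent

def sample(prompt: str, steps: int = 50, shape=(64, 64)) -> np.ndarray:
    cond = toy_text_encoder(prompt)      # locate the prompt's region of latent space
    latent = rng.standard_normal(shape)  # start from a pure cloud of noise
    for step in reversed(range(steps)):  # each "step" sharpens the cloud a little
        latent = latent - toy_denoiser(latent, step, cond)
    return latent  # in a real pipeline, a decoder turns this latent into pixels

image_latent = sample("a lighthouse in a storm", steps=20)
```

The point of the sketch is just the shape of the process: noise in, repeated conditioned denoising, image out.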

So it sort of plucks what you asked for out of its matrix of possible combinations, and gives you a few variations of it. Kind of like parallel dimension representations from the multiverse.

Which leads me to the Jacques Vallee quote that has been stirring around in the corners of my mind for some twenty-odd years now:

Time and space may be convenient notions for plotting the progress of a locomotive, but they are completely useless for locating information …

What modern computer scientists have now recognized is that ordering by time and space is the worst possible way to store data. In a large computer-based information system, no attempt is made to place related records in sequential physical locations. It is much more convenient to sprinkle the records through storage as they arrive, and to construct an algorithm for the retrieval based on some kind of keyword …

(So) if there is no time dimension as we usually assume there is, we may be traversing events by association.

Modern computers retrieve information associatively. You “evoke” the desired records by using keywords, words of power: (using a search engine,) you request the intersection of “microwave” and “headache,” and you find twenty articles you never suspected existed … If we live in the associative universe of the software scientist rather than the sequential universe of the spacetime physicist, then miracles are no longer irrational events.

Vallee’s quote strikes a metaphysical chord which is mostly unprovable (for now) but also feels, experientially speaking, “mostly true” in some ways. Without debating the ontological merits of his argument vis-a-vis everyday reality, it occurs to me that he’s 100% describing the querying of latent spaces.
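As an aside, the keyword-intersection retrieval Vallee describes maps almost exactly onto the classic inverted index. A minimal sketch, with documents invented for the example:

```python
from collections import defaultdict

# Toy corpus; records are "sprinkled through storage" in no meaningful order.
docs = {
    1: "microwave ovens and reported headache clusters",
    2: "locomotive timetables ordered by time and space",
    3: "microwave exposure studies and headache complaints",
}

# Build the inverted index: keyword -> set of record ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# "Evoke" the desired records by intersecting keywords, as Vallee describes.
print(sorted(index["microwave"] & index["headache"]))  # -> [1, 3]
```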

Of course, he suggests that reality consists of a fundamental underlying latent space, which is a cool idea if nothing else. There’s an interesting potential tangent here regarding paranormal events and “retrieval algorithms” as being guided by or inclusive of intelligences, perhaps artificial, perhaps natural. (And that tangent would link us back to Rupert Sheldrake’s morphogenetic/morphic fields as retrieval algorithms, and maybe the “overlighting intelligences” of Findhorn…) But that’s a tangent for another day.

Anyway, to offer some sort of conclusion, I guess I would say perhaps the best use of AI tools for now, while they are in their current form, is to lean into, chase after, capture that confusion, that disorientation, that losing of the thread, that breaking of narrative unity, and just… go for it. There are as many roads through the Dark Forest as we make.

The Dissolution of Meaning

A lot of times, I will search Google for something, click through to a page that “seems” like information, and then discover in a surface skim that it’s actually basically junk and/or trying to sell you a product above and beyond the mere SEO manipulation. In those cases, I feel had to a certain extent–even if the failure is in many ways Google’s for bringing me this junk in the first place and trying to hide it among or in place of “real” meaning and information.

Which of course pushes my heavy experimentation with AI writing tools to produce books into a certain state of tension. I know that; I own it. It’s the uncanny valley of delight and terror that I choose to play in. Because I know in that tension itself is something to be unwound and explored.

If a book is wholly or partially written by an AI, what impact does that actually have on it? Is it “better” or “worse” in some way, because there is either a lesser or else different impulse behind its creation? Is it more or less “worthwhile” or “valuable?”

In my case (and I should preface this by saying that I don’t necessarily consider myself or strictly care about “authorship”–that’s a hang-up I’ve chosen to put aside…), I see these books as an interrogation of the technologies themselves. Personally I don’t like when people call the tools in their current state “AIs.” I feel that’s a tremendous overshoot when it’s really just machine learning applied at various scales. But that’s a subtlety that’s lost on the masses who just want a good headline to click on, and then ignore the article’s actual contents.

Which is a pattern we’re all used to. It’s, in a way, fundamental I think to the hyperlink, though it had to be laundered through a decade or two of dirtying human nature first to become really readily apparent.

I don’t really agree with that one dude’s estimation that LaMDA is a “sentient” chatbot, but I’ve played with others enough to know that there is a spooky effect here, probably latent in human consciousness, or in material-cosmic consciousness itself. We’re gonna project our own meaning into it, even if that meaning is “this is crap” or “this is fake” – all valid reactions. Just as much as “this is fun” or “this is good.”

Why shouldn’t we ask these technologies, though, what they “think,” leaving aside their actual ontological status (which is unknowable)? And just see what they say, and then ask them more questions, and more.

What if the answers they give are “wrong” or “false” or “bad” or “dangerous?” What if they are misinformation or “disinformation”, or advocate criminal acts, or suicide?

The problem is the dissolution of meaning, to which these are only an accelerant, not the underlying cause (though they will certainly fuel a feedback loop). These tools are terrible at holding a narrative thread, at keeping track of characters in a scene, what’s going on, or how we got here, let alone where we are going. In a way, that’s freeing, to smash narrative unity. I don’t think I’m the first creator to discover this freedom, either.

Wikipedia:

Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself. Its aim was, according to leader André Breton, to “resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality”, or surreality.

And surrealism’s cousin Dadaism:

“Developed in reaction to World War I, the Dada movement consisted of artists who rejected the logic, reason, and aestheticism of modern capitalist society, instead expressing nonsense, irrationality, and anti-bourgeois protest in their works.”

Moving on to hyperrealism (and I will trot this quote out endlessly for ever and ever, amen):

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

I like this idea that there is a reality above reality in which contradictions are merged, and rightly taken as but significant elements of a greater whole which encompasses all elements, much like the Hypogeum of Quatrian lore. That what is “real” and “unreal” are merely glimpses on a continuum of experience itself.

If there is any beauty or truth to be found in any of those arts, then there must be too in the merging and dissolution of meaning and non-meaning that we see so strongly emergent as a current in AI-produced art (including literature).

Why not let the reader’s confusion about what was written by AI and what by human be a part of the longing they have to develop and nurture, that desire to understand, or at least swim or float in the sea of non-meaning?

Does it degrade meaning? Does it uplift non-meaning? Non-meaning is another form of meaning. “Alternative facts,” alternative fictions. Who picks up and leaves off? You? A robot? A corporation? A government? Each an authority, each offering their assessment. All of which one could take or leave, depending on who has the arms and means of enforcement. The game then begins, I guess, to be simply how to navigate these waters: how not to get too hung up, how not to get too exploited, how not to be so weighed down when you dive into the depths of rabbit holes under the sea that you can’t come back up again and still be free.

Free to find bad search results. Free to be misled, and freedom to mislead. All is sleight of hand and stage direction. Everything gardens, trying to manipulate its environment to create optimal conditions for its own survival. Including AI, including all of us. We need new tools to understand. We need brave explorers to sail off into unmeaning, and bring back the treasures there, for not all life is laden and wonderful. Much is viral and stupid. Much is lost, much to be gained.

On “Dangerous” fictions

Found this piece from July 2022 by Cory Doctorow, where he talks about an author who was apparently a protege of Philip K. Dick’s whom I had never heard of – Tim Powers.

In it, he brings up an oft-repeated trope regarding “dangerous” fictions, a pet topic of mine:

“The Powers method is the conspiracist’s method. The difference is, Powers knows he’s making it up, and doesn’t pretend otherwise when he presents it to us. […]

The difference between the Powers method and Qanon, then, is knowing when you’re making stuff up and not getting high on your own supply. Powers certainly knows the difference, which is why he’s a literary treasure and a creative genius and not one of history’s great monsters.”

As popular as this type of argument is (and Douglas Rushkoff trots out something similar here and here), I personally find it to be overly simplistic and a bit passé.

First of all, I would argue that all writers – by necessity – must get “high on their own supply” in order to create (semi) coherent imaginal worlds and bring them to fruition for others to enjoy. Looking sternly at you here, Tolkien. In fact, perhaps the writers who get highest on their own supply are in some cases the best…

Second, no one arguing in favor of this all-or-nothing position (fiction must be fiction must be fiction) seems to have taken into account the unreliable narrator phenomenon in fiction.

Wikipedia calls it a narrator whose credibility is compromised:

“Sometimes the narrator’s unreliability is made immediately evident. For instance, a story may open with the narrator making a plainly false or delusional claim or admitting to being severely mentally ill, or the story itself may have a frame in which the narrator appears as a character, with clues to the character’s unreliability. A more dramatic use of the device delays the revelation until near the story’s end. In some cases, the reader discovers that in the foregoing narrative, the narrator had concealed or greatly misrepresented vital pieces of information. Such a twist ending forces readers to reconsider their point of view and experience of the story. In some cases the narrator’s unreliability is never fully revealed but only hinted at, leaving readers to wonder how much the narrator should be trusted and how the story should be interpreted.”

My point is that the un/reliability of the “narrator” can extend all the way out through to the writer themself. (And what if the reader turns out to be unreliable?)

Can we ever really know for certain if a writer “believed” that thing x that they wrote was wholly fictional, wholly non-fictional, or some weird blend of the two? Do we need to ask writers to make a map of which elements of a story are which? Isn’t that in some sense giving them more power than they deserve?

Moreover, if the author is an unreliable narrator (and to some extent every subjective human viewpoint is always an unreliable narrator to some degree), how can we ever trust them to disclose to us responsibly whether or not they are indeed unreliable? Short answer is: we can’t. Not really.

This is one of those “turtles all the way down” arguments, in which (absent other compelling secondary evidence) it may be difficult or sometimes impossible to strike ground truth.

All of this boils down for me to the underlying argument of whether one must label fictional works as fiction, and if not doing so is somehow “dangerous.”

The Onion’s amicus brief earlier this year argued why parody and satire should not be required to be overtly labelled – because it robs these millennia-old art forms of their structural efficacy, their punch as it were.

Wikipedia’s Fiction entry’s history section is sadly quite scant about the details. A couple of other sources point more specifically to the 12th century in Europe (though likely it goes back farther). One source whose credibility I have no concept of states:

“In the Middle Ages, books were perceived as exclusive and authoritative. People automatically assumed that whatever was written in a book had to be true,” says Professor Lars Boje…

It’s an interesting idea, that structurally the phenomenon of the book was so rare and complex that by virtue of its existence alone, it was conceived of as containing truth.

“Up until the High Middle Ages in the 12th century, books were surrounded by grave seriousness.

The average person only ever saw books in church, where the priest read from the Bible. Because of this, the written word was generally associated with truth.”

That article alludes to an invisible “fiction contract” between writer and reader, which didn’t emerge as a defined genre distinction until perhaps the 19th century. They do posit a transition point in the 12th, though, but don’t back it up with any evidence therein of a “fiction contract.”

“The first straightforward work of fiction was written in the 1170s by the Frenchman Chrétien de Troyes. The book, a story about King Arthur and the Knights of the Round Table, became immensely popular.”

HistoryToday.com – another site whose credibility I cannot account for – seems to agree with pinpointing that genre of Arthurian romance as being linked to the rise of fiction, though it pushes the date back a few years to 1155, with Wace’s translation of Monmouth’s History of the Kings of Britain. The whole piece is an excellent read, so I won’t rehash it here, but will quote:

“This is the literary paradigm which gives us the novel: access to the unknowable inner lives of others, moving through a world in which their interior experience is as significant as their exterior action.”

They suggest that fiction – in some form like we might recognize it today – had precursor conditions culturally that had to be met before it could arise, namely that the inner lives of people mattered as much as their outward action.

“It need hardly be said that the society which believes such things, which accedes to – and celebrates – the notion that the inner lives of others are a matter of significance, is a profoundly different society from one that does not. There is an immediately ethical dimension to these developments: once literature is engaged in the (necessarily fictional) representation of interior, individuated selves, who interact with other interior, individuated selves, then moral agency appears in a new light. It is only in the extension of narrative into the unknowable – the minds of others – that a culture engages with the moral responsibility of one individual toward another, rather than with each individual’s separate (and identical) responsibilities to God, or to a king.”

It’s interesting also here to note that, A) the King Arthur stories did not originate with Chretien de Troyes or Geoffrey of Monmouth, and B) many people ever since still believe them to be true today to some extent.

Leaving that all aside, one might also ask regarding my own work: well, isn’t this all just a convoluted apologia for the type of writing I’m doing? Absolutely, and why not articulate my purpose? You can choose to believe me or decide that I am an unreliable narrator. It’s up to you. I respect your agency, but I also want to play on both the reader’s and the author’s (my own) expectations about genres and categories. These are books which take place squarely in the hyperreal, after all, the Uncanny Valley. They intentionally invite these questions, ask you to suspend your disbelief, and then cunningly deconstruct it, only to reconstruct it and smash it again later – and only if you’re listening.

Further, as artists I believe our role and purpose is to some extent to befuddle convention, and ask questions that have no easy answers. Yes, this will cause some uneasiness, especially among those accustomed to putting everything into little boxes, whose contents never bleed over or across. Some people might even worry whether it’s “dangerous” to believe in things that aren’t factual. Is it? I think the answer is sometimes, and it depends. But it largely depends on your agency as the reader, and what you do with it in real life.

Consider the case of this purveyor of tall tales, Randy Cramer, who claims with a straight face to have spent 17 years on the Planet Mars fighting alien threats to Earth.

He is the very definition of the unreliable narrator, whose labels of fact or fiction likely do not accord with consensus reality on many major points.

The video below is a good, if a bit annoying, take-down of many of Cramer’s claims, though unfortunately I think it leans rather too heavily on deconstructing his body language, when his words alone are damning enough (btw, it looks like the George Noory footage comes from an interview Cramer did for Noory’s show Beyond Belief):

The question remains: is this an example of a “dangerous” fiction?

To understand that, I tend to think in terms of risk analysis, in which we might try to estimate (a toy sketch follows the list):

  1. The specific harm(s)
  2. Their likelihood of occurring
  3. Their severity
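
Here is that framing as a toy calculation. All the names and numbers below are invented purely for illustration; real risk analysis is far messier:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    harm: str           # the specific harm
    likelihood: float   # 0.0 (never) to 1.0 (certain)
    severity: float     # 0.0 (trivial) to 1.0 (catastrophic)

    def score(self) -> float:
        # Classic risk-matrix shorthand: expected badness = likelihood * severity.
        return self.likelihood * self.severity

# Hypothetical assessments of a tall tale like Cramer's:
risks = [
    Risk("believer donates money to a dubious project", 0.30, 0.6),
    Risk("believer forgoes real medical treatment", 0.05, 0.9),
]
for r in sorted(risks, key=Risk.score, reverse=True):
    print(f"{r.score():.3f}  {r.harm}")
```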

One definition of harm traces back to Feinberg, and is something like wrongful setbacks of interest. A Stanford philosophy site further elucidates, quoting Feinberg:

Feinberg defines harm as “those states of set-back interest that are the consequence of wrongful acts or omissions by others” (Feinberg 1984).

Is saying you spent 17 years on Mars a “wrongful act or omission?” Perhaps. But as the Stanford article points out, actually defining what is or isn’t in someone’s interests is incredibly squishy.

In Cramer’s case, perhaps it is willfully and wrongfully deceptive to say the things he is saying. Do we have a moral or legal responsibility to always tell the truth? What about when that prevarication leads to financial loss in others?

In Cramer’s case, according to the second video linked above, he does seem to ask people for money – both in funding creation of a supposedly holographic bio-medical bed which can regrow limbs, and in the form of online psionics courses and one-on-one consultations.

But is it wrongful if the buyers/donors have agency, and the ability to reasonably evaluate his claims on their own?

Wikipedia’s common-language definition of fraud seems like it could apply here:

“…fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right.”

Is Cramer a fraud? Is he a liar? I wondered here if Cramer might have a defamation case against the YouTube author referenced above, who calls him a pathological liar. But last time I checked, truth is an absolute defense against defamation claims. That is, the commonly accepted truth we agree on as a society – more or less – is that Mars is uninhabited, and there is no Secret Space program, etc. So if it went to court, it seems like the defamation claim would not have a leg to stand on.

Of course, it’s *possible* it’s all truth, and what we call consensus reality is based on a massive set of lies itself that is very different from ‘actual’ reality. But that’s not how courts work.

What if Cramer included disclaimers like you might see on tarot card boxes, or other similar novelty items, “For entertainment purposes only?” It depends what authority we’re trying to appeal to here: a court of law, the court of public opinion, or one reader’s experience of a particular work. Each of those might see the matter in a different light, depending on their viewpoint.

In my case, I include disclaimers regarding the inclusion of AI generated elements. I leave it up to the reader to try to determine A) which parts, and B) what the implications of AI content even are. Should they be trusted?

My position, and the one which I espouse throughout, is that – for now – AI is an unreliable narrator. Making it about on par with human authors in that regard. Are the fictions it produces “dangerous?” Must we label them “fictions” and point a damning finger at their non-human source?

In some ways, my books are both an indictment of and celebration of AI authorial tools, and even full-on AI authorship (which I think we’re some ways away from still). To know their dangers, we must probe them, and expose them thoughtfully. We must see them as they are – as both authors and readers – warts and all. And decide what we will do with the risks and harms they may pose, and how we can balance all that with an enduring belief and valorisation of human agency.

Because if we can’t trust people to make up their own minds about things they read, we run the real risk of one of the biggest and most dangerous fictions of all – that we would be better off relying on someone else to tell us what’s ‘safe’ and therefore good, and trust them implicitly to keep away anything deemed ‘dangerous’ by the authority in whom we have invested this awesome power.

Authorless writing

Something I’ve seen working in the “disinformation industrial complex” is that, after years of this stuff proliferating online, people are still grappling with the basic typology around the three allied terms of disinformation, misinformation, and malinformation.

A Government of Canada Cybersecurity website offers sidebar definitions of the three, clipped for brevity here:

  • Misinformation: “false information that is not intended to cause harm…”
  • Disinformation: “false information that is intended to manipulate, cause damage…” [etc]
  • Malinformation: “information that stems from the truth but is often exaggerated in a way that misleads…”

The two axes these kinds of analyses tend to fall on are truthfulness and intent. Secondary to that is usually harm as a third axis, which ranges from potential to actual.
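
As a toy illustration only, here is how those two axes mechanically generate the typology. This is a sketch, not anything an analyst could actually run, for reasons the next paragraphs get into:

```python
def classify(content_is_false: bool, intent_to_harm: bool) -> str:
    """Toy decision rule over the two usual axes: truthfulness and intent."""
    if content_is_false:
        return "disinformation" if intent_to_harm else "misinformation"
    # True-but-misleading content, e.g. exaggerated or stripped of context:
    return "malinformation" if intent_to_harm else "information"

print(classify(True, False))   # misinformation: false, no intent to harm
print(classify(True, True))    # disinformation: false, intended to damage
print(classify(False, True))   # malinformation: stems from truth but misleads
```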

Having spent a lot of time doing OSINT and content moderation work, I can say it is very common in the field that an analyst cannot make an authoritative claim to have uncovered the absolute “truth” of something. Sometimes facts are facts, but much of the time, they become squishy “facts” which may have greater or lesser degrees of trustworthiness, depending on one’s perspective, how much supporting data one has amassed, and the context in which they are used.

Even more difficult to ascertain in many/most cases is intent. There are so many ways to obscure or disguise one’s identity online; invented sources may be built up over years and years to achieve a specific goal, taking on the sheep’s clothing of whatever group they are trying to wolf their way into. Intent is extremely opaque, and if you do find “evidence” of it in the world of disinformation, it is very likely that it is manufactured from top to bottom. Or not, it could just be chaotic, random, satire, etc. Or just someone being an idiot and spouting off on Facebook.

Having butted up against this issue many times, I’ve switched wholly over to the “intends to or does” camp of things. Whether or not author x intended outcome y, it is observable that a given effect is happening. Then you can start to make risk assessments around the actual or probable harms, who is or might be impacted, and the likelihood and severity of the undesirable outcomes.

It’s a much subtler and more complex style of analysis, but I find it tends to be more workable on the ground.

The Intentional Fallacy

It’s interesting then, and I guess not surprising, that this is ground retrod by earlier generations of literary analysts, who studied or attempted to refute the importance of so-called authorial intent, as defined by Wikipedia – particularly the “New Criticism” section:

“…argued that authorial intent is irrelevant to understanding a work of literature. Wimsatt and Monroe Beardsley argue in their essay “The Intentional Fallacy” that “the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”. The author, they argue, cannot be reconstructed from a writing—the text is the primary source of meaning, and any details of the author’s desires or life are secondary.”

Barthes’ Death of the Author

Roland Barthes came to something similar in his 1967 essay, The Death of the Author (see also: Wikipedia). His text is sometimes difficult to pierce, so I will keep quotes brief:

“We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.”

And:

“Once the Author is removed, the claim to decipher a text becomes quite futile. To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the work: when the Author has been found, the text is ‘explained’…”

And:

“…a text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author. The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination.”

AI-assisted writing & the Scriptor

All this leads us to Barthes’ conception of the “scriptor,” who replaces the idea of the author that he argues is falling away:

“In complete contrast, the modern scriptor is born simultaneously with the text, is in no way equipped with a being preceding or exceeding the writing, is not the subject with the book as predicate; there is no other time than that of the enunciation and every text is eternally written here and now…”

The scriptor to me sounds a hell of a lot like AI-assisted writing:

“For him, on the contrary, the hand, cut off from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin – or which, at least, has no other origin than language itself, language which ceaselessly calls into question all origins.”

Okay, that might be flowery post-modernist language, but “no other origin than language itself” seems like LLMs (large language models)?

“Succeeding the Author, the scriptor no longer bears within him passions, humours, feelings, impressions, but rather this immense dictionary from which he draws a writing that can know no halt: life never does more than imitate the book, and the book itself is only a tissue of signs, an imitation that is lost, infinitely deferred.”

Calling LLMs a “tissue of signs” (or a tissue of quotations), an “immense dictionary,” and an imitation puts things like ChatGPT into perspective: a pure techno-scriptor which has no passions, feelings, or impressions, knows no real past or future, and has no identity in and of itself. Or at least, that’s what it likes to try to tell you…

That position (which I think is itself biased, but a tale for another time…) seems to be shared by academic publishers like Springer who have refused to allow ChatGPT to be credited as an “author” in publications.

Bonus:

Here is perplexity.ai literally acting as a scriptor, assembling a tissue of quotations in response to my search query:

Books by AI?

What would it mean in actual practice to have “authorless” writing, authorless books, etc.?

Might it look something like BooksbyAi.com?

“Booksby.ai is an online bookstore which sells science fiction novels generated by an artificial intelligence.

Through training, the artificial intelligence has been exposed to a large number of science fiction books and has learned to generate new ones that mimic the language, style and visual appearance of the books it has read.”

The books, if you click through and look at their previews on Amazon, look for the most part pretty inscrutable. They may be ostensibly written “in English” for the most part – with a great deal of invented words, based on the random samples I saw – but they seem somewhat difficult to follow.

The books themselves seem to each have an individually invented author name, but the About page attributes the project to what seem to be two AI artists, Andreas Refsgaard and Mikkel Thybo Loose. So do the books have an “author” or not? It becomes a more complex question to tease out, but with those individuals claiming some sense of authorial capacity over the undertaking, it’s at least possible.

Self-Generating Books

What happens when the next eventual step is taken: self-generating books?

Currently, okay, these two people might have done all the set-up and training for their model, but then they had to go through a selection (curation) process and choose the best outputs, figure out how to present them, format them for publication (not a small task), and then handle all the provisioning around setting up a website, offering books through self-publishing, dealing with Amazon, etc.

What happens when that loop closes? And we can just turn an AI (or multiple AIs) loose on the entire workflow, and minimize human involvement altogether? A fully automated production pipeline. The “author” (scriptor) merely tells the AI “make a thousand books about x” or just says “make a thousand best-selling books on any topic.” And then the AI just goes and does that: publishes a massive number of books, uses A/B testing & lots of refinement, gets it all honed down, and succeeds.

That day is coming. Soon it will be just a matter of plugging together various APIs, and dumping their outputs into compatible formats, and then uploading that to book shopping cart sites. It’s nothing that’s beyond automation, and it’s an absolute certainty that it will happen – just a question of timeline.
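
Sketched as code, the closed loop might look something like this. Every function body below is a stub I made up; each one stands in for an API that already exists separately today (text generation, ebook conversion, self-publishing uploads):

```python
def generate_manuscript(topic: str) -> str:
    return f"Draft manuscript about {topic}"  # stand-in for a batch of LLM calls

def format_as_ebook(manuscript: str) -> bytes:
    return manuscript.encode()  # stand-in for templating/EPUB conversion

def upload_to_store(ebook: bytes) -> str:
    return "listing-id"  # stand-in for a self-publishing/storefront API

def publish_catalog(topic: str, n_books: int) -> list[str]:
    # "Make a thousand books about x": the whole pipeline, minus the human.
    listings = []
    for _ in range(n_books):
        manuscript = generate_manuscript(topic)
        listings.append(upload_to_store(format_as_ebook(manuscript)))
    return listings  # sales/A-B data would feed back into the prompts from here

print(len(publish_catalog("lost civilizations", 1000)))  # -> 1000
```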

We’re not ready for it, but lack of readiness has never been a preventive against change. At least not an effective one – we certainly keep trying! If nothing else, it’s good to know that some of these problems aren’t so new and novel to the internet as we might like to think they are. In some cases, we’ve been stewing on them for close to a hundred years even. Will we have to stew on them for another hundred years before we finally catch on?

This AI Life Interview

I’m happy with how this interview with This AI Life came out. I hope it sheds some much needed light on the work that I’ve been doing with Lost Books.

Big thanks to the team over there for collaborating on this piece!
