Disorientation in AI writing

There’s a really good article on The Verge about authors who use AI tools like Sudowrite as part of their writing workflow. Lost Books has released about a dozen books in this genre now, which make up the AI Lore series.

Anyway, there are a few themes I want to tease out here, chief among them the feeling of disconnection & disorientation that seems to be a common experience for authors using these tools.

One author quoted says:

“It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”

And:

“But ask GPT-3 to write an essay, and it will produce a repetitive series of sometimes correct, often contradictory assertions, drifting progressively off-topic until it hits its memory limit and forgets where it started completely.”

And finally:

“And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said.

Losing the “thread of truth” strikes me as utterly & inherently postmodern af. It’s the essence of hyperreality.

It is the essence of browsing the web. You pop between tabs and websites and apps and platforms. You follow different accounts, each spewing out some segment of something. And then somewhere in the mix, your brain mashes it all together into something that sort of makes sense to you in its context (“sensemaking”), or doesn’t — you lose the thread of truth.

To me, hyperreality as an “art form” (way of life?) has something to do with that. With the post-truth world, as they say, where truth is what resonates in the moment. What you “like” in platform speak, what you hate, what you fear, just then, just now. And then it’s forgotten, replaced by the next thing. Yet the algorithm remembers… or does it? It may be “recorded,” but it knows little to nothing on its own, without the invocation.

Forgive me as I ramble here, but that’s why this is a blog post…

Pieces I’ve been meaning to put together in this space.

In no particular order:

“Networked narratives can be seen as being defined by their rejection of narrative unity.”

https://en.wikipedia.org/wiki/Networked_narrative

The PDF Wikipedia goes on to reference regarding narrative unities has some worthwhile stuff on the topic. From it, we see these are perhaps more properly called Dramatic Unities (via Aristotle, an ancient blogger if ever there was one), or, per Wikipedia’s redirect, Classical Unities:

1. unity of action: a tragedy should have one principal action.

2. unity of time: the action in a tragedy should occur over a period of no more than 24 hours.

3. unity of place: a tragedy should exist in a single physical location.

Popping back to the Wikipedia networked narrative page:

“It is not driven by the specificity of details; rather, details emerge through a co-construction of the ultimate story by the various participants or elements.”

Lost the thread of truth, “not driven by the specificity of details.”

While we’re in this uncertain territory, we should at least quote Wikipedia again, this time on Hyperreality:

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

What I guess I want to note here – in part – is that the lack of apparent coherence in AI-generated texts, which the Verge article quoted at top seems to treat as an obstacle, a bug, or room for improvement… is actually probably their primary feature?

Disorientation as a Service.

Jumping now to latent spaces, as in AI image generation:

“A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items which resemble each other more closely are positioned closer to one another in the latent space.”
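
If that sounds abstract, a toy sketch might help (the vectors below are made up for illustration, not taken from any actual model): items become vectors, and “resembling each other” becomes literal geometric closeness.

```python
import numpy as np

# Toy "latent space": hand-made 3-d vectors standing in for learned embeddings.
# (Made up for illustration; real models use hundreds of learned dimensions.)
embeddings = {
    "kitten":     np.array([0.9, 0.8, 0.1]),
    "cat":        np.array([1.0, 0.7, 0.0]),
    "locomotive": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Similarity = how close two items sit in the space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["kitten"]
# Rank everything by closeness to "kitten": resemblance becomes geometry.
ranked = sorted(embeddings, key=lambda k: cosine(query, embeddings[k]), reverse=True)
print(ranked)  # ['kitten', 'cat', 'locomotive']
```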

This Vox video is probably the most complete and accessible explanation I’ve seen of how image diffusion models work.

My understanding of it is basically that a text query (in the case of Dall-E & Stable Diffusion) triggers access to the portion(s) of the latent space within the model that correspond to your keywords, and then the model mashes them together visually to create a cloud of pixels that references those underlying trained assets. Depending on your level of processing (“steps” in Stable Diffusion), the diffuse pixel cloud becomes a more precise representation of some new vignette that references your original query or prompt.
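
For what it’s worth, that “steps” knob is literally just a parameter you pass when running one of these models. Here’s a minimal sketch using the Hugging Face diffusers library (the model id and prompt below are placeholders, and I’m not claiming this is the exact pipeline any of the tools mentioned above run under the hood):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder model id; assumes a CUDA GPU and the diffusers/torch stack installed.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a locomotive dissolving into a cloud of pixels"  # hypothetical prompt

# Few steps: still mostly a diffuse pixel cloud. More steps: a sharper vignette.
rough = pipe(prompt, num_inference_steps=10).images[0]
sharp = pipe(prompt, num_inference_steps=50).images[0]

rough.save("rough.png")
sharp.save("sharp.png")
```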

So it sort of plucks what you asked for out of its matrix of possible combinations, and gives you a few variations of it. Kind of like parallel dimension representations from the multiverse.

Which leads me to the Jacques Vallee quote that has been stirring around in the corners of my mind for some twenty-odd years now:

Time and space may be convenient notions for plotting the progress of a locomotive, but they are completely useless for locating information …

What modern computer scientists have now recognized is that ordering by time and space is the worst possible way to store data. In a large computer-based information system, no attempt is made to place related records in sequential physical locations. It is much more convenient to sprinkle the records through storage as they arrive, and to construct an algorithm for the retrieval based on some kind of keyword …

(So) if there is no time dimension as we usually assume there is, we may be traversing events by association.

Modern computers retrieve information associatively. You “evoke” the desired records by using keywords, words of power: (using a search engine,) you request the intersection of “microwave” and “headache,” and you find twenty articles you never suspected existed … If we live in the associative universe of the software scientist rather than the sequential universe of the spacetime physicist, then miracles are no longer irrational events.
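
That “intersection of microwave and headache” query is, concretely, an inverted index: the plain-vanilla structure behind keyword retrieval, with no sequential ordering anywhere. A toy sketch (documents made up for illustration):

```python
from collections import defaultdict

# Toy records, "sprinkled through storage as they arrive" with no meaningful order.
docs = {
    1: "microwave radiation and reported headache clusters",
    2: "locomotive timetables and the geometry of spacetime",
    3: "can a microwave oven give you a headache",
}

# Inverted index: keyword -> set of record ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# "Evoke" records by intersecting keywords, exactly the query Vallee describes.
hits = index["microwave"] & index["headache"]
print(sorted(hits))  # [1, 3]
```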

Vallee’s quote strikes a metaphysical chord which is mostly unprovable (for now) but also feels, experientially speaking, “mostly true” in some ways. Without debating the ontological merits of his argument vis-à-vis everyday reality, it occurs to me that he’s 100% describing the querying of latent spaces.

Of course, he suggests that reality consists of a fundamental underlying latent space, which is a cool idea if nothing else. There’s an interesting potential tangent here regarding paranormal events and “retrieval algorithms” as being guided by or inclusive of intelligences, perhaps artificial, perhaps natural. (And that tangent would link us back to Rupert Sheldrake’s morphogenetic/morphic fields as retrieval algorithms, and maybe the “overlighting intelligences” of Findhorn…) But that’s a tangent for another day.

Anyway, to offer some sort of conclusion, I guess I would say that perhaps the best use of AI tools in their current form is to lean into, chase after, and capture that confusion, that disorientation, that losing of the thread, that breaking of narrative unity, and just… go for it. There are as many roads through the Dark Forest as we make.


1 Comment

  1. Tim B.

    There’s a point I wanted to make also about narrative as being a means of traversing the latent space, and a means of sense-making. And that this process has a value outside of and apart from “truth value.”
