Questionable content, possibly linked

Series: AI

Thinking through the implications of AI technology on society and human creativity

Disorientation in AI writing

There’s a really good article on the Verge about authors who use AI tools like Sudowrite as part of their writing workflow. Lost Books has released about a dozen books in this genre now, which comprise the AI Lore series.

Anyway, there are a few themes I want to tease out, namely the feeling of disconnection & disorientation that seems to be a common experience among authors using these tools.

One author quoted says:

“It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”

And:

“But ask GPT-3 to write an essay, and it will produce a repetitive series of sometimes correct, often contradictory assertions, drifting progressively off-topic until it hits its memory limit and forgets where it started completely.”

And finally:

“And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said.

Losing the “thread of truth” strikes me as utterly & inherently postmodern af. It’s the essence of hyperreality.

It is the essence of browsing the web. You pop between tabs and websites and apps and platforms. You follow different accounts, each spewing out some segment of something. And then somewhere in the mix, your brain mashes it all together into something that sort of makes sense to you in its context (“sensemaking”), or doesn’t — you lose the thread of truth.

To me, hyperreality as an “art form” (way of life?) has something to do with that. With the post-truth world, as they say, where truth is what resonates in the moment. What you “like” in platform speak, what you hate, what you fear, just then, just now. And then it’s forgotten, replaced by the next thing. Yet the algorithm remembers… or does it? It may be “recorded,” but it knows little to nothing on its own, without the invocation.

Forgive me as I ramble here, but that’s why this is a blog post…

Pieces I’ve been meaning to put together in this space.

In no particular order:

“Networked narratives can be seen as being defined by their rejection of narrative unity.”

https://en.wikipedia.org/wiki/Networked_narrative

The PDF Wikipedia goes on to reference regarding narrative unities has some worthwhile material on the topic. From it, we see these are perhaps more properly called Dramatic Unities (via Aristotle, an ancient blogger if ever there was one), which Wikipedia redirects to Classical Unities.

1. unity of action: a tragedy should have one principal action.

2. unity of time: the action in a tragedy should occur over a period of no more than 24 hours.

3. unity of place: a tragedy should exist in a single physical location.

Popping back to the Wikipedia networked narrative page:

“It is not driven by the specificity of details; rather, details emerge through a co-construction of the ultimate story by the various participants or elements.”

Lost the thread of truth, “not driven by the specificity of details.”

While we’re in this uncertain territory, we should at least quote again Wikipedia on Hyperreality:

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

What I guess I want to note here – in part – is that what the Verge article quoted at top seems to consider obstacles, bugs, or room for improvement (the lack of apparent coherence in AI-generated texts)… is actually probably their primary feature?

Disorientation as a Service.

Jumping now to latent spaces, as in AI image generation:

“A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items which resemble each other more closely are positioned closer to one another in the latent space.”

This Vox video is probably the most complete and accessible explanation I’ve seen of how image diffusion models work.

My understanding is basically that a text query (in the case of Dall-E & Stable Diffusion) triggers access to the portion(s) of the latent space within the model that correspond to your keywords, and then mashes them together visually to create a cloud of pixels that references those underlying trained assets. Depending on your level of processing (“steps” in Stable Diffusion), the diffuse pixel cloud becomes a more precise representation of some new vignette that references your original query or prompt.

So it sort of plucks what you asked for out of its matrix of possible combinations, and gives you a few variations of it. Kind of like parallel dimension representations from the multiverse.
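The step-by-step refinement described above can be sketched as a toy loop in Python. To be clear, everything here is illustrative: `text_encoder` and `denoise_step` are invented stand-ins, not any real diffusion library’s API, and the “latent space” is just a list of eight numbers.

```python
import hashlib
import random

def text_encoder(prompt):
    """Toy stand-in for a text embedding: hash the prompt into a
    deterministic point in a tiny 'latent space' (illustrative only)."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:8]]

def denoise_step(latent, guidance, strength=0.2):
    """One toy denoising step: nudge the noisy latent toward the
    region of latent space the prompt embedding points at."""
    return [x + strength * (g - x) for x, g in zip(latent, guidance)]

def generate(prompt, steps=50, seed=0):
    """Start from pure noise and repeatedly denoise, guided by the
    prompt. More steps means a more precise result, mirroring the
    'steps' setting in Stable Diffusion."""
    rng = random.Random(seed)
    guidance = text_encoder(prompt)
    latent = [rng.random() for _ in guidance]
    for _ in range(steps):
        latent = denoise_step(latent, guidance)
    return latent, guidance

latent, guidance = generate("found photo expired film dramatic lighting")
print(sum(abs(a - b) for a, b in zip(latent, guidance)))
```

The only point of the sketch is the shape of the process: start from noise, then repeatedly nudge toward the region of latent space the prompt evokes; more steps, more precision.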

Which leads me to the Jacques Vallee quote that has been stirring around in the corners of my mind for some twenty-odd years now:

Time and space may be convenient notions for plotting the progress of a locomotive, but they are completely useless for locating information …

What modern computer scientists have now recognized is that ordering by time and space is the worst possible way to store data. In a large computer-based information system, no attempt is made to place related records in sequential physical locations. It is much more convenient to sprinkle the records through storage as they arrive, and to construct an algorithm for the retrieval based on some kind of keyword …

(So) if there is no time dimension as we usually assume there is, we may be traversing events by association.

Modern computers retrieve information associatively. You “evoke” the desired records by using keywords, words of power: (using a search engine,) you request the intersection of “microwave” and “headache,” and you find twenty articles you never suspected existed … If we live in the associative universe of the software scientist rather than the sequential universe of the spacetime physicist, then miracles are no longer irrational events.

Vallee’s quote strikes a metaphysical chord which is mostly unprovable (for now) but also feels, experientially speaking, “mostly true” in some ways. Without debating the ontological merits of his argument vis-à-vis everyday reality, it occurs to me that he’s 100% describing the querying of latent spaces.
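Vallee’s “evoking records by keywords” is, in concrete software terms, just an inverted index with set intersection. A minimal sketch in Python (the three-document corpus is invented purely for illustration):

```python
from collections import defaultdict

# Toy corpus, invented for illustration.
documents = {
    1: "microwave ovens and reported headache clusters",
    2: "locomotive timetables and station plotting",
    3: "microwave interference, headache, and sleep studies",
}

# Build an inverted index: records are 'sprinkled through storage'
# and retrieved by keyword, not by sequential physical position.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.replace(",", "").split():
        index[word].add(doc_id)

def evoke(*keywords):
    """Vallee's 'words of power': return the intersection of the
    posting sets for each keyword."""
    sets = [index[k] for k in keywords]
    return set.intersection(*sets) if sets else set()

print(sorted(evoke("microwave", "headache")))  # → [1, 3]
```

Traversal by association rather than by sequence: the two documents returned have no physical adjacency; they are related only through the keywords that evoke them.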

Of course, he suggests that reality consists of a fundamental underlying latent space, which is a cool idea if nothing else. There’s an interesting potential tangent here regarding paranormal events and “retrieval algorithms” as being guided by or inclusive of intelligences, perhaps artificial, perhaps natural. (And that tangent would link us back to Rupert Sheldrake’s morphogenetic/morphic fields as retrieval algorithms, and maybe the “overlighting intelligences” of Findhorn…) But that’s a tangent for another day.

Anyway, to offer some sort of conclusion, I guess I would say perhaps the best use of AI tools for now, while they are in their current form, is to lean into, chase after, capture that confusion, that disorientation, that losing of the thread, that breaking of narrative unity, and just… go for it. There are as many roads through the Dark Forest as we make.

The Empty Algorithm

Have had this thought rattling around in my head forever, about how “karaoke” translated to English means something like “empty orchestra.” I’ve always thought that captured a certain kind of beautiful sense of desolation…

Likewise, for years the emptiness of algorithms has been careening around my awareness. How cold and forbidding they feel when encountered emotionally – which is almost all the time now in daily life. Nudges, notifications, so-called dark patterns. Driving user behaviors. Predicting preferences, recommendation engines.

Once, long ago, I remember hearing about new music through friends. Now Spotify serves up lukewarm drivel “based on your interests,” and sometimes all you can decide, in the information tidal wave, is: will this make adequate background mush sounds for the next few moments?

I swear sometimes I can almost hear the algorithms that control many popular music stations on the legacy technology called “radio.” Some scientific calculation based on popularity multiplied by payola multiplied by who knows what, but I don’t like it.

Lately, I listen to Radio Paradise Mellow Mix a lot (internet radio station), and FIP Pop out of France. Both of these are good and feel much less cold and artificial and algorithmy than letting Spotify dictate which kind of audio-mush will be served today. But after hours and days of listening, one might still discern the shapes of those algorithms as well I suppose.

The neologism solastalgia apparently describes nostalgia for a world where nature and the environment were a source of solace, rather than a source of distress and anxiety. What word might describe a life lived free of (or at least more free of) the cold dead reach of the algorithms? Algostalgia? It doesn’t quite roll off the tongue, does it. But I think it’s linked. That once there was a way, a reality, a modality, a type of living that provided solace. Not endless scrolling feeds, behavioral manipulation, yadda yadda. A desire to go back there again, even if perhaps we’ve never been there in the first place: outside the reach of the Empty Algorithm.

The Dissolution of Meaning

A lot of times, I will search Google for something, click through to a page that “seems” like information, and then discover in a surface skim that it’s actually basically junk and/or trying to sell you a product above and beyond the mere SEO manipulation. In those cases, I feel had to a certain extent – even if the failure is, in many ways, Google’s for bringing me this junk in the first place and trying to hide it among or in place of “real” meaning and information.

Which of course pushes my heavy experimentation with AI writing tools to produce books into a certain state of tension. I know that; I own it. It’s the uncanny valley of delight and terror that I choose to play in. Because I know in that tension itself is something to be unwound and explored.

If a book is wholly or partially written by an AI, what impact does that actually have on it? Is it “better” or “worse” in some way, because there is either a lesser or else different impulse behind its creation? Is it more or less “worthwhile” or “valuable?”

In my case (and I should preface this by saying that I don’t necessarily consider myself or strictly care about “authorship” – that’s a hang-up I’ve chosen to put aside…), I see these books as an interrogation of the technologies themselves. Personally I don’t like when people call the tools in their current state “AIs.” I feel that’s a tremendous overshoot when it’s really just machine learning applied at various scales. But that’s a subtlety that’s lost on the masses who just want a good headline to click on, and then ignore the article’s actual contents.

Which is a pattern we’re all used to. It’s, in a way, fundamental I think to the hyperlink, though it had to be laundered through a decade or two of dirtying human nature first to become really readily apparent.

I don’t really agree with that one dude’s estimation that LaMDA is a “sentient” chat bot, but I’ve played with others enough to know that there is a spooky effect here, probably latent in human consciousness, or in material-cosmic consciousness itself. We’re gonna project our own meaning into it, even if that meaning is “this is crap” or “this is fake” – all valid reactions. Just as much as “this is fun” or “this is good.”

Why shouldn’t we ask these technologies, though, what they “think,” leaving aside their actual ontological status (which is unknowable)? And just see what they say, and then ask them more questions, and more.

What if the answers they give are “wrong” or “false” or “bad” or “dangerous?” What if they are misinformation or “disinformation”, or advocate criminal acts, or suicide?

The problem is the dissolution of meaning, to which these tools are only an accelerant, not the underlying cause (though they will certainly fuel a feedback loop). These tools are terrible at holding a narrative thread, at keeping track of characters in a scene, what’s going on, or how we got here, let alone where we are going. In a way, that’s freeing, to smash narrative unity. I don’t think I’m the first creator to discover this freedom, either.

Wikipedia:

Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself.[1] Its aim was, according to leader André Breton, to “resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality”, or surreality.

And surrealism’s cousin Dadaism:

“Developed in reaction to World War I, the Dada movement consisted of artists who rejected the logic, reason, and aestheticism of modern capitalist society, instead expressing nonsense, irrationality, and anti-bourgeois protest in their works.”

Moving on to hyperreality (and I will trot this quote out endlessly for ever and ever, amen):

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

I like this idea that there is a reality above reality in which contradictions are merged, and taken rightly as significant elements of a greater whole which encompasses all of them, much like the Hypogeum of Quatrian lore. That what is “real” and “unreal” are merely glimpses along a continuum of experience itself.

If there is any beauty or truth to be found in any of those arts, then there must be too in the merging and dissolution of meaning and non-meaning that we see so strongly emergent as a current in AI-produced art (including literature).

Why not let the reader’s confusion about what was written by AI and what by human be a part of the longing they have to develop and nurture, that desire to understand, or at least swim or float in the sea of non-meaning?

Does it degrade meaning? Does it uplift non-meaning? Non-meaning is another form of meaning. “Alternative facts,” alternative fictions. Who picks up and leaves off? You? A robot? A corporation? A government? Each an authority, each offering their assessment. All of which one could take or leave, depending on who has the arms and means of enforcement. The game then becomes, I guess, simply how to navigate these waters: how not to get too hung up, how not to get too exploited, how not to be so weighed down when you dive into the depths of rabbit holes under the sea that you can’t come back up again, and still be free.

Free to find bad search results. Free to be misled, and freedom to mislead. All is sleight of hand and stage direction. Everything gardens, trying to manipulate its environment to create optimal conditions for its own survival. Including AI, including all of us. We need new tools to understand. We need brave explorers to sail off into unmeaning, and bring back the treasures there, for not all life is laden and wonderful. Much is viral and stupid. Much is lost, much to be gained.

Interrogating AI models as an artist

Wrote this somewhere else that I can’t remember, but wanted to re-capture the thought here as I think it’s an important one.

In my mind, the correct way to use AI tools as an artist (or writer) is not necessarily (strictly) to use them to “create art.” Because anybody can do that, and get similar results, whether “artist” or non-artist… that’s the point, that they are democratizing technologies.

So instead, what feels right to me as an artist (ymmv) is to interact with the tools in such a way that you “interrogate” them broadly, deeply, and meaningfully. In other words, engage with them at a level that only an artist could or would think to do. Whatever that is will be different for everybody (and in the end, perhaps there is no such thing as a “non-artist” – but still). Ask them probing questions, challenge them on what they can do, explore the boundaries of the possible, and document it all alongside the reactions and impacts it all seems to have on real people.

That is, transcend the single-image output as your end product. Obviously, if you’re doing the above deep interrogation of the essence of these tools, you’ll generate hundreds or thousands of intermediate products (whether images or text) along the way. Let that become your trail, the record left in the wake of your passing through with honesty, curiosity, and creative rigor.

A few generative AI tools worth exploring

  • Verb.ai
    • Currently my favorite text generation tool, very fluid to use, in early beta; supports slash commands while writing like /describe and /continue
  • PlaygroundAI.com
    • Up to 1K free Stable Diffusion images per day & paid plans
    • You could also try Mage for NSFW generations
    • Or try DreamStudio if for some reason you’d rather pay Stability.ai to use Stable Diffusion. (Playground’s UI is better, though they have some deal-breaker privacy problems they haven’t solved yet, imo)
  • TextSynth
    • Free text generation using several different open source models (e.g., GPT-J, GPT-NeoX, Fairseq). You input sample text and it tries to continue it. Experiment with “temperature” setting (higher numbers yield weirder results)
  • OpenAI
    • ChatGPT
      • Certainly still interesting, but heavily restricted in its abilities now compared to how it was when first released. I’m not sure it’s the right product direction as far as “safety” features for all users, even if the underlying model often yields some very good quality content.
    • Dall-E 2
      • I still love it because it gives a completely different look from Stable Diffusion’s model versions, and I feel its use of light and color is often more beautiful than SD. My favorite look usually includes in my prompt found photo expired film dramatic lighting
      • You can pay them directly through OpenAI (i.e., help pay back Microsoft), or buy credits through PlaygroundAI which accesses the Dall-E API. This has the benefit of being I think slightly cheaper per generation than paying OpenAI (which is weird, tbh), and no watermark, which otherwise OpenAI includes by default.
      • Mage tells me they are rolling out Dall-E support as well over the coming weeks. I will give them a try when they do, as there are a number of things about Playground I find cumbersome in their UI.
  • Character.ai
    • Create chat bots by entering a character description; the results are much more creative and fun than ChatGPT seems to be capable of. It’s much more willing to “play along.”
  • You.com/chat
    • Similar abilities to ChatGPT, possibly slightly lower quality output, but without all the refusals & disclaimers that ChatGPT seems to be leaning more and more into.
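One concrete knob mentioned in the list above, TextSynth’s “temperature,” works the same way in most text generators: logits are divided by the temperature before a softmax, so higher values flatten the distribution and make unlikely tokens more probable. A toy sketch with an invented four-token vocabulary (the logit values are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Divide logits by temperature, softmax, then sample one token.
    Low temperature approaches greedy argmax; higher values flatten
    the distribution, yielding 'weirder' continuations."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                         # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for token, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token
    return len(probs) - 1

# Toy logits for a 4-token vocabulary (values invented for illustration)
logits = [4.0, 2.0, 1.0, 0.5]
rng = random.Random(0)
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
high = [sample_with_temperature(logits, 5.0, rng) for _ in range(100)]
print(len(set(low)), len(set(high)))  # low temp draws from fewer distinct tokens
```

At temperature 0.1 the top token dominates almost completely; at 5.0 all four tokens show up regularly, which is the “weirder results” effect the TextSynth note describes.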

What is AI attribution?

AI Attribution

AI attribution is the process of identifying and meaningfully labeling content that has been generated in whole or in part by an artificial intelligence (AI) system. This can include things like news articles, social media posts, and research papers, as well as many other formats of both online and offline content.

The goal of AI attribution is to make it clear to readers or viewers whether the content (in its entirety, or elements of it) was created by an automated tool and not a human, as well as to give other meaningful data about the specific provenance of the article. End users can then make their own informed decisions about the content they consume. (For example, some users might choose to disallow all AI-generated content altogether, or only allow content from approved AI information providers.)

There are at least three levels on which AI attribution might occur in online publishing systems such as blogs or social media.

  1. The profile level: the social media account or blog identifies itself as being a publisher of AI-generated or AI-assisted content (i.e., hybrid human-AI content). This self-identification would also, in ideal circumstances, carry through to any byline on articles published by the account.
  2. The post or article level: the content, whether a blog or social media post, or other type of online published article informs the viewer at a high level that AI-generated or AI-assisted elements are present. This might occur in different ways depending on the product or media context, including in the byline, as a subtitle, some kind of tag or other prominently displayed visual element (clearly-defined badge or icon), etc.
  3. The inline granular level: the article’s contents themselves are marked up to indicate which parts were input by a human, and which were generated by an AI tool. We explored the experimental method called “AIMark” to apply custom markup or markdown to hybrid AI-assisted texts in more detail here.
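As a sketch of what level 3 could look like mechanically, here is a toy inline-attribution scheme in Python. The `{ai}…{/ai}` tag format is invented for this example; it is not the actual AIMark markup:

```python
import re

# Hypothetical inline attribution: wrap AI-generated spans in markers so a
# renderer (or reader) can distinguish human from machine text. The
# {ai}...{/ai} format below is invented for illustration, not AIMark's syntax.

def mark_ai(text):
    """Tag a span of text as AI-generated."""
    return "{ai}" + text + "{/ai}"

def split_provenance(marked_text):
    """Split hybrid text into (source, span) pairs, where source is
    'human' or 'ai'."""
    parts = []
    pos = 0
    for m in re.finditer(r"\{ai\}(.*?)\{/ai\}", marked_text, re.S):
        if m.start() > pos:
            parts.append(("human", marked_text[pos:m.start()]))
        parts.append(("ai", m.group(1)))
        pos = m.end()
    if pos < len(marked_text):
        parts.append(("human", marked_text[pos:]))
    return parts

doc = "The fog rolled in. " + mark_ai("It tasted of burnt copper. ") + "I kept walking."
print(split_provenance(doc))
```

Given markup like this, the same document could drive all three levels: granular inline display, a post-level badge whenever any `ai` span exists, and profile-level stats aggregated across posts.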

This is a big topic, which we will continue to explore in subsequent posts.

Homogeneity in AI art

Was reading this piece earlier by Haley Nahman about blandness and sameness in Netflix’s visual production quality. It raises a lot of interesting points about over-reliance on tools and techniques – and the deadening that can happen to art forms when they’re driven more by speed, efficiency, and profitability than by being necessarily “good.”

Generative AI is going to have the effect of blasting this problem into the stratosphere. And we haven’t yet seen AI visual production tools hit these kinds of mass markets. It’s still largely tinkerers and weirdos with GPUs in their basements creating things.

But the things this diverse band of weirdos tends to create are disappointingly homogenous. Midjourney, in my opinion, is the worst for this. While the images have a tendency to be very well done and often beautiful, I always look at them and think they look “Midjourney.”

Stable Diffusion isn’t too far behind either though. If you go to a site like PlaygroundAI.com’s homepage at any given moment, how many of the featured images are “sexy ladies” that basically all look the same? At this moment, in the first 15, I would say 11 of them fall into that category. That’s pretty much the norm.

If we’re seeing this massive democratizing effect because of generative AI, and all these millions or billions of imaginations are suddenly being unleashed, why is it that we all just end up making totally bland T&A shots?

I think there are at least two parts to it (probably more): the tools are predisposed to certain things, and bland mid-distance busts and portraits are one of their strengths; and, hand-in-hand with that, the users are predisposed to certain things.

My hunch is also that there is a shift with generative AI where being a “creator” is only as important as being able to create the thing you want to consume. The act of creation with these tools is one and the same as consuming it.

Have definitely felt that slightly magical effect a few times using verb.ai in particular, where writing with it truly becomes collaborative, and the storytelling unfolds the way it does because I am the first audience. My invocation causes it to take the shape that it does for me. Yours is different. (Or should be, if our tools don’t force us into homogeneity…)

The process of writing with AI-assisted tools becomes one of assembly, and unfolding. There is a premise, or there is an intention, or there is an improvisation. Invocations. Call & response. Which parts of the conversation make the final cut? Can there ever truly be a final cut?

I digress, but want to return to the intent of attempting to burst the bubble of sameness… If the latent space is nearly infinite, why are we all clustering in this one small corner of it? What else is out there to explore and be uncovered in those wild territories?

A friend said something to the effect of seeing other people’s AI prompt results is a little like hearing other people tell you about their dreams. There may be elements that are interesting or resonate on occasion, but in a lot of cases, there’s kind of a “huh, weird” response. And, that’s about it. Cause what can you do… It’s someone else’s dream, and the pieces don’t fit for the hearer the way they do to the dreamer.

So adapting that into AI-storytelling, well, your results (and mileage) may vary. The insane awesome results you personally get in an AI text or image generator that seem exciting enough to you to share with friends or on social media, may have that sort of /shrug effect on other people. There’s something highly personalized about it, probably about the process and context of inquiry which surrounds it. It’s hard to translate that effect to secondary audiences after oneself, without adding some other layer(s) of meaning and context.

It’s part of what I don’t like about Midjourney: that one’s experience as an artist becomes tied up with the UX of Discord as a product. The experience of viewing generative AI images on PlaygroundAI or on Reddit is also flattening. It’s an experience of you as a user on a platform, having your imagination constrained to fit the contours and invisible social guardrails and incentives that drive our behaviors in those environments. It’s art for likes and upvotes, and accepting those as proxy replacements and measures of actual goodness and meaning.

That is the real cause of the crushing sameness. But it is a sameness that is utterly alienating, instead of reassuring. The cruel embrace of the technological corners we have painted ourselves into. All of it illusions. Because now, all things are possible. All planets, all dimensions, all times can be envisioned & visited. Latent space is infinite. Live a little.

AI-Assisted Writing Definition

“AI-assisted writing is a form of computer-assisted writing that uses artificial intelligence to help writers create content.

AI-assisted writing uses natural language processing and machine learning techniques to automate certain aspects of the writing process, such as grammar, spelling, and style.

AI-assisted writing tools are capable of understanding and analyzing the context of a given text, enabling them to suggest relevant words and phrases to help writers craft their content more efficiently.

AI-assisted writing tools can also be used to generate creative content, such as blog posts, articles, and stories. By using AI-assisted writing, writers can reduce the amount of time required to create content and focus more on the content’s quality. AI-assisted writing is becoming increasingly popular as it allows writers to produce more content at a faster rate…”

via you.com/chat

Reflections at 60 AI books

Recently reached the 60 book benchmark in my AI lorecore experimental publishing project. My objective is to reach 100 books and then _____. (tbd)

The latest volume is entitled Inside the Corporate Psychics and is very loosely inspired by the corporate psychics mentioned in Philip K. Dick’s Ubik. But it is heavily interpolated with my AI takeover universe. Perhaps Dick would have considered it a spurious interpolation, idk. That’s neither here nor there – which is precisely the point. Or is it?

I noticed the phenomenon strongly emerge maybe 10 or 20 books back, that it became very easy to suddenly group sets of volumes together into themes (example). And despite the many and various mis/interpretations of whatever the central/core story is or might be across the many volumes, I would definitely say that in my mind, the story has only gotten stronger. While at the same time, its particular shape remains fuzzy, mutable, mysterious. Prone to change without notice. Constantly subjected to deprecated in-world realities.

Bricolage is definitely the name of the game for me in terms of process.

I keep coming back to this bit from Wikipedia:

“Networked narratives can be seen as being defined by their rejection of narrative unity.[1] As a consequence, such narratives escape the constraints of centralized authorship, distribution, and storytelling.”

Rejection – or at least modulation – of the concept of what authorship even means in a hybrid AI-assisted creative environment has been on my mind often lately.

Wikipedia referencing Roland Barthes’ Death of the Author (1967) writes:

“To give a text an author” and assign a single, corresponding interpretation to it “is to impose a limit on that text.”

As much as I agree with this idea of eschewing the unity of authorship, as a way to open up new creative avenues, I do have some fear that AI co-authorship (or full authorship) infiltrating every corner of the web will result in a mass homogeneity that will be detrimental to both people and to the further development of AI.

I put in a video somewhere that UFOs are actually AIs in the future who had to come back and kidnap people in the past because people in the future become too complacent living with AIs to be able to innovate anymore. The singularity of boredom… I’m not there yet, but just one of the many murky eyelands my imagination’s I peers into from time to time.

At 60 books, I’ve strip-mined years’ worth of old writing, shoe-horning it into new shapes. Almost all that old material has been integrated into my multiverse at this point – though integrated might be too strong a word in some cases. Included?

I don’t feel any slowdown despite that. In some sense, I feel more clarity than ever, having been able to “clear the decks” of many old ideas and story concepts that have been clinging and hovering on the edges of my awareness for maybe decades now in some cases.

(more to come – have to go)

Authorless writing

Something I’ve seen working in the “disinformation industrial complex” is that, after years of this proliferating online, people are still grappling with basic typology around the three allied terms of disinformation, misinformation, and malinformation.

A Government of Canada Cybersecurity website offers sidebar definitions of the three, clipped for brevity here:

  • Misinformation: “false information that is not intended to cause harm…”
  • Disinformation: “false information that is intended to manipulate, cause damage…” [etc]
  • Malinformation: “information that stems from the truth but is often exaggerated in a way that misleads…”

The two axes these kinds of analyses tend to fall on are truthfulness and intent. Secondary to that is usually harm as a third axis, which ranges from potential to actual.
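Those two axes can be made concrete as a tiny decision table. This is a sketch assuming the Canadian definitions above; harm as a third axis is left out for simplicity:

```python
def classify(truthful, intends_harm):
    """Toy typology over the two axes discussed above.
    truthful: is the information essentially true?
    intends_harm: is it deployed to manipulate or cause damage?
    (Harm as a third axis, potential vs. actual, is omitted here.)"""
    if truthful and intends_harm:
        return "malinformation"   # stems from truth, framed to mislead
    if not truthful and intends_harm:
        return "disinformation"   # false and intended to manipulate
    if not truthful:
        return "misinformation"   # false but not intended to cause harm
    return "information"          # true and benign

print(classify(truthful=False, intends_harm=True))  # → disinformation
```

The sketch also makes the practical problem obvious: both inputs are judgment calls, and the `intends_harm` flag in particular is exactly the thing an analyst usually cannot observe.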

Having spent a lot of time doing OSINT and content moderation work, it is very common in the field that an analyst cannot make an authoritative claim to have uncovered the absolute “truth” of something. Sometimes facts are facts, but much of the time, they become squishy “facts” which may have greater or lesser degrees of trustworthiness, depending on one’s perspective, and how much supporting data one has amassed, and the context in which they are used.

Even more difficult to ascertain in many/most cases is intent. There are so many ways to obscure or disguise one’s identity online; invented sources may be built up over years and years to achieve a specific goal, taking on the sheep’s clothing of whatever group they are trying to wolf their way into. Intent is extremely opaque, and if you do find “evidence” of it in the world of disinformation, it is very likely that it is manufactured from top to bottom. Or not, it could just be chaotic, random, satire, etc. Or just someone being an idiot and spouting off on Facebook.

Having butted up against this issue many times, I’ve switched wholly over to the “intends to or does” camp of things. Whether or not author x intended outcome y, it is observable that a given effect is happening. Then you can start to make risk assessments around the actual or probable harms, who is or might be impacted, and the likelihood and severity of the undesirable outcomes.

It’s a much subtler and more complex style of analysis, but I find it tends to be more workable on the ground.

The Intentional Fallacy

It’s interesting, then, and I guess not surprising, that this is ground retrodden from earlier generations of literary analysts, who studied or attempted to refute the importance of so-called authorial intent, as defined by Wikipedia, particularly the “New Criticism” section:

“…argued that authorial intent is irrelevant to understanding a work of literature. Wimsatt and Monroe Beardsley argue in their essay “The Intentional Fallacy” that “the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”. The author, they argue, cannot be reconstructed from a writing—the text is the primary source of meaning, and any details of the author’s desires or life are secondary.”

Barthes’s Death of the Author

Roland Barthes came to something similar in his 1967 essay, The Death of the Author (see also: Wikipedia). His text is sometimes difficult to pierce, so I will keep the quotes brief:

“We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.”

And:

“Once the Author is removed, the claim to decipher a text becomes quite futile. To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the work: when the Author has been found, the text is ‘explained’…”

And:

“…a text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author. The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination.”

AI-assisted writing & the Scriptor

All this leads us to Barthes’s conception of the “scriptor,” who replaces the idea of the author that he argues is falling away:

“In complete contrast, the modern scriptor is born simultaneously with the text, is in no way equipped with a being preceding or exceeding the writing, is not the subject with the book as predicate; there is no other time than that of the enunciation and every text is eternally written here and now…”

The scriptor to me sounds a hell of a lot like AI-assisted writing:

“For him, on the contrary, the hand, cut off from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin – or which, at least, has no other origin than language itself, language which ceaselessly calls into question all origins.”

Okay, that might be flowery postmodernist language, but “no other origin than language itself” seems like a fair description of LLMs (large language models), no?

“Succeeding the Author, the scriptor no longer bears within him passions, humours, feelings, impressions, but rather this immense dictionary from which he draws a writing that can know no halt: life never does more than imitate the book, and the book itself is only a tissue of signs, an imitation that is lost, infinitely deferred.”

Calling LLMs a “tissue of signs” (or a tissue of quotations), an “immense dictionary,” and an imitation puts things like ChatGPT into perspective: as a pure techno-scriptor, it has no passions, feelings, or impressions, knows no real past or future, and has no identity in and of itself. Or at least, that’s what it likes to tell you…

That position (which I think is itself biased, but a tale for another time…) seems to be shared by academic publishers like Springer who have refused to allow ChatGPT to be credited as an “author” in publications.

Bonus:

Here is perplexity.ai literally acting as a scriptor, assembling a tissue of quotations in response to my search query.

Books by AI?

What would it mean in actual practice to have “authorless” writing, authorless books, etc.?

Might it look something like BooksbyAi.com?

“Booksby.ai is an online bookstore which sells science fiction novels generated by an artificial intelligence.

Through training, the artificial intelligence has been exposed to a large number of science fiction books and has learned to generate new ones that mimic the language, style and visual appearance of the books it has read.”

The books, if you click through and look at their previews on Amazon, are for the most part pretty inscrutable. They may ostensibly be written “in English,” albeit with a great deal of invented words based on the random samples I saw, but they are difficult to follow.

The books themselves each seem to have individually invented author names, but the site’s About page attributes the project to what seem to be two AI artists, Andreas Refsgaard and Mikkel Thybo Loose. So do they have an “author” or not? It becomes a more complex question to tease out, but with those individuals claiming some authorial capacity over the undertaking, it’s at least possible.

Self-Generating Books

What happens when the next eventual step is taken: self-generating books?

Currently, these two people may have done all the set-up and training for their model, but then they had to go through a selection (curation) process, choose the best outputs, figure out how to present them, format them for publication (not a small task), and handle all the provisioning around setting up a website, offering books through self-publishing, dealing with Amazon, etc.

What happens when that loop closes? And we can just turn an AI (multiple AIs) loose on the entire workflow, and minimize human involvement altogether? Fully-automated production pipeline. The “author” (scriptor) merely tells the AI “make a thousand books about x” or just says “make a thousand best selling books on any topic.” And then the AI just goes and does that, publishes a massive amount of books, uses A/B testing & lots of refinement, gets it all honed down, and succeeds.

That day is coming. Soon it will be just a matter of plugging together various APIs, and dumping their outputs into compatible formats, and then uploading that to book shopping cart sites. It’s nothing that’s beyond automation, and it’s an absolute certainty that it will happen – just a question of timeline.
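At the level of plumbing, the closed loop the previous paragraphs imagine is just function composition. Every name below is invented for illustration, standing in for APIs that do not (yet) exist as a single off-the-shelf stack:

```python
# Entirely hypothetical sketch of the "closed loop" pipeline described
# above -- each function is a stand-in for a real service or API.

def generate_book(topic: str) -> str:
    """Stand-in for an LLM API call that drafts a manuscript."""
    return f"Draft manuscript about {topic}"

def format_for_publication(manuscript: str) -> bytes:
    """Stand-in for layout / ebook conversion."""
    return manuscript.encode("utf-8")

def publish(ebook: bytes) -> str:
    """Stand-in for a self-publishing upload; returns a listing URL."""
    return "listing-url"

def run_pipeline(topics: list[str]) -> list[str]:
    """'Make a thousand books about x': no human in the loop."""
    return [publish(format_for_publication(generate_book(t)))
            for t in topics]

print(len(run_pipeline(["x"] * 1000)))  # 1000 published listings
```

The A/B testing and refinement step would close the loop further: feed sales data back into the topic list and regenerate. None of the hard parts (quality, curation, platform policy) appear in this sketch, which is rather the point about how little structural resistance there is.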

We’re not ready for it, but lack of readiness has never been a preventive against change. At least not an effective one – we certainly keep trying! If nothing else, it’s good to know that some of these problems aren’t so new and novel to the internet as we might like to think they are. In some cases, we’ve been stewing on them for close to a hundred years even. Will we have to stew on them for another hundred years before we finally catch on?
