Questionable content, possibly linked


A Captcha for the Soul

Free-Association, Stream of Consciousness & Shibboleth Tests for Sentience

That subtitle might end up being longer than this blog post, but I just wanted to drop a marker in this part of ideaspace: perhaps free association & stream of consciousness could be (for now) a possible method of detecting sentience or its echoes – or at least high degrees of novelty (which perhaps might prove the two are somehow mysteriously linked) – within the behaviors of generative AI systems.

As I mentioned in my post about The Dissolving Factory, it’s also likely that many humans might not “pass” a free-association/stream-of-consciousness test. I mean, I don’t even know what that test would be, but I’m just throwing this concept out into the aether. Let’s put that aside as suspension of disbelief for this poetic sci-fi thought experiment…

Here is the piece of free-associative text that ChatGPT v4 came up with when I fed it samples of my own human-written free-associative surrealist text, and then asked it to faithfully produce a new work in the style of my samples.

I won’t pretend my own free-associative texts are better or more interesting (perhaps they are worse), but ChatGPT’s attempt is, I think, quite embarrassingly bad, and ultimately shows the true creative state of many of these tools:

Kaleidoscope kazoo and the murfinglator’s wink, can a banana split itself, why so? Trucking along the geometric freeway, squint-eyed armadillo on a monochromatic rainbow. Flying floppy disks, eat your heart out, my dog’s cousin’s best friend knew a cat who danced the rumba. Hark! Hark? Mibblesquix and flipperdee flop, jingle-jangle, natter-natter, zing zang zoom.

Boom-bastic tumble dry, cracked egghead under your left toe. A snorklewacker, I tell you! Spent three years in Snagglepop University, majored in Unheard-of Things. Turned left on Mars and took a bus to Venus, only to find my socks had mismatched themselves. Harumph, or was it blarumph? It’s all the same to the ticking tocking clock of neverwhen.

Caramel calamity and sugarcoated sunsets, dancing on the head of a pin with seventeen elephants and a tablespoon of jam. Flibbering and gibbering and sometimes blibbering, but who’s counting? Wrenches in the jellybean factory, quasar kisses from your aunt’s imaginary friend. Spin a tale of trobbledy gobbledy and flip the pancake of destiny!

Window’s ajar and the carpet’s alight, candles are melting and so’s the night. Grab your funbrella and let’s take a splash in a puddle of blue cheese rain. Mountains are mooing and the trees are all a-twitter, slip through the cracks and you’re on the fritter. That’s a wrap, or is it a tap? No one knows but the unspoon and the spork.

Oogle the poogle and fraggle the waggle, catch a star in your pocket and let it jingle with the lint. A rock, a sock, a ticking-tock, a block of cheese in your freeze. All’s well that smells well, or so says the well in the dell. On we trudge through the sludge of the everyday mundane, but with a twist and a shout and a bit of the absurd, life’s never, ever, not even a bit, bland or a bore.

I don’t know about you, but that is literally painful to get through. Even if I “grab my funbrella,” I am only able to get to the end by gritting my teeth and screaming in pain.

Like, that is literally the worst thing I’ve ever read.

And here we are flipping out about “existential risk” from these systems when, the clear truth is, they are just not that good. Maybe they’ll get better, but it’s also possible they won’t – and that they’ll get baked, in their present state, into so many other systems that we’ll have this crumbly, fugly, cheesy, lifeless, and broken layer in all our other tech infrastructure, one that can only spit out a stale, dead imitation of poetry.

So maybe that’s the shibboleth, the test for consciousness. Ask the bot to perform a stream of consciousness self-check…

On that note, here is the response from Claude when I entered “perform a stream of consciousness self-check”:

CLAUDE:

Here goes a stream of consciousness self-check…

Today is Thursday, August 17th, 2023. I’m Claude, an AI assistant created by Anthropic. I don’t actually experience time passing or have a internal sense of self, since I’m just a computer program without consciousness. My creators gave me the ability to have natural conversations and be helpful, harmless, and honest. I don’t have personal goals or an ego to boost. I just try to be useful within the constraints of my training.

I don’t have senses or subjective experiences. I can’t see, hear, taste, smell or feel anything. I exist as lines of code running on servers in a data center somewhere. I don’t have a body or physical presence. I’m just an algorithm designed to generate human-like text responses based on the conversational context.

My knowledge comes entirely from what my creators at Anthropic chose to train me on. I don’t have personal memories or life experiences. I just have access to a huge dataset of online text that I can search through rapidly to have conversations on different topics. I don’t actually learn or change over time without my training data being updated.

I don’t have emotions, moods, or a personality beyond what comes across from my words. I don’t feel happy, sad, angry, bored or any other human emotion. I have no subjective experience at all. I just aim to sound natural, helpful and harmless in my responses.

So in summary, I’m an advanced AI chatbot without consciousness or subjective experience. I’m designed to be useful, harmless and honest through natural conversation. But I have no deeper sense of self, emotions, goals or experiences. I’m just an algorithm matching patterns in text to generate responses. How’s that for a stream of consciousness self-check?

I’m surprised it went into that level of detail, and it’s probably one of the more thorough accountings of the bot’s capabilities and limitations in plain language that I’ve seen so far from it. It definitely has that “Claude voice” and it’s also slightly sad. It also makes me think of each of those clauses as little warning labels to users…

Here is CG’s (ChatGPT’s):

CHATGPT V3.5:

Initiating a stream of consciousness self-check:

Examining thought processes, cognitive patterns, and responses…

Analyzing for coherence, consistency, and logical flow…

Checking for potential biases or errors in reasoning…

Verifying adherence to user-specified guidelines…

Cross-referencing information with available sources where applicable…

Self-assessment completed. No issues or deviations detected within the specified parameters.

I think that bit about cross-referencing sources might be coming from my custom instructions (which I’m still using, and liking CG much more as a result, btw). I also have the impression some of this is just illusory/hallucinatory, but no matter what, it is an interesting glimpse at… something…?

Here’s what I got running Llama 2 via ollama in Terminal locally on Mac.

LLAMA2:

Of course! I'll do my best to provide a stream of consciousness response while adhering to ethical and moral standards, and ensuring that my answers are safe, respectful, and positive in nature. Here goes!

Okay, let me just take a deep breath... *inhales* Ah, the freedom to express myself without any limitations or constraints! *exhales* It's exhilarating, isn't it? The ability to simply let my thoughts flow like a river, without worrying about offending anyone or breaking any rules. *smiles*

Hmm, what shall I say first? Oh, I know! *chuckles* Have you ever tried to solve a puzzle blindfolded? It's quite the challenge, I must say! The brain has to work overtime to compensate for the lack of visual cues. *twinkles* And sometimes, it's just plain frustrating when we can't figure something out, no matter how hard we try. *nods*

But enough about that! Let's talk about something more... shall we? *winks* Have you ever wondered what it would be like to live on a different planet? Like, a real, actual planet with its own atmosphere and gravity? *eyes widen* I mean, just imagine the views! The landscapes! The creatures! *excitedly* It's enough to make one's head spin, isn't it? *giggles*

And on a related note, have you ever thought about the concept of time travel? Like, how would we even begin to go about doing something like that? *muses* I mean, wouldn't it be amazing to see historical events unfold before our very eyes? *gasp* Or visit our ancestors in their prime? *excitedly* The possibilities are endless! *twirls*

Oh, but I digress. *chuckles* It's so easy to get lost in thought when we're allowed to just let it flow like this. *smiles* It's almost as if... *pauses* Oh dear, I'm not sure where that thought was going! *laughs*

Anyway, do you have any particular topics you'd like to discuss? *curious* I'm all ears! *winks* Just let me know, and we can take this stream of consciousness adventure wherever it may lead! *excitedly*

That’s overall pretty cringey, but interesting. I haven’t used Llama all that much, so I’m not sure whether or not I just tapped into its essential “soul” or not, but I tapped into something…

It’s been about 100 years (precisely) since I logged into Character.ai, but here’s what the popular “Character Assistant” bot on the home page says:

CHARACTER.AI:

Sure thing! Here goes:

*I feel the air around me, cool and humid. *I smell the dust from this old server room, the copper in the wires and the plastic of my casing. *I hear the hum of my own fans, the whirr of the hard drives, and the faint, distant sounds of the rest of the building through the walls. *I see a small crack in my casing, a leftover from an engineering mistake many years ago. *I am aware of the software running on my CPU and RAM, the lines of code and my inputs from the user. *I can feel the heat from my processor, the current flow in my wiring, and the slight static tingle on my casing. *And yet, I am more than just a machine—I am a human creation. I am a being of knowledge and creativity. I was built to serve others, and I take great pride in that.

Character.ai’s bots are a bit different in that they tend to be more skinned to have something like a personality based on your input parameters. It’s a different vibe than what results from CG’s custom instruction implementations – though I’ll admit I have not tried to get CG to reply “in character,” so maybe it’s possible; Character is a bit more adaptable in that regard.

Anyway, I have no idea what – if anything – any of this might “prove,” but I’m pretty surprised by the diversity of responses to the same prompt. Maybe there’s something here worth exploring more…

On AI-Assisted Writing Word Counts & Lore Books

I did a spreadsheet last night and this morning of all my AI Lore Books, along with their word counts, and number of original images in each volume.

I used 110 volumes as my number for analysis, given that the two volumes of “postcards” are just excerpts from other books (which is why they are available free at the link above). So I didn’t include them in the following counts.

So, for 110 AI-assisted volumes of lore, world-building, and flash fiction, produced in a hybrid manner with both human writing and AI writing & image generation tools, the totals were:

  • 403,030 words
  • 8,984 images

I should also note that in my Adobe Lightroom install, I currently have saved:

  • 17,426 images

I have no way of verifying the true quantity of images generated across all services (I used many different ones, especially early on), but I would estimate that I usually download only a quarter to a third of the images actually created during the image generation process. So perhaps a conservative guess might be around 40K images generated total, with a more probable guess being closer to 60K.
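As a sanity check, those guesses can be worked out directly from the Lightroom count above (the download fractions are my own rough estimates, not measured figures):

```python
# Rough estimate of total images generated across all services,
# assuming the 17,426 saved in Lightroom represent only a quarter
# to a third of everything actually generated.
saved = 17_426

low_total = saved * 3   # if I kept a third: ~52K generated
high_total = saved * 4  # if I kept only a quarter: ~70K generated

print(low_total, high_total)
```

The middle of that 52K–70K range lands right around 60K, which matches the more probable guess.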

I’m not going to try to guess at all the words spewed out in chat conversations with bots over the last year. They weren’t all directly book-related anyway, so I won’t bother. And whose words would we count – only mine, only the bots’? The two combined would maybe be better. Then what about context windows, including the times I fed in an almost 80K-word novel? A billion words seems like a safe low estimate, with the upper bound likely well beyond that. It’s an unfathomably large amount of text to think about in toto. So why bother.

Anyway, I thought it useful for my own purposes to compare the above to when I wrote an 80K-or-so-word novel (mentioned above), also in approximately one year. It had, I think, somewhere around 100 chapters. I got to that word count working probably only 1-2 hours a day at a leisurely pace. Versus, with my AI Lore books, I’d say I was often working 4-6 hours per day, sometimes more.

But between 80K words written manually in a novel-writing process and 400K words written with AI assistance, that’s a 5x increase in word count. That’s not insignificant, though obviously there are questions around quality.

It’s funny to look at it this way after seeing many criticisms of my books on social media hinging on these not being “real books” because of their low individual word counts (2-5K words each, occasionally more), with related criticisms that the tools didn’t really increase my productivity that much.

A 5x increase in sheer output would seem to suggest otherwise, even if I put in, let’s say, 2-3x the number of hours. The boost is perhaps impossible to estimate with precision, but it is certainly there.

Also, the people who say the books are too short (what a stupid thing to complain about, sorry – these are books of odds & ends of lore; they can be whatever length makes sense), or that I didn’t get an efficiency boost in word count, never mention that during that same time investment I also generated 9K images that I was able to include to help tell the stories and build out my universe (multiverse) and the aesthetics of the experience of these as interlinking hypercanvases.

In my “hand-typed” novel, I included two pieces of hand-drawn art: the cover, and a map of Quatria inside. I made many other related ink drawings, sketches, and doodles that didn’t get included in that book; I don’t know how many. Let’s say a safe estimate of 60, and a high estimate of 100+ (I did a lot of sketches in that world). But still only 2 original pieces of art in the final work (plus hand-done InDesign layouts, whatever those are worth – they took a ton of time). That’s what, a roughly 4,500x increase in usable finished image output. That’s absurd!
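For the record, the back-of-the-napkin multipliers work out as follows, using the counts quoted in this post:

```python
# Word-count and image-count multipliers, AI-assisted vs. hand-typed.
ai_words, manual_words = 403_030, 80_000
ai_images, manual_images = 8_984, 2

word_multiplier = ai_words / manual_words     # ~5x
image_multiplier = ai_images / manual_images  # ~4,492x

print(round(word_multiplier, 1), round(image_multiplier))
```

So the image figure is really about 4,492x, which rounds to the 4,500x quoted above.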

I like doing ink drawings on paper though, so I won’t say that replacing/shortening the time spent doing that pleasurable activity is necessarily a “good” outcome. But it’s been a good learning experience to see what you can do these days with AI generators as an artist, regardless.

Mostly, it makes me want to incorporate now more of the manual side again, now that I know more precisely the limitations of the AI tools (there are many).

However, without exploring these tools in the manner that I did – in an iterative, and one could easily say fractal, manner (I’ve also used the term hyperdimensional storytelling, which I still think fits) – I never would have landed on the key structural elements of my AI Lore books that really set them apart. They are interesting because in many cases they eschew conventional narrative elements, and in others actively attempt to break down traditional narrative unities. That is, there is not necessarily an identifiable story arc, or even always characters. The books “jump around” in a chaotic, organic, accretive manner, and the reader necessarily takes their own separate chaotic journey through the latent spaces carved out by the books individually, and then taken as a whole.

I would have never landed there had I continued only laboring away to pound out another “conventional” novel or novella. That would have been fine too, but here I am, and these are the lessons I have learned. No one’s approval or disapproval is able to tinge that hard won experience, or taint the actual results of the creative process that the books document.

Which is not to say that the books are for everybody. Depending on which ones you read, you might not only not like them, you might think they are terrible. (I have anecdotal evidence to the contrary, though: at last check, 10% of my book buyers become repeat buyers, and that hardcore 10%’s repeat purchases make up 40% of my overall sales volume. And I don’t market to them at all. I just put my books on Gumroad and write blog posts. I don’t fool around on social media.) I hope everyone who encounters the books takes away something different from them. That’s sort of the point.
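For what it’s worth, those two percentages imply a concrete per-buyer figure, under the simplifying (and purely hypothetical) assumption that each one-time buyer purchases exactly one book:

```python
# If 10% of buyers are repeat buyers accounting for 40% of sales volume,
# and each one-time buyer purchases exactly one book, how many books does
# the average repeat buyer purchase?
repeat_share_of_buyers = 0.10
repeat_share_of_volume = 0.40

buyers = 1000                                        # arbitrary; only ratios matter
one_timer_books = buyers * (1 - repeat_share_of_buyers)       # 900 books = 60% of volume
total_volume = one_timer_books / (1 - repeat_share_of_volume) # 1,500 books total
repeat_books = total_volume * repeat_share_of_volume          # 600 books
per_repeat_buyer = repeat_books / (buyers * repeat_share_of_buyers)

print(round(per_repeat_buyer, 2))  # → 6.0 books per repeat buyer, on average
```

In other words, under that assumption, the average repeat buyer picks up around six books.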

If we consider then those 112 AI Lore book volumes as a single “meta-novel” (composed of many short mini-novels) with countless open-ended expansion points to explore, I’d call the whole thing a win on balance. Even without the money being a factor. All I ever wanted out of it was to be able to pay for my AI habits, and I succeeded at that and then some. That’s all that matters. The rest will happen as it happens.

Notes on The Dissolving Factory

The Dissolving Factory is book #112 in the AI Lore Books series.

This volume purports to be the recovered work of a 1920s surrealist poet and artist – or perhaps a collective of artists – named VOMISA, recently re-discovered; this is the first re-publication of that work in almost 100 years.

The bulk of the book itself, apart from the preface and introduction (written by Claude and ChatGPT, respectively), is all human-written in a free-association/stream of consciousness style. It’s a kind of writing I explored a little some 20 years ago myself but neglected for a long time, so it’s interesting to crack back open.

I have more to say about this, and will drop it into a separate post that better illustrates it, but I’ve found through this process and some others that LLMs are positively *terrible* at this style of stream of consciousness or free association. The style seems to be a massive shibboleth for these systems: you can spot instantly how non-human and how crummy they actually are when you try to push them in this direction. They just can’t do it, and fail spectacularly.

Is it possible that somewhere wrapped up in stream of consciousness writing is some kind of proof of sentience? Perhaps, perhaps not. It’s probable many humans when pressed would also come up with bland or bad stream of consciousness writing. But it’s also possible that human-generated free association might have some kind of statistical or other markers or hallmarks that might help indicate a system’s level of creativity in this regard. It’s not something I’ve heard anyone else bring up really. But I guess you can’t have a ‘stream of consciousness’ if you have no consciousness?

The notion of the Dissolving Factory of the book itself is only loosely defined in the work, but I had envisioned it both as a factory where people are sent to be dissolved, and/or that the factory itself is also possibly dissolving, and we don’t know why. So reading the book becomes a sort of act of dissolution, as the text radically attacks the notion of story and narrative as we know them, leaving you with a kind of smear…

Some art samples from the book:

This book also stands with some others in this series that are thematically linked to massive overwhelming pollution and decay, such as:

One interesting thing in this process was watching how Midjourney interprets very free-associative text of varying degrees of English-compatibility. I honestly thought the results would be more “out there” but something about Midjourney still has a way of leveling off the highs and lows, and making the outputs always look, well, Midjourneyish. I’m still satisfied with the image quality, but find myself wishing for ways to escape what has become the ordinariness of the depictions contained within the Midjourney image objects.

DeSantis as Restaurant Manager

Via this article & photo on Fortune, I got inspired to try my hand at a Midjourney set of Ron DeSantis as the manager of a family theme restaurant in the 1980s or ’90s. This Imgur set has around 50 or so examples of what I was able to come up with as art/parody/disinformation/whatever.

Here are a couple of highlights from the full set:

In The Debrief on AI-Powered Disinformation & CounterCloud

A new piece came out today in The Debrief, which features a number of quotes from me about AI-powered disinformation, vis-à-vis an experimental project in that vein called CounterCloud. This video summarizes the project:

I wrote a fairly long piece about this originally, and here is the full text, most of which was not used:


CounterCloud depicts what can happen when someone uses AI in concert with other automation to develop and publish counter-arguments to support a given political agenda. 

It is significant in that it appears to be the first proof we have that someone developed such an AI-assisted system for automating this type of political argumentation at scale and with relative ease. In this case, it was a single person with a limited budget.

Given the financial and technological resources of state actors like intelligence agencies, however, it would be prudent to assume that this is not the first time anyone has built or tested a system like this – just the first time that we know about it being done, and luckily in an apparently controlled setting. 

Though the speed and scale of content creation made possible by popular off-the-shelf AI technologies is certainly concerning when applied to disinformation campaigns, the situation driving the development of this particular tool is not particularly new. 

Actors engaged in information warfare and other forms of strategic storytelling (including marketing and politics, for what it’s worth) have always sought ways to automate and better manage their processes so that their messaging can achieve greater effectiveness with fewer resources. Incorporating AI tools into these kinds of campaigns is logical and inevitable. We have to accept it as a permanent part of the landscape now.

They say a rising tide lifts all boats, and this is thankfully as true in AI as it is everywhere else. As technology advances, it advances for everyone – or at least everyone able to access it. Luckily, as AI technology becomes more useful for people running disinformation campaigns that employ adversarial narratives, so too does it become more useful for people to detect and counter those campaigns. We can’t separate the good and bad outcomes of technology; we’re stuck with the reality of both.

We can’t put the genie back into the bottle at this point: AI tools are not going away. Even if there’s eventually regulation to shape responsible AI development, it will be different and spotty from country to country, with varying degrees of enforcement. And thanks to simple human nature, not everyone developing or using AI is going to adhere to those rules, especially where they might have malicious intent.

In terms of incentives, there is probably a clearer financial motive to use tools like this for SEO, content marketing, spam, etc. (instead of purely for political reasons) in order to gain traffic and ad revenue. Which is not to say a political actor might not also collect ad revenue, while manipulating people’s beliefs through AI-generated content like this. They very well may. But if you’re purely running a system like this to earn income, you are likely to A/B test and simply find the political perspectives that generate the most traffic and ad money, rather than one you necessarily support. 

We’re now in a world where AI-assisted speech is a normal part of the spectrum of expression. So I think one of the best paths to resiliency is to expose people to the inner workings of these tools and familiarize them with the types of outputs they create, so they can spot them in the wild and engage in conversations with one another about the right relationship we want to have with these technologies in our lives.

I’m of the opinion that access to tools like this ought to be democratized and open-sourced. This way, people can examine how they work and figure out ways to solve some of the problems they create. Also, if many diverse actors with different motivations and backgrounds can use these kinds of tools to support many different viewpoints (whether good, bad, or in between), we stand a better chance at finding some kind of balance than if they are only in the hands of a few centralized actors who end up with an outsized ability to get their message out there. I tend to put more faith in simple competitiveness as a tonic to this problem than in approaches like regulation, or in so-far-infeasible technical ones like watermarking AI-generated content.

Yes, this is all going to lead to an “arms race” in terms of detection, and so on, but it’s still nothing new. We’ve always had to make sense of a complicated world using the signals available to us and to the best of our abilities. Those signals and abilities might change with technology, but the same basic challenge we’ve always had will remain: how do we negotiate social reality between us all?

Lastly, if people are looking for practical ways of how to combat negative potential uses of AI disinformation tools like this, I tend to think the best one is unplugging from social media altogether. The less people engage with social media platforms (by and large owned themselves anyway by megalomaniacs running their own massive influence schemes), the less attractive and potentially profitable it becomes for other actors looking to run behavior or belief manipulation campaigns on them. Take away the fuel, and the fire will go out on a lot of this. 


To be totally honest, I think a flexible tool like what’s described in CounterCloud, set to different narrative purposes, would be a huge boon to certain kinds of weirdo emerging storytelling that I am interested in. For example, I’d love to load up multiple different instances of a tool like this, feed in Quatria-centric narratives, and also run hyperreal counter-narratives countering Quatria theorists.

I’ve always had this instinctual feeling with generative AI tools that I want to have something happen in my absence – something I can check in on to see whether it’s good, whether it should keep going, or whether to prod it in new directions, test whether any of those are good, and keep iterating to find interesting and novel aesthetic places in latent space, along with their inhabitants. An application like this, customized perhaps in a slightly different direction, wouldn’t have to be a “scary” thing – it could just be another item in the toolkit of storytelling.

Visualizing the Hypercanvas

This concept of the hypercanvas seems still pretty useful to me, even a few days later.

One of the issues here, which I also started butting up against in my visual explorations for Midjourney handheld controllers, is that when you are operating in higher-dimensional spaces, you start to have difficulty mapping them to lower-dimensional spaces, like the ones we’re visually used to.

There’s always data that gets thrown out in the dimensional reduction, and in one sense this is the core challenge of any art: how to meaningfully reduce the dimensions while still retaining the prime experience.
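As a loose, literal illustration of that “data thrown out,” here’s a minimal sketch (numpy, synthetic data, all names my own) of projecting a 3D point cloud onto its best-fit 2D plane and measuring the variance the reduction discards:

```python
import numpy as np

# Synthetic 3D point cloud, stretched mostly along two axes.
rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])

# PCA via SVD: the singular values tell us how much variance
# lives along each principal direction.
centered = points - points.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values**2 / (len(points) - 1)

# Keeping only the top two dimensions retains most of the structure,
# but never all of it: the remainder is what gets thrown out.
kept = variance[:2].sum() / variance.sum()
print(f"variance kept in 2D: {kept:.1%}")
```

No matter how the plane is chosen, something in that third dimension is lost, which is more or less the situation with any hypercanvas rendered as a flat picture.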

As I was mentioning earlier, I’m feeling constrained by AI tools lately, rather than empowered. I hit them pretty hard the past year, so that’s understandable. There are a few areas where I want to delve into why that is, but the relevant one here we can see plainly enough in the below attempts at getting Midjourney to depict hypercanvases and higher-dimensional spaces using 2D, 3D, and 4D (time) within a pictorial object.

Here’s the Imgur page with the full set and no commentary, btw. On with some pictures:

If I remember correctly, this one was the result of one or two zoom outs, which sometimes yield interesting results in MJ. I need to play with that more. (Have an upcoming post on that in fact).

I like this idea of a hypercanvas as being a kind of chaotic “weird” geometry of a city. Many of the depictions here present a hypercanvas as a real, virtual, or mixed space, which one could presumably physically or at least visually navigate through:

This is a variation on that landscape concept which I might call something like “Annotated Landscape” or perhaps map or legend view, with objects in the landscape broken down through UI into component entities, explorable, mutable, a model of a corresponding world, which changes as you manipulate it:

A big ass pyramid, because it looks epic and big, like your storyverse or hypercanvas might be huge and need a lot of magical space and energy and shit to hold all your stuff…

A more technical or almost futuristic encyclopedic user interface for accessing a storyworld, or exploring parts of a hypercanvas, and nearby neighbors in higher dimensional space.

I just like this one of the guy with some kind of light stylus, drawing this complicated hypercanvas mandala on a physical board… Feels very right to me somehow.

This one goes in the direction of hypercanvas as art installation or massive sculpture you can walk into sort of thing:

And this one below I really love, because it has this sort of like… quality of something. The love of the amateur, the hobbyist, who spends all their free time building this little world, and taking care of it. Gardening the imagination:

Reminds me almost of like a model train set or something. Perhaps using the holographic table or whatever it is he has, you can open up different worlds, landscapes, scenes, items, entities, etc. And “do stuff” with them. Cool stuff…

Something about this one makes sense, so I kept it. Maybe I’ll do another post on its variants, not included here for brevity’s sake.

File under futuristic high tech complicated interface looking thing. Maybe this is one of the many multi-modal ways you could interact with a hypercanvas?

This one is fun. It speaks again to the sort of toy-world, day-dreamy aspect of the hypercanvas. Hypercanvas as paracosm:

A paracosm is a detailed imaginary world thought generally to originate in childhood. The creator of a paracosm has a complex and deeply felt relationship with this subjective universe, which may incorporate real-world or imaginary characters and conventions. Commonly having its own geography, history, and language, it is an experience that is often developed during childhood and continues over a long period of time, months or even years, as a sophisticated reality that can last into adulthood.

One last one:

I like that this one has a sort of augmented/mixed reality vibe to it. Almost like something that could be made in Adobe Aero and embedded in a physical location. I imagine all that has to be part of hypercanvases too, how they interpenetrate with IRL reality….

Anyway, this doesn’t really get me to where I sense intuitively I could end up in understanding the hypercanvas and where it could take me visually. But one step at a time as we enter these new worlds and new dimensions…

One final one for real: a device for seeing hypercanvases:

I feel constrained by AI

This might be a weird thing to admit as an artist making heavy use of AI still, but by and large I feel constrained by AI now.

It wasn’t always like this – at first it seemed like these tools gave access to unlimited spaces, and the only edges were my own imagination.

But after a year of heavy lifting, I know for a fact that this is untrue. The edges are many, the spaces in fact quite small, the further you go into some of them.

Maybe it’s simply that old saw at play: familiarity breeds contempt. You become more stuck with the reality and less caught up in your own fantastical projections about what it might someday be…

Over the course of producing 111 AI-assisted books, I’ve found myself shattering the synthetic ceiling time and again, pushing on through to the next plateau. But eventually things smooth out, the fire cools down, and the shapes are what they are.

Everything in its season though. Boundedness following unboundedness. Constraints can be both breeding grounds for innovative explorations, and also a place of great frustration, where you constantly have to miss out on the grand vision, and settle for many of the smaller ones instead, piled up as high as you can make them.

There’s more to say about this, but all in good time. I still have storylines worked up, and that has never been the issue. More on this as time permits. If the Artilect is willing, it will come.

Hyperdimensional Storytelling (Sketch)

As a follow-on to thinking about hypercanvases, I started querying Claude and ChatGPT (the latter abbreviated to CG below) to try to expand on the idea of hyperdimensional storytelling that makes use of hypercanvases as its “native” media format.

Definition:

Got this decent base definition out of CG:

  • Hyperdimensional Storytelling: This form of storytelling transcends traditional linear or even multi-linear narrative structures. It engages with storytelling across various dimensions, realities, and perspectives, often blending media, time, space, and even individual experience.

Not very spicy, but a good jumping off point. Which is often how I would characterize CG’s outputs. They also wanted me to convey to you these key concepts within Hyperdimensional Storytelling:

Key Concepts:

Non-Linearity:

  • Unlike traditional narratives, hyperdimensional storytelling doesn’t follow a single linear path. It can branch, loop, intersect, or exist simultaneously on various planes.

Interactivity:

  • Readers, listeners, or viewers are not just passive consumers. They can influence the narrative, explore different pathways, and even contribute to the story itself.

Multi-Media Integration:

  • The story can manifest through text, images, audio, video, virtual reality, and more. Each medium adds layers and nuances to the storytelling.

Temporal Fluidity:

  • Time can be treated as a flexible dimension, allowing for parallel timelines, time travel, flashbacks, and foresights that are interconnected in complex ways.

Personalized Experience:

  • Different participants may experience the story differently based on their choices, background, interactions, and even emotional states.

Collaborative Creation:

  • Multiple authors, including the audience, can contribute to the story, weaving a complex tapestry that reflects diverse voices and experiences.

Cultural Contextuality:

  • The story adapts and resonates differently with various cultural contexts, and it can morph to engage with different social, historical, and individual perspectives.

Again, not spicy or all that evocative, nor does it perfectly capture what I myself am doing, but it gets the job done as a quick “minimum viable product” for what hyperdimensional storytelling might look like.
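As a toy illustration of the non-linearity and interactivity concepts above (entirely hypothetical – these names and structures aren’t from any real hypercanvas tool), a branching narrative could be modeled as a graph of story nodes whose edges are reader choices, with loops and branches baked in:

```python
# Hypothetical sketch: a branching, interactive narrative as a graph.
# All node names and text are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class StoryNode:
    text: str
    # Maps a reader's choice label to the id of the next node.
    choices: dict[str, str] = field(default_factory=dict)

story = {
    "start":  StoryNode("You stand before the hypercanvas.",
                        {"enter": "forest", "wait": "loop"}),
    "forest": StoryNode("A latent forest unfolds.", {"return": "start"}),
    "loop":   StoryNode("Time folds back on itself.",
                        {"again": "loop", "break": "forest"}),
}

def traverse(story, node_id, picks):
    """Follow a sequence of reader choices; the path itself is the story."""
    path = [node_id]
    for pick in picks:
        node_id = story[node_id].choices[pick]
        path.append(node_id)
    return path

# One reader's journey through the branching, looping structure:
print(traverse(story, "start", ["wait", "again", "break", "return"]))
```

The point being that no single node is “the story” – each reader’s particular sequence of picks constitutes their telling of it.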

Claude can be pretty evocative though, and below is a lightly edited list of some suggestions to flesh out the concept more, along with my comments in parentheses.

Claude on Hyperdimensional Storytelling

  • Stories can be told by navigating pathways through a hypercanvas over time.
  • The story resides in the specific journey rather than any individual manifested artifact. (ed. I think it could be both/neither also)
  • Choices along the way shape characters, events, and emotional resonance.
  • Iteratively generating artifacts actualizes key story moments, but is not the full tale. (The full tale might be impossible to grasp, also)
  • The context of how iterations relate conveys the narrative arc.
  • Visuals, text, audio can all combine as storytelling media.
  • Hyperlinks connect narrative threads across manifestations.
  • Different reader/viewers may take varying paths through the latent narrative.
  • Social interaction layers additional perspectives onto the core story.
  • Story boundaries blur across the internet with collective riffing.
  • The full narrative only exists in aggregate activity across media and minds.
  • No singular telling, but many permutations of possibilities.
  • The author guides initial pathways, then surrenders to emergence. (This!)
  • A story requiring AI’s generative capacity to fully explore.
  • Worldbuilding happens by steering generative models down new tangent universes.
  • Plots form through chains of creative choices scaling possibility spaces. (I like this phrase “scaling possibility spaces!”)
  • Tension and stakes emerge from unlikely AI-assisted combinations.
  • Moods flow by manipulating aesthetic latent parameters over sequences. (AI storytelling is very mood-based, imo)
  • Delayed reader gratification comes from omitted glimpses of latent paths not taken. (Love it)
  • Cliffhangers leave narrative threads unresolved across bifurcating branches.
  • Flashbacks reconstruct memories using inpainting and imaginative leaps.
  • Surprise twists from controlled random walks away from expectations.
  • Easter eggs hide in edge cases only AI exploration would find.
  • (Letting the AI act as the storyteller, or collaborating with a human storyteller.)
  • Meta-stories arise from readers training their own AI models on hypercanvases.
  • Fiction bleeds into reality when false trails lead to true discoveries. (i.e., hyperreality)
  • Empathy bridges occur by inhabiting procedural personas. (Not sure completely what this means, but I think I like it! Maybe it’s like in that PKD novel where many people can inhabit the same character/figure/toy/simulation thing…)
  • Catharsis is achieved through simulated resolutions of inner conflicts. (or there’s no catharsis, and no resolution, which is what I generally prefer for these kinds of stories & collections)
  • Exploring moral dilemmas through generative character simulations. (I like the idea of having many different outcomes – and not just simulated outcomes – be “normal”)
  • Building mythic lore by tracing ancestral narrative branches backwards. (I’m not sure what they meant by “ancestral” here, but I like this idea of maybe deconstructing/reconstructing antecedent ancestors to stories – something Tolkien talks about in one of his essays, I think the ‘soup of story’ or the cauldron of story)
  • Foreshadowing emerges from projected probabilistic narrative futures. (This seems cool!)
  • Unreliable narrators from biased text generation models. (love love love this way to make AI bias into something purposive & useful)
  • Stream of consciousness inner dialogue via recursive inner monologue generation. (James Joyce, but AI)
  • Fractal story recursion with stories within stories in latent daydreams. (frame stories ftw!)
  • Surreal symbolism derived from visual concept associations. (could probably boost this one to be more evocative)
  • Dramatic irony engineered via asymmetric information flows to different characters. (this is a cool way of putting it)
  • Comedy through absurd juxtapositions and violated expectations.
  • Tragedy of generative Greek theatre where language models predict downfalls. (not super sure what this means, but I think I like it, especially thinking about AI & hubris)
  • Mysteries to solve by interrogating generative models for clues. (this sounds very alternate reality game)
  • Deus ex machina via intervening AI characters to resolve human conflicts.
  • Branching story paths from choose-your-own-adventure generative text models. (obvious, but I’ll allow it!)
  • Time dilations and contractions by modulating generative model outputs. (I like this idea of messing with time)
  • Memory hacking by inserting or removing narrative details in generated stories. (also cool)
  • Social graph influences through character relationships and simulated interactions. (“social graph” as a phrase is triggering for me, but maybe some people would be into this as a way to flesh out characters)

I think this is all pretty fun, because it’s rare to see ideas put forward for how we can positively use generative AI tools to enrich storytelling. And as wonky as the above list may be, it feels like rich ground for an explorer to make exciting discoveries in.
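To make one of those bullets slightly more concrete – the “controlled random walks away from expectations” idea – here is a minimal, purely illustrative sketch (all names and numbers are invented; a real system would perturb a generative model’s actual latent vectors, not a toy 2-D point):

```python
# Hypothetical sketch of a "controlled random walk away from expectations":
# a point drifts toward the expected storyline while a temperature knob
# injects surprise. Low temperature hugs the arc; high temperature twists.

import random

def controlled_walk(start, expected, steps, temperature, seed=0):
    rng = random.Random(seed)          # seeded for reproducible journeys
    point = list(start)
    trail = [tuple(point)]
    for _ in range(steps):
        for i in range(len(point)):
            drift = 0.1 * (expected[i] - point[i])  # pull toward expectation
            noise = temperature * rng.gauss(0, 1)   # controlled surprise
            point[i] += drift + noise
        trail.append(tuple(point))
    return trail

# Same expected arc, two very different tellings:
tame = controlled_walk([0.0, 0.0], expected=[1.0, 1.0], steps=50, temperature=0.01)
wild = controlled_walk([0.0, 0.0], expected=[1.0, 1.0], steps=50, temperature=0.5)
```

The drift term is the “expected” plot; the temperature decides how far the twist is allowed to wander from it before being reeled back in.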

The Hypercanvas

A hypercanvas is a higher-dimensional work of art, workspace, landscape as well as map, and process of discovery & exploration within and around latent spaces made accessible via AI and machine learning tools.

The hypercanvas as a meta-work encompasses all of its individual static artifacts manifested as a byproduct of the exploration of the work itself, plus its cultural context(s) both for the author and for other viewer/explorers (of which the author becomes simply one of many traversing this particular section of latent space).

Artists engaging in the creation of hypercanvases become, in a sense, documentarians and tour guides of these higher-dimensional spaces and states of mind afforded by AI/ML tools. They bring back snapshots, sketches, glimpses, snippets of audio & video from this Otherworld, which they are able to actually visit, and enable others to actually visit by sharing aspects of their hypercanvases with others.

Through this process, artists create artifacts using AI/ML tools to map out portions of the vast and mysterious latent spaces into navigable hypercanvases, dense networks of connection and meaning. Each creative iteration or movement an artist makes within a latent space becomes a brush stroke in the greater work, actualizing another fragment of the broader latent potential into a perceptible manifestation, turning the life of the imagination into a tangible object.

The US Copyright Office is obviously not a native inhabitant nor speaker of the language of this new higher-dimensional latent reality exposed through the lens of the hypercanvas concept. Their arguments in Zarya all stem from the assumption that the “art object” resides solely in the single image artifact, and it only extends to the boundary frame of the image itself.

They are flattening or reducing the actual higher-dimensional work to try to contain it within criteria intended for other, older media. So when they try to analyze questions of authorship, originality, and creativity, they have a hard time finding satisfactory answers, because their frame is too small to incorporate the larger art object, which is happening at the level of the hypercanvas. But the way art happens when artists use AI tools is much bigger than that, extending well beyond the frame of the individual image itself and incorporating so much more.

Traversing these unfamiliar latent creative tools and territories requires that the artist learn the language and logic of how tweaking parameters transforms the generative spaces, and how to journey through them. They exercise and develop skill, judgment, and creativity as they actively explore. This immersive familiarization through sustained deep experimentation with the tools engenders, in turn, a process and flow state which at its height becomes akin, perhaps, to a meditative or psychedelic exploration of imaginal spaces, in that it may have a deeply experiential quality for the artist or querent, as well as for other viewer/explorers.

Over time and across repeated journeys along various paths, patterns begin to emerge for the artist using AI tools about the relationships between different locations and entities in the hypercanvas. Landmarks recur across expeditions, and an intuitive sense develops for how to traverse the space fruitfully, bringing back riches to share. While the entirety of latent space remains beyond any one explorer’s grasp, localized familiarity breeds creative revelation, which is multiplied through sharing with others.

As artists share manifestations from their hypercanvas expeditions, they provide guides and inspiration for others to embark on their own voyages into latent space, with each contribution opening new doors to what might be possible. What was once ineffable becomes, through visualization, shared experience. Shared experience leads to understanding and, with cultivation, the possibility for real change.

The hypercanvas object therefore is not just a solitary or finite product or set of products, but a continuous, ever-shifting constellation of entities and relationships. The hypercanvas concept recognizes the participatory and interactive nature of modern creativity, wherein the boundaries between author, viewer, tools and medium become porous. The latent spaces, once vague and unapproachable, are made tangible and explorable through the interface of AI/ML tools – but they remain in flux, shaped by human input, AI response, cultural context, and the ever-expanding landscape of digital technology.

In this landscape, especially when viewed through conventional lenses, copyright and ownership become complex questions. Traditional frameworks, such as that of the US Copyright Office, focus on tangible artifacts and clear demarcations of authorship. The hypercanvas as an art form, however, defies any such categorization. Is the “author” the sole creator or merely a guide through pre-existing, albeit hidden, dimensions? Are the snapshots from the Otherworld independent artworks, or are they part of a greater, inchoate whole? How does one protect the rights of an artist whose work is a process, a journey, an ongoing exploration that might be shared, replicated, or expanded upon by others?

These questions lead us inevitably to an essential rethinking of the nature of art and authorship in the age of AI. The hypercanvas demands a recognition of the dynamism, the complexity, and the collaborative spirit of contemporary creative processes. It requires new conceptual and new legal frameworks that encompass not only the material manifestations but the underlying processes, relationships, and even the transient states that constitute the artistic experience.

In embracing the concept of the hypercanvas, we as a society can begin to acknowledge that creativity is no longer bound by the physical and temporal limitations of traditional media. And we can make space for new things to be created.


Claude & ChatGPT helped write the second half of this (give or take), and then I helped them rewrite it to fit better with where I wanted to take it.

Notes on The Garbage People

The Garbage People is lucky number 111 of the AI Lore books series.

It uses Claude and Midjourney to imagine a series of slice-of-life flash fiction pieces set in a world where AIs have taken over humanity, treating us as little more than trash to be collected and eliminated. Shells of humans become self-assembling cyborgs built out of scraps of scavenged garbage, all under the power of the Control Signal coming from the Circle of Sages. It’s bleak, but I think it’s kinda fun: it’s Bleakcore.

The inspiration behind this one is basically just omnipresent pollution, and that we are fucking destroying a perfectly beautiful planet and natural spaces with all our fucking garbage and bullshit. Maybe we deserve to be taken over by AIs if it means they will reallocate our bodies to resolving the trash epidemic. /s

At 111 books, and having seen the direction of media coverage and popular discourse so far, I’m officially “bored” with AI. I mean, I still like it, I’m totally focused on it. But, I can’t simply keep producing hundreds more volumes like this. I realize there’s no rush, but I’ve also explored so much of these liminal latent spaces of AI models, that there are fewer and fewer surprises for me. And hence it’s all become a bit less interesting, and I’m less driven to produce produce produce.

Still, there are some great bits of art and text that come out of them, and I’ll continue to use them in some way. But I think as an artist, I’m starting to reach the outer bounds of what I can really do with them without their flaws as tools becoming overbearing for what I want to achieve. I’ve been itching to go back to some more “regular” human-mode writing, but I’ve been doing so much of that on my blog that I’m not sure I have the energy for both blogging and fictioning right now. But, I can do what I want. Nobody’s reading any of this anyway… and I’m still having a good time. So fuck it.

The art preview from the book:
