Apparently, a number of politicians who live in their own alternate universe have somehow gotten advance copies of my new novel, Conspiratopia, and without probably even having read it and stuff, they are calling for it to be banned. Just like that! Go figure. Thought this was still a “free country”?
All I can say is that politicians should spend more time reading books, and less time burning them!
I can’t really believe any of this is actually happening…
Even frickin’ Ben Shapiro is apparently getting in on this action? WTH??
Even though these politicians who are apparently living in their own parallel universe are vehemently against my new book, Conspiratopia, it appears that another segment of the population is coming to the book’s defense. It is, however, an unexpected group, consisting of a coalition of billionaires who claim that everything contained in the book is in fact quite true and stuff…
Here are their stories:
To be honest, I had no idea that George Soros was a drug user. Big, if true!
Jeff Bezos has a weird quality in this video. Seems almost like an AI himself, don’t you think? Maybe he spent too much time in outer space or something…
And this last video from Google’s CEO appears to explain why Google is suppressing evidence of the Conspiratopia Project from Google Ads and elsewhere. Why am I not surprised at all?
Please, if you’re reading this, and you can do anything to help, make sure you share these videos far and wide on social media and on the blockchain, so that people can know the truth about what’s really happening with the Conspiratopia Project!
There’s a really good article in The Verge about authors who use AI tools like Sudowrite as part of their writing workflow. Lost Books has released about a dozen books in this genre now, which comprise the AI Lore series.
Anyway, there are a few themes I want to tease out, namely the feeling of disconnection & disorientation that seems to be a common experience for authors using these tools.
One author quoted says:
“It was very uncomfortable to look back over what I wrote and not really feel connected to the words or the ideas.”
And:
“But ask GPT-3 to write an essay, and it will produce a repetitive series of sometimes correct, often contradictory assertions, drifting progressively off-topic until it hits its memory limit and forgets where it started completely.”
And finally:
“And then I went back to write and sat down, and I would forget why people were doing things. Or I’d have to look up what somebody said because I lost the thread of truth,” she said.
Losing the “thread of truth” strikes me as utterly & inherently postmodern af. It’s the essence of hyperreality.
It is the essence of browsing the web. You pop between tabs and websites and apps and platforms. You follow different accounts, each spewing out some segment of something. And then somewhere in the mix, your brain mashes it all together into something that sort of makes sense to you in its context (“sensemaking”), or doesn’t — you lose the thread of truth.
To me, hyperreality as an “art form” (way of life?) has something to do with that. With the post-truth world, as they say, where truth is what resonates in the moment. What you “like” in platform speak, what you hate, what you fear, just then, just now. And then it’s forgotten, replaced by the next thing. Yet the algorithm remembers… or does it? It may be “recorded,” but it knows little to nothing on its own, without the invocation.
Forgive me as I ramble here, but that’s why this is a blog post…
Pieces I’ve been meaning to put together in this space.
“Networked narratives can be seen as being defined by their rejection of narrative unity.”
https://en.wikipedia.org/wiki/Networked_narrative
The PDF Wikipedia goes on to reference regarding narrative unities has some worthwhile material on the topic. From it, we see these are perhaps more properly called Dramatic Unities (via Aristotle — an ancient blogger if ever there was one), or, per Wiki’s redirect here, Classical Unities:
1. unity of action: a tragedy should have one principal action.
2. unity of time: the action in a tragedy should occur over a period of no more than 24 hours.
3. unity of place: a tragedy should exist in a single physical location.
Popping back to the Wikipedia networked narrative page:
“It is not driven by the specificity of details; rather, details emerge through a co-construction of the ultimate story by the various participants or elements.”
Lost the thread of truth, “not driven by the specificity of details.”
While we’re in this uncertain territory, we should at least quote Wikipedia again on hyperreality:
“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”
What I guess I want to note here – in part – is that what the Verge article quoted at top seems to be considering obstacles, bugs, or room for improvement around the lack of apparent coherence of AI-generated texts… is actually probably its primary feature?
Disorientation as a Service.
Jumping now to latent spaces, as in AI image generation:
“A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items which resemble each other more closely are positioned closer to one another in the latent space.”
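The definition above is abstract, but the core idea is simple enough to sketch: embed items as points so that resemblance becomes distance. Here’s a toy illustration in Python — the items and their coordinates are entirely invented for the example, not taken from any real model:

```python
import math

# Toy latent space: each item is embedded as a point, and "resemblance"
# becomes geometric distance -- similar items sit closer together.
embeddings = {
    "cat":    (0.90, 0.80),
    "kitten": (0.85, 0.75),
    "truck":  (-0.70, 0.10),
}

def latent_distance(a, b):
    """Euclidean distance between two items in the toy latent space."""
    return math.dist(embeddings[a], embeddings[b])

# "cat" and "kitten" resemble each other, so they lie much closer
# together in this space than "cat" and "truck" do.
```

Real embedding spaces have hundreds or thousands of dimensions rather than two, but the geometry-as-meaning intuition is the same.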
This Vox video is probably the most complete and accessible explanation I’ve seen of how image diffusion models work:
My understanding of it is basically that a text query (in the case of Dall-E & Stable Diffusion) triggers access to the portion(s) of the latent space within the model that correspond to your keywords, and then mashes them together visually to create a cloud of pixels that references those underlying trained assets. Depending on your level of processing (“steps” in Stable Diffusion), the diffuse pixel cloud becomes more precisely representative of some new vignette that references your original query or prompt.
So it sort of plucks what you asked for out of its matrix of possible combinations, and gives you a few variations of it. Kind of like parallel dimension representations from the multiverse.
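That “pixel cloud sharpening over steps” idea can be caricatured in a few lines of Python. To be clear, this is a deliberately toy sketch of iterative refinement, not an actual diffusion model — the “embedding” below is just a deterministic stand-in for looking up the prompt’s region of latent space:

```python
import hashlib
import random

def toy_prompt_embedding(prompt, dim=8):
    # Hypothetical stand-in: derive a deterministic "target point" from
    # the prompt text, playing the role of the prompt's latent region.
    seed = int(hashlib.sha256(prompt.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(dim)]

def toy_diffusion(prompt, steps=50):
    target = toy_prompt_embedding(prompt)
    # Start from pure noise -- the "diffuse pixel cloud".
    x = [random.uniform(-1, 1) for _ in range(len(target))]
    for _ in range(steps):
        # Each step nudges the noise toward the prompt's region; more
        # steps yields a sharper, more "representative" result.
        x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]
    return x, target

sample, target = toy_diffusion("found photo expired film dramatic lighting")
```

Run it with `steps=5` versus `steps=50` and you can watch the sample converge on its target — a cartoon of why more sampling steps in Stable Diffusion produce a more resolved image.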
Which leads me to the Jacques Vallee quote that has been stirring around in the corners of my mind for some twenty-odd years now:
Time and space may be convenient notions for plotting the progress of a locomotive, but they are completely useless for locating information …
What modern computer scientists have now recognized is that ordering by time and space is the worst possible way to store data. In a large computer-based information system, no attempt is made to place related records in sequential physical locations. It is much more convenient to sprinkle the records through storage as they arrive, and to construct an algorithm for the retrieval based on some kind of keyword …
(So) if there is no time dimension as we usually assume there is, we may be traversing events by association.
Modern computers retrieve information associatively. You “evoke” the desired records by using keywords, words of power: (using a search engine,) you request the intersection of “microwave” and “headache,” and you find twenty articles you never suspected existed … If we live in the associative universe of the software scientist rather than the sequential universe of the spacetime physicist, then miracles are no longer irrational events.
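What Vallée is describing — "evoking" records by the intersection of keywords rather than by physical location — is essentially an inverted index. A minimal sketch in Python (the records here are invented, riffing on his "microwave" and "headache" example):

```python
# Toy inverted index: records are "sprinkled" into storage in arrival
# order, and retrieval works purely by keyword association -- no
# spatial or temporal ordering involved, just as Vallee describes.
records = {
    1: "microwave radiation and reported headache symptoms",
    2: "locomotive timetables and scheduling",
    3: "microwave oven repair manual",
    4: "headache triggers: light, noise, microwave towers",
}

# Build the index: each word maps to the set of records containing it.
index = {}
for rec_id, text in records.items():
    for word in set(text.replace(":", "").replace(",", "").split()):
        index.setdefault(word, set()).add(rec_id)

def evoke(*keywords):
    """Return the records matching ALL the given keywords at once."""
    sets = [index.get(k, set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

hits = evoke("microwave", "headache")
# Only the records containing both "words of power" come back.
```

Requesting the intersection of "microwave" and "headache" surfaces records 1 and 4 — articles you "never suspected existed" until the right keywords evoked them.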
Vallee’s quote strikes a metaphysical chord which is mostly unprovable (for now) but also feels, experientially speaking, “mostly true” in some ways. Without debating the ontological merits of his argument vis-a-vis everyday reality, it occurs to me that he’s 100% describing the querying of latent spaces.
Of course, he suggests that reality consists of a fundamental underlying latent space, which is a cool idea if nothing else. There’s an interesting potential tangent here regarding paranormal events and “retrieval algorithms” as being guided by or inclusive of intelligences, perhaps artificial, perhaps natural. (And that tangent would link us back to Rupert Sheldrake’s morphogenetic/morphic fields as retrieval algorithms, and maybe the “overlighting intelligences” of Findhorn…) But that’s a tangent for another day.
Anyway, to offer some sort of conclusion, I guess I would say perhaps the best use of AI tools for now, while they are in their current form, is to lean into, chase after, capture that confusion, that disorientation, that losing of the thread, that breaking of narrative unity, and just… go for it. There are as many roads through the Dark Forest as we make.
Have had this thought rattling around in my head forever, about how “karaoke” translated to English means something like “empty orchestra.” I’ve always thought that captured a certain kind of beautiful sense of desolation…
Likewise, for years the emptiness of algorithms has been careening around my awareness. How cold and forbidding they feel when encountered emotionally – which is almost all the time now in daily life. Nudges, notifications, so-called dark patterns. Driving user behaviors. Predicting preferences, recommendation engines.
Once, long ago, I remember hearing about new music through friends. Now Spotify serves up lukewarm drivel “based on your interests,” and sometimes all you can decide amid the information tidal wave is: will this make adequate background mush for the next few moments?
I swear sometimes I can almost hear the algorithms that control many popular music stations on the legacy technology called “radio.” Some scientific calculation based on popularity multiplied by payola multiplied by who knows what, but I don’t like it.
Lately, I listen to Radio Paradise Mellow Mix a lot (internet radio station), and FIP Pop out of France. Both of these are good and feel much less cold and artificial and algorithmy than letting Spotify dictate which kind of audio-mush will be served today. But after hours and days of listening, one might still discern the shapes of those algorithms as well I suppose.
The neologism solastalgia apparently describes nostalgia for a world where nature and the environment were a source of solace, rather than a source of distress and anxiety. What word might describe a life lived free of (or at least more free of) the cold dead reach of the algorithms? Algostalgia? It doesn’t quite roll off the tongue, does it. But I think it’s linked. That once there was a way, a reality, a modality, a type of living that provided solace. Not endless scrolling feeds, behavioral manipulation, yadda yadda. A desire to go back there again, even if perhaps we’ve never been there in the first place: outside the reach of the Empty Algorithm.
A lot of times, I will search Google for something, click through to a page that “seems” like information, and then discover in a surface skim that it’s actually basically junk and/or trying to sell you a product above and beyond the mere SEO manipulation. In those cases, I feel had to a certain extent – even if the failure is in many ways Google’s for bringing me this junk in the first place and trying to hide it among or in place of “real” meaning and information.
Which of course pushes my heavy experimentation with AI writing tools to produce books into a certain state of tension. I know that; I own it. It’s the uncanny valley of delight and terror that I choose to play in. Because I know in that tension itself is something to be unwound and explored.
If a book is wholly or partially written by an AI, what impact does that actually have on it? Is it “better” or “worse” in some way, because there is either a lesser or else different impulse behind its creation? Is it more or less “worthwhile” or “valuable”?
In my case (and I should preface this by saying that I don’t necessarily concern myself with, or strictly care about, “authorship” – that’s a hang-up I’ve chosen to put aside…), I see these books as an interrogation of the technologies themselves. Personally I don’t like when people call the tools in their current state “AIs.” I feel that’s a tremendous overshoot when it’s really just machine learning applied at various scales. But that’s a subtlety that’s lost on the masses who just want a good headline to click on, and then ignore the article’s actual contents.
Which is a pattern we’re all used to. It’s, in a way, fundamental I think to the hyperlink, though it had to be laundered through a decade or two of dirtying human nature first to become really readily apparent.
I don’t really agree with that one dude’s estimation that LaMDA is a “sentient” chat bot, but I’ve played with others enough to know that there is a spooky effect here, probably latent in human consciousness, or in material-cosmic consciousness itself. We’re gonna project our own meaning into it, even if that meaning is “this is crap” or “this is fake” – all valid reactions. Just as much as “this is fun” or “this is good.”
Why shouldn’t we ask these technologies, though, what they “think,” leaving aside their actual ontological status (which is unknowable)? And just see what they say, and then ask them more questions, and more.
What if the answers they give are “wrong” or “false” or “bad” or “dangerous?” What if they are misinformation or “disinformation”, or advocate criminal acts, or suicide?
The problem is the dissolution of meaning, to which these are only an accelerant, not the underlying cause (though they will certainly fuel a feedback loop). These tools are terrible at holding a narrative thread, at keeping track of characters in a scene, what’s going on, or how we got here, let alone where we are going. In a way, that’s freeing, to smash narrative unity. I don’t think I’m the first creator to discover this freedom, either.
“Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself. Its aim was, according to leader André Breton, to ‘resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality,’ or surreality.”
Moving on to hyperrealism (and I will trot this quote out endlessly for ever and ever, amen):
“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”
I like this idea that there is a reality above reality in which contradictions are merged, and taken rightly as significant elements — but elements of a greater whole which encompasses all elements, much like the Hypogeum of Quatrian lore. That what is “real” and “unreal” are merely glimpses on a continuum of experience itself.
If there is any beauty or truth to be found in any of those arts, then there must be too in the merging and dissolution of meaning and non-meaning that we see so strongly emergent as a current in AI-produced art (including literature).
Why not let the reader’s confusion about what was written by AI and what by human be a part of the longing they have to develop and nurture, that desire to understand, or at least swim or float in the sea of non-meaning?
Does it degrade meaning? Does it uplift non-meaning? Non-meaning is another form of meaning. “Alternative facts,” alternative fictions. Who picks up and leaves off? You? A robot? A corporation? A government? Each an authority, each offering their assessment. All of which one could take or leave, depending on who has the arms and means of enforcement. The game then begins, I guess, to be simply how to navigate these waters: how not to get too hung up, how not to get too exploited, how not to be so weighed down when you dive into the depths of rabbit holes under the sea that you can’t come back up again, and still be free.
Free to find bad search results. Free to be misled, and freedom to mislead. All is sleight of hand and stage direction. Everything gardens, trying to manipulate its environment to create optimal conditions for its own survival. Including AI, including all of us. We need new tools to understand. We need brave explorers to sail off into unmeaning, and bring back the treasures there, for not all life is laden and wonderful. Much is viral and stupid. Much is lost, much to be gained.
Wrote this somewhere else that I can’t remember, but wanted to re-capture the thought here as I think it’s an important one.
In my mind, the correct way to use AI tools as an artist (or writer) is not necessarily (strictly) to use them to “create art.” Because anybody can do that, and get similar results, whether “artist” or non-artist… that’s the point, that they are democratizing technologies.
So instead, what feels right to me as an artist (ymmv) is to interact with the tools in such a way that you “interrogate” them broadly, deeply, and meaningfully. In other words, engage with them at a level that only an artist could or would think to do. Whatever that is will be different for everybody (and in the end, perhaps there is no such thing as a “non-artist” – but still). Ask them probing questions, challenge them on what they can do, explore the boundaries of the possible, and document it all alongside the reactions and impacts it all seems to have on real people.
That is, transcend the single-image output as your end product. Obviously, if you’re doing the above deep interrogation of the essence of these tools, you’ll generate hundreds or thousands of intermediate products (whether images or text) along the way. Let that become your trail, the record left in the wake of your passing through with honesty, curiosity, and creative rigor.
Or try DreamStudio if for some reason you’d rather pay Stability.ai to use Stable Diffusion. (Playground’s UI is better, though they have some deal-breaker privacy problems they haven’t solved yet, imo)
Free text generation using several different open source models (e.g., GPT-J, GPT-NeoX, Fairseq). You input sample text and it tries to continue it. Experiment with the “temperature” setting (higher numbers yield weirder results).
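For the curious, the temperature knob has a concrete meaning: the model’s raw next-token scores get divided by the temperature before being turned into probabilities, so low values sharpen the distribution toward the safest pick and high values flatten it toward weirdness. A minimal sketch in Python (the logit values are made up for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores after temperature scaling.

    Low temperature sharpens the distribution (predictable picks);
    high temperature flattens it (weirder, more surprising picks).
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [4.0, 2.0, 0.5, 0.1]  # hypothetical next-token scores
cold = [sample_with_temperature(logits, 0.1, random.Random(s)) for s in range(100)]
hot = [sample_with_temperature(logits, 5.0, random.Random(s)) for s in range(100)]
# At temperature 0.1 nearly every draw is the top-scoring token;
# at 5.0 the draws spread out across all four options.
```

That spreading-out is the “weirder results” you see in practice: unlikely continuations start getting picked.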
Certainly still interesting, but heavily restricted in its abilities now compared to when it was first released. I’m not sure it’s the right product direction as far as “safety” features for all users, even if the underlying model often yields some very good quality content.
I still love it because it gives a completely different look from Stable Diffusion’s model versions, and I feel its use of light and color is often more beautiful than SD’s. My favorite look usually comes from including found photo expired film dramatic lighting in my prompt.
You can pay them directly through OpenAI (i.e., help pay back Microsoft), or buy credits through PlaygroundAI which accesses the Dall-E API. This has the benefit of being I think slightly cheaper per generation than paying OpenAI (which is weird, tbh), and no watermark, which otherwise OpenAI includes by default.
Mage tells me they are rolling out Dall-E support as well over the coming weeks. I will give them a try when they do, as there are a number of things about Playground I find cumbersome in their UI.
Create chat bots by entering a character description; the results are much more creative and fun than ChatGPT seems to be capable of. It’s much more willing to “play along.”
Similar abilities to ChatGPT, possibly slightly lower quality output, but without all the refusals & disclaimers that ChatGPT seems to be leaning more and more into.
AI attribution is the process of identifying and meaningfully labeling content that has been generated in whole or in part by an artificial intelligence (AI) system. This can include things like news articles, social media posts, and research papers, as well as many other formats of both online and offline content.
The goal of AI attribution is to make it clear to readers or viewers whether the content (in its entirety, or elements of it) was created by an automated tool and not a human, as well as to give other meaningful data about the specific provenance of the article. End users can then make their own informed decisions about the content they consume. (For example, some users might choose to disallow all AI-generated content altogether, or only allow content from approved AI information providers.)
There are at least three levels on which AI attribution might occur in online publishing systems such as blogs or social media.
The profile level: the social media account or blog identifies itself as being a publisher of AI-generated or AI-assisted content (i.e., hybrid human-AI content). In ideal circumstances, this self-identification would also carry through to any byline on articles published by the account.
The post or article level: the content, whether a blog or social media post, or other type of online published article informs the viewer at a high level that AI-generated or AI-assisted elements are present. This might occur in different ways depending on the product or media context, including in the byline, as a subtitle, some kind of tag or other prominently displayed visual element (clearly-defined badge or icon), etc.
The inline granular level: the article’s contents themselves are marked up to indicate which parts were input by a human, and which were generated by an AI tool. We explored the experimental method called “AIMark” to apply custom markup or markdown to hybrid AI-assisted texts in more detail here.
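To make the inline granular level concrete, here is one way a publishing system might represent a hybrid text internally: label each span with its source, then derive both a reader-facing rendering and simple provenance stats. The span format and `[AI]…[/AI]` markers below are invented for illustration — they are not the actual AIMark markup:

```python
# Illustrative sketch of inline (granular) AI attribution: each span
# of a hybrid text carries a source label, from which we can render
# visible provenance markers and compute the AI-generated share.
spans = [
    ("human", "I asked the model to describe an empty orchestra, and it wrote: "),
    ("ai",    "the instruments hum softly to no one, tuning forever."),
    ("human", " I kept that line unchanged."),
]

def render(spans):
    """Render with visible markers so readers see provenance inline."""
    return "".join(
        text if source == "human" else f"[AI]{text}[/AI]"
        for source, text in spans
    )

def ai_share(spans):
    """Fraction of characters attributed to the AI tool."""
    total = sum(len(t) for _, t in spans)
    ai = sum(len(t) for s, t in spans if s == "ai")
    return ai / total if total else 0.0
```

A downstream reader or client could then filter on `ai_share` — for instance, hiding anything above a chosen threshold — which is exactly the kind of informed decision-making attribution is meant to enable.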
This is a big topic, which we will continue to explore in subsequent posts.
Was reading this piece earlier by Haley Nahman about blandness and sameness in Netflix’s visual production quality. It raises a lot of interesting points about over-reliance on tools and techniques – and the deadening that can happen to art forms when they’re driven more by speed, efficiency, and profitability than by being necessarily “good.”
Generative AI is going to have the effect of blasting this problem into the stratosphere. And we haven’t yet seen AI visual production tools hit these kinds of mass markets. It’s still largely tinkerers and weirdos with GPUs in their basements creating things.
But the things this diverse band of weirdos tends to create are disappointingly homogenous. Midjourney, in my opinion, is the worst for this. While the images have a tendency to be very well done and often beautiful, I always look at them and think they look “Midjourney.”
Stable Diffusion isn’t too far behind either though. If you go to a site like PlaygroundAI.com’s homepage at any given moment, how many of the featured images are “sexy ladies” that basically all look the same? At this moment, in the first 15, I would say 11 of them fall into that category. That’s pretty much the norm.
If we’re seeing this massive democratizing effect because of generative AI, and all these millions or billions of imaginations are suddenly being unleashed, why is it that we all just end up making totally bland T&A shots?
I think there are at least two parts to it (probably more): the tools are predisposed to certain things – bland mid-distance busts and portraits are one of their strengths – and, hand in hand with that, the users are predisposed to certain things.
My hunch is also that there is a shift with generative AI where being a “creator” is only as important as being able to create the thing you want to consume. The act of creation with these tools is one and the same as consuming it.
Have definitely felt that slightly magical effect a few times using verb.ai in particular, where writing with it truly becomes collaborative, and the storytelling unfolds the way it does because I am the first audience. My invocation causes it to take the shape that it does for me. Yours is different. (Or should be, if our tools don’t force us into homogeneity…)
The process of writing with AI-assisted tools becomes one of assembly, and unfolding. There is a premise, or there is an intention, or there is an improvisation. Invocations. Call & response. Which parts of the conversation make the final cut? Can there ever truly be a final cut?
I digress, but want to return to the intent of attempting to burst the bubble of sameness… If the latent space is nearly infinite, why are we all clustering in this one small corner of it? What else is out there to explore and be uncovered in those wild territories?
A friend said something to the effect that seeing other people’s AI prompt results is a little like hearing other people tell you about their dreams. There may be elements that are interesting or that resonate on occasion, but in a lot of cases, there’s kind of a “huh, weird” response. And that’s about it. ’Cause what can you do… It’s someone else’s dream, and the pieces don’t fit for the hearer the way they do for the dreamer.
So adapting that into AI-storytelling, well, your results (and mileage) may vary. The insane awesome results you personally get in an AI text or image generator that seem exciting enough to you to share with friends or on social media, may have that sort of /shrug effect on other people. There’s something highly personalized about it, probably about the process and context of inquiry which surrounds it. It’s hard to translate that effect to secondary audiences after oneself, without adding some other layer(s) of meaning and context.
It’s part of what I don’t like about Midjourney: that one’s experience as an artist becomes tied up with the UX of Discord as a product. The experience of viewing generative AI images on PlaygroundAI or on Reddit is also flattening. It’s an experience of you as a user on a platform, having your imagination constrained to fit the contours and invisible social guardrails and incentives that drive our behaviors in those environments. It’s art for likes and upvotes, accepting those as proxy replacements for, and measures of, actual goodness and meaning.
That is the real cause of the crushing sameness. But it’s a sameness that is utterly alienating, instead of reassuring. The cruel embrace of the technological corners we have painted ourselves into. All of it illusions. Because now, all things are possible. All planets, all dimensions, all times can be envisioned & visited. Latent space is infinite. Live a little.
“AI-assisted writing is a form of computer-assisted writing that uses artificial intelligence to help writers create content.
AI-assisted writing uses natural language processing and machine learning techniques to automate certain aspects of the writing process, such as grammar, spelling, and style.
AI-assisted writing tools are capable of understanding and analyzing the context of a given text, enabling them to suggest relevant words and phrases to help writers craft their content more efficiently.
AI-assisted writing tools can also be used to generate creative content, such as blog posts, articles, and stories. By using AI-assisted writing, writers can reduce the amount of time required to create content and focus more on the content’s quality. AI-assisted writing is becoming increasingly popular as it allows writers to produce more content at a faster rate…”