Questionable content, possibly linked

Category: Other Page 80 of 177

Conducting AI

Before I forget, just wanted to jot down a quick note: around the topic of AI-generated art & text, people worry it will “replace” artists, when really what it will do is re-contextualize the role of artist as more of a conductor, rather than say a soloist, or whatever… Half-baked, but there it is.

The Dissolution of Meaning

A lot of times, I will search Google for something, click through to a page that “seems” like information, and then discover in a surface skim that it’s actually basically junk and/or trying to sell you a product above and beyond the mere SEO manipulation. In those cases, I feel had, to a certain extent, even if the failure is in many ways Google’s for bringing me this junk in the first place and trying to hide it among or in place of “real” meaning and information.

Which of course pushes my heavy experimentation with AI writing tools to produce books into a certain state of tension. I know that; I own it. It’s the uncanny valley of delight and terror that I choose to play in. Because I know in that tension itself is something to be unwound and explored.

If a book is wholly or partially written by an AI, what impact does that actually have on it? Is it “better” or “worse” in some way, because there is either a lesser or else different impulse behind its creation? Is it more or less “worthwhile” or “valuable”?

In my case (and I should preface this by saying that I don’t necessarily concern myself with, or strictly care about, “authorship”; that’s a hang-up I’ve chosen to put aside…), I see these books as an interrogation of the technologies themselves. Personally, I don’t like when people call the tools in their current state “AIs.” I feel that’s a tremendous overshoot when it’s really just machine learning applied at various scales. But that’s a subtlety that’s lost on the masses, who just want a good headline to click on, and then ignore the article’s actual contents.

Which is a pattern we’re all used to. It’s, in a way, fundamental I think to the hyperlink, though it had to be laundered through a decade or two of dirtying human nature first to become really readily apparent.

I don’t really agree with that one dude’s estimation that LaMDA is a “sentient” chatbot, but I’ve played with others enough to know that there is a spooky effect here, probably latent in human consciousness, or in material-cosmic consciousness itself. We’re gonna project our own meaning into it, even if that meaning is “this is crap” or “this is fake”, all valid reactions, just as much as “this is fun” or “this is good.”

Why shouldn’t we ask these technologies, though, what they “think,” leaving aside their actual ontological status (which is unknowable)? And just see what they say, and then ask them more questions, and more.

What if the answers they give are “wrong” or “false” or “bad” or “dangerous?” What if they are misinformation or “disinformation”, or advocate criminal acts, or suicide?

The problem is the dissolution of meaning, to which these are only an accelerant, not the underlying cause (though they will certainly fuel a feedback loop). These tools are terrible at holding a narrative thread: at keeping track of characters in a scene, of what’s going on or how we got here, let alone where we are going. In a way, that’s freeing, to smash narrative unity. I don’t think I’m the first creator to discover this freedom, either.

Wikipedia:

Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself.[1] Its aim was, according to leader André Breton, to “resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality”, or surreality.

And surrealism’s cousin Dadaism:

“Developed in reaction to World War I, the Dada movement consisted of artists who rejected the logic, reason, and aestheticism of modern capitalist society, instead expressing nonsense, irrationality, and anti-bourgeois protest in their works.”

Moving on to hyperreality (and I will trot this quote out endlessly forever and ever, amen):

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

I like this idea that there is a reality above reality in which contradictions are merged, and taken rightly as significant elements of a greater whole which encompasses them all, much like the Hypogeum of Quatrian lore. That what is “real” and “unreal” are merely glimpses on a continuum of experience itself.

If there is any beauty or truth to be found in any of those arts, then there must be too in the merging and dissolution of meaning and non-meaning that we see so strongly emergent as a current in AI-produced art (including literature).

Why not let the reader’s confusion about what was written by AI and what by human be a part of the longing they have to develop and nurture, that desire to understand, or at least swim or float in the sea of non-meaning?

Does it degrade meaning? Does it uplift non-meaning? Non-meaning is another form of meaning. “Alternative facts,” alternative fictions. Who picks up and leaves off? You? A robot? A corporation? A government? Each an authority, each offering their assessment. All of which one could take or leave, depending on who has the arms and means of enforcement. The game then begins, I guess, to be simply how to navigate these waters: how not to get too hung up, how not to get too exploited, how not to be so weighed down, when you dive into the depths of rabbit holes under the sea, that you can’t come back up again and still be free.

Free to find bad search results. Free to be misled, and free to mislead. All is sleight of hand and stage direction. Everything gardens, trying to manipulate its environment to create optimal conditions for its own survival. Including AI, including all of us. We need new tools to understand. We need brave explorers to sail off into unmeaning, and bring back the treasures there, for not all life is laden and wonderful. Much is viral and stupid. Much is lost, much to be gained.

Strictly necessary

Sorry internet, there is no such thing as a “strictly necessary” cookie for me to look at a page of text. If that were true, every time I looked at a physical book or piece of paper with words on it, I would need a cookie. Luckily — for now — that is still not true. And kindly go to hell for claiming this self-serving illusion as true.

The Flight Forward

Via Cesar Aira wikipedia page:

Aira has often spoken in interviews of elaborating an avant-garde aesthetic in which, rather than editing what he has written, he engages in a “flight forward” (fuga hacia adelante) to improvise a way out of the corners he writes himself into.

Taking the Short Path

Constructing Meaning from Hypertexts

We often hear in discourse related to “mis” and “dis” information online that it’s the fault of the people (victim-blaming) who stumble across this content (or are served it by “algorithms”) for not being intelligent enough, for being too gullible, or for lacking media literacy.

But I think the truth is that there’s something fundamental, integral to the structure of the web technologies themselves that rewards what I think of as “short hop” or short path foraging.

That is, if you’re looking for x (whatever that may be), the technologies are all skewed towards getting you to x as fast as possible. The most accurate search results. The most finely tuned recommendation engine for songs you will like. Wandering is considered undesirable (though doomscrolling is okay). We could say it’s a favoring of instant gratification, but really I think it’s something to do with the nature of hypertexts and hyperlinks themselves.

That is, the simple act of linking from one text to another rewards these short direct hops. We need not know or trust either the originating source text or the target text we land on. We simply go from one to the next, skimming for the bits that relate to our x, which is a kind of meaning that we construct on the fly from these disparate pieces.

Insofar as any given piece serves either x directly, or else the underlying impulse behind x (let’s call it z), the rest doesn’t matter. We don’t want a course correction after the fact. We don’t feel the need or desire to dig deeper once the initial need is met, regardless of the quality or authenticity of how it is met.
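This short-hop dynamic can be sketched as a toy model: follow links outward from a starting page and stop at the very first text that mentions x, with no regard for the trustworthiness of source or target. The pages and their contents here are entirely invented for illustration.

```python
# Toy model of "short hop" foraging across hyperlinks. The forager stops
# at the first page whose text satisfies x; quality is never checked.
from collections import deque

pages = {
    "search_results": {"text": "", "links": ["seo_junk", "forum_post"]},
    "seo_junk": {"text": "buy our product", "links": ["landing_page"]},
    "forum_post": {"text": "tide tables for the bay", "links": []},
    "landing_page": {"text": "tide tables, 50% off", "links": []},
}

def forage(start, x):
    """Breadth-first hop through links; return the first page matching x."""
    queue, seen = deque([start]), {start}
    while queue:
        page = queue.popleft()
        if x in pages[page]["text"]:
            return page  # need met; authenticity of the source irrelevant
        for nxt in pages[page]["links"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(forage("search_results", "tide tables"))
```

Note that whether the forager lands on the forum post or the sales page depends only on link order, never on which source is more trustworthy, which is the structural point.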

So rather than treating the technology as such as something neutral, or blaming the end result on whoever uses it (“they should have read more, or better”), I think we need to focus structurally on how the technologies themselves work to produce x or fulfill z in the first place.

What does that mean exactly in practice? Do we need to change how hyperlinks work? I don’t know yet, but I doubt that’s the solution. The hyperlink at root is just too basic and good a thing (and so fundamentally overlooked in the discourse). This is just a blog post to trial balloon some of these ideas. Will continue to chip away at this.

Evolutionary AI World-Building

After having experimented quite a lot with AI content generation (both images and text), it’s got me thinking about what are the next level applications for story-tellers and world-builders.

The first is obvious: integrate AI text and image generation tools directly into writing/publishing software (WordPress? Medium? Scrivener? Vellum?). Hm, it does appear there is a WP Stable Diffusion tool you can use directly within blocks. I think Novel AI also lets you generate text or images (and there was some big stupid kerfuffle about them in the SD community that I largely ignored the details of).

The second (or alternate first) is like Playground AI is doing, where you can use either Dall-E or SD models right from within the same UI. TextSynth does this for open source alternatives to GPT-3.

So as a writer/world-builder, you’d want to be able to effortlessly switch between generation tools, all from within the same comfortable UI that you’re used to, or that you prefer from the available tools. And then you’d string together and be able to edit & reconfigure items in the composition (sequence) you are creating.

But then there’s another couple layers on top of that, which I’m calling maybe the metaversal and evolutionary layers.

The metaversal or world-building layer would be the web of connections, entity names, attributes & relationships. You could enter whatever training data you want there, and it would also automatically track, cross-reference, etc. (including compares/diffs: not just file versions, but “factual” references in-world across diverse documents).

The evolutionary layer would then be like, you could walk away from the app, and it would sort of automatically progress or generate within certain constraints customizable by the user, based on the content in its training data, or that has already been auto-generated and approved by the user.

So when you come back, it would present you with various events, suggestions, problems, checkpoints, or other evolutionary decisions. As the user, you could peruse them, and rate them, choose whether they are right/wrong for the world, and rate them according to some scale of how canonical they are. You could save or trash new text and image developments. Spin out, cancel, or merge timelines.

Also included would be character AIs where at key points, you could interact by chat messages with characters. You could do it as yourself to train/instruct characters (e.g., stage directions), or you could do it in-character (using the guise of someone else in the world), and this could become the basis for canonical text.

You would be able to take all these artifacts, and arrange them into groups, and sequences, and export them to various formats to work with other applications (such as outputting files for an ebook or a blog post, or even an audiobook or video down the road).
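As a very loose sketch of what the metaversal and evolutionary layers might look like as a data model: entities with attributes and relationships, plus generated artifacts that the user rates for canonicity and assigns to timelines. Every name and field here is hypothetical, my own invention, not any existing tool’s API.

```python
# Hypothetical data model for the "metaversal" (entities + relations) and
# "evolutionary" (proposed artifacts awaiting user verdicts) layers.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (relation, other_name)

@dataclass
class Artifact:
    text: str
    timeline: str = "main"
    canonicity: int = 0  # user rating: -1 rejected .. 2 fully canon

class World:
    def __init__(self):
        self.entities = {}
        self.artifacts = []

    def add_entity(self, name, **attributes):
        self.entities[name] = Entity(name, attributes)

    def relate(self, a, relation, b):
        self.entities[a].relations.append((relation, b))

    def propose(self, text, timeline="main"):
        """An 'evolutionary' suggestion awaiting the user's verdict."""
        art = Artifact(text, timeline)
        self.artifacts.append(art)
        return art

    def canon(self, timeline="main", minimum=1):
        """Everything the user has approved on one timeline."""
        return [a.text for a in self.artifacts
                if a.timeline == timeline and a.canonicity >= minimum]

w = World()
w.add_entity("Hypogeum", kind="place")
w.add_entity("Quatria", kind="civilization")
w.relate("Hypogeum", "located_in", "Quatria")
suggestion = w.propose("A flood reshapes the Hypogeum's lower halls.")
suggestion.canonicity = 2  # user marks it canon
print(w.canon())
```

Spinning out or merging timelines would then just be filtering and re-tagging artifacts by their `timeline` field; the hard part (which this sketch waves away entirely) is the generation and cross-referencing behind `propose`.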

In looking around for World-Building AIs, there are certain tools out there that do aspects of this, but none of them does all of it together. What I’ve also learned as a user and worker in software is that I don’t really want to trust someone else’s paid subscription service. What happens if they go out of business? What happens if they turn off a key feature?

WordPress is one of the only models for me that still holds any water, where I can run my own instance, and be in complete control of it, and host it myself. There’s a big developer community & regular updates, but I don’t have to take them if I don’t want to, or I can augment on my own. Running locally would also probably be a desirable option. I’ve seen people mention Obsidian in a similar context for running locally, and being highly extensible (runs on markdown, I believe), but I never got into it when I tinkered with it, and it lacks all these evolutionary capacities I mentioned, which is what would take this all to the next level.

Cultural hegemony (Marxism)

Wikipedia:

“…cultural hegemony is the dominance of a culturally diverse society by the ruling class who manipulate the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm.[1] As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class”

Dictionary of the Khazars

This sounds really interesting & I just ordered it:

The novel takes the form of three cross-referenced mini-encyclopedias, sometimes contradicting each other, each compiled from the sources of one of the major Abrahamic religions (Christianity, Islam, and Judaism). In his introduction to the work, Pavić wrote:

No chronology will be observed here, nor is one necessary. Hence each reader will put together the book for himself, as in a game of dominoes or cards, and, as with a mirror, he will get out of this dictionary as much as he puts into it, for you […] cannot get more out of the truth than what you put into it.[3]

Fiction / non-fiction as not useful distinction

I usually categorically hate Quora answers, but this excerpted bit is not bad:

‘Fiction’ or ‘non-fiction’ doesn’t help us very much, and ‘fake’ and ‘real’ help us less. In a particular work, the better questions are ‘what kind of work (genre) is this?’; ‘what truths might it be teaching?’. If nothing else, a little discipline with those questions might alleviate some of the pointless and misguided wrangles which recur with disheartening frequency.

I agree that, especially with hyperreal texts, the validation of “facts” is only one of many possible avenues of experience. And it may not always be the most useful, versus a more structural (or structuralist) approach. It may be that the references, relationships, and re-arrangings have greater relevance than the more fluid factual/counter-factual inclusions.

Co-defining narratives

I often feel while working with AI content generation tools (Stable Diffusion, Dall-e, Fairseq & GPT-NeoX are the main ones I use) that something like a co-defined narrative unfolds from the process. I saw someone in an article call it, I think, divinatory. Each of the participants, the human prompter and the AI model, offers their own bits and replies back to the other. Then you look at what they output, and you re-steer and kind of lean back into that.

It strikes me this happens with the reader also, who happens into this chaotic non-linear narrative latent space. Each might only read or look at a small subset of artifacts; a very few ingest a great number. Each comes in through their own path, and leaves their own trail across the nodes of the narrative that they traverse, in the unique sequence they themselves traverse it in.
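The co-defined loop itself has a simple shape: the human prompts, the model replies, the human re-steers off the reply. A minimal sketch, where `model` is just a stand-in function (a real call to Stable Diffusion, GPT-NeoX, etc. would slot in the same way):

```python
# Minimal sketch of the co-defined narrative loop. `model` is a stub,
# not a real API; only the alternating shape of the loop is the point.
def model(prompt):
    # Stand-in for a real generation call.
    return f"[generated continuation of: {prompt!r}]"

def co_write(seed, resteer, turns=3):
    """Alternate human steering and model generation, keeping a trail."""
    trail = [("human", seed)]
    prompt = seed
    for _ in range(turns):
        reply = model(prompt)
        trail.append(("model", reply))
        prompt = resteer(reply)  # the human leans back into the output
        trail.append(("human", prompt))
    return trail

# The human's re-steer: pick up the model's thread and push it somewhere.
trail = co_write("a city beneath the sea",
                 lambda reply: reply + " ...but seen from above")
for who, text in trail[:4]:
    print(who, ":", text[:60])
```

The `trail` is exactly the kind of path-through-nodes described above, which is why each reader (or co-writer) ends up with a different one.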

