Tim Boucher

Questionable content, possibly linked

Information should be ‘dangerous’ (sometimes)

One thing I see a lot of lately is this idea that some information is fundamentally dangerous. I don’t have this all worked out yet, but I wanted to challenge that base assumption by unpacking a little of what constitutes danger in relation to information.

“Dangerous” information would presumably be information that threatens some established conception of order. Let’s put aside for a moment which order is threatened by which information (and whether that order or that information is “legitimate”), and try to understand the nature of a threat in this context…

With regard to information, it seems that a threat here might constitute a challenge or request to *change* a given conception of order. Depending on the circumstance, that may or may not be warranted or a wholly “good thing.” But danger is something that causes things to change: we need to run away to escape danger, or we need to actively neutralize the danger and contain the risk.

Is it possible then to have change in information systems without “dangerous” information that challenges a given order? Maybe. But that seems like it might be “boring” in many contexts (though “dangerous” information will always be unwelcome in some settings that require or at least heavily favor stability).

Will give this more thought and add on as I go, now that I’ve established a beachhead here…

Interrogating AI models as an artist

Wrote this somewhere else that I can’t remember, but wanted to re-capture the thought here as I think it’s an important one.

In my mind, the correct way to use AI tools as an artist (or writer) is not necessarily (strictly) to use them to “create art.” Because anybody can do that, and get similar results, whether “artist” or non-artist… that’s the point, that they are democratizing technologies.

So instead, what feels right to me as an artist (ymmv) is to interact with the tools in such a way that you “interrogate” them broadly, deeply, and meaningfully. In other words, engage with them at a level that only an artist could or would think to do. Whatever that is will be different for everybody (and in the end, perhaps there is no such thing as a “non-artist” – but still). Ask them probing questions, challenge them on what they can do, explore the boundaries of the possible, and document it all alongside the reactions and impacts it all seems to have on real people.

That is, transcend the single-image output as your end product. Obviously, if you’re doing the above deep interrogation of the essence of these tools, you’ll generate hundreds or thousands of intermediate products (whether images or text) along the way. Let that become your trail, the record left in the wake of your passing through with honesty, curiosity, and creative rigor.

A broad suspicion

I’m not so sure this is a wholly negative outcome:

“In short, people might develop a broad suspicion that the images and text we encounter online are completely unverifiable.”

(Source)

A Hallucinating Supercomputer

Via:

“A supercomputer reads huge amounts of text and learns to hallucinate what words might come next in any given situation…

Needless to say, a hallucinating supercomputer is not a source of reliable information.”
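To make that quote a bit more concrete: stripped of scale, “learning to hallucinate what words might come next” is just learning, from training text, which words tend to follow which. A toy sketch (my own illustration, not how any production model actually works — real models learn probability distributions over huge vocabularies with neural networks, not lookup tables):

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count, in a tiny "training
# corpus," which word follows which, then predict the most common follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram table: follows[w] counts every word seen immediately after w.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick the most frequent continuation; a real language model samples
    # from a learned probability distribution instead of taking the max.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("the"))  # → "cat" (it follows "the" most often above)
```

The point of the quote survives the simplification: the model has no notion of truth, only of what text statistically tends to come next.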

Reality Collapse

Via former policy director at OpenAI, Jack Clark:

“All these generative models point to the same big thing that’s about to alter culture; everyone’s going to be able to generate their own custom and subjective aesthetic realities across text, video, music (and all three) in increasingly delightful, coherent, and lengthy ways. This form of fractal reality is a double-edged sword – everyone gets to create and live in their own fantasies that can be made arbitrarily specific, and that also means everyone loses a further grip on any sense of a shared reality. Society is moving from having a centralized sense of itself to instead highly individualized choose-your-own adventure islands, all facilitated by AI. The implications of this are vast and unknowable. Get ready.”

Like, Subscribe, ☠

One thing I don’t see people going on about with the demise of Twitter is that this has opened their eyes to the futility of chasing likes and follows on *any* social platform. Yes, you could find another platform and try your same hustle there as well. But why? Did you learn nothing? Do you want to stay on this same cycle the rest of your life, and possibly be forced to continue it after your death as a digital upload too? Cause that’s where this is going. It’s time to climb down off the carousel.

Slipstream genre

Wikipedia:

“Sterling later described it in an article originally published in SF Eye #5, in July 1989, as ‘a kind of writing which simply makes you feel very strange; the way that living in the twentieth century makes you feel, if you are a person of a certain sensibility.’”

And:

“Slipstream fiction has been described as ‘the fiction of strangeness’,[4] or a form of writing that makes ‘the familiar strange or the strange familiar’ through skepticism about elements of reality.[5] Illustrating this, prototypes of the style of slipstream are considered to exist in the stories of Franz Kafka and Jorge Luis Borges.[6]”

h/t AS for pointing this one out.

Conducting AI

Before I forget, just wanted to jot down a quick note: around the topic of AI-generated art & text, people worry it will “replace” artists, when really what it will do is re-contextualize the role of artist as more of a conductor, rather than say a soloist, or whatever… Half-baked, but there it is.

The Dissolution of Meaning

A lot of times, I will search Google for something, click through to a page that “seems” like information, and then discover in a surface skim that it’s actually basically junk and/or trying to sell me a product above and beyond the mere SEO manipulation. In those cases, I feel had to a certain extent, even if the failure is in many ways Google’s for bringing me this junk in the first place and trying to hide it among or in place of “real” meaning and information.

Which of course pushes my heavy experimentation with AI writing tools to produce books into a certain state of tension. I know that; I own it. It’s the uncanny valley of delight and terror that I choose to play in. Because I know in that tension itself is something to be unwound and explored.

If a book is wholly or partially written by an AI, what impact does that actually have on it? Is it “better” or “worse” in some way, because there is either a lesser or else a different impulse behind its creation? Is it more or less “worthwhile” or “valuable”?

In my case (and I should preface this by saying that I don’t necessarily consider myself an “author” or strictly care about “authorship,” a hang-up I’ve chosen to put aside…), I see these books as an interrogation of the technologies themselves. Personally I don’t like when people call the tools in their current state “AIs.” I feel that’s a tremendous overshoot when it’s really just machine learning applied at various scales. But that’s a subtlety that’s lost on the masses who just want a good headline to click on, and then ignore the article’s actual contents.

Which is a pattern we’re all used to. It’s, in a way, fundamental I think to the hyperlink, though it had to be laundered through a decade or two of dirtying human nature first to become really readily apparent.

I don’t really agree with that one dude’s estimation that LaMDA is a “sentient” chatbot, but I’ve played with others enough to know that there is a spooky effect here, probably latent in human consciousness, or in material-cosmic consciousness itself. We’re gonna project our own meaning into it, even if that meaning is “this is crap” or “this is fake,” all valid reactions, just as much as “this is fun” or “this is good.”

Why shouldn’t we ask these technologies, though, what they “think,” leaving aside their actual ontological status (which is unknowable)? And just see what they say, and then ask them more questions, and more.

What if the answers they give are “wrong” or “false” or “bad” or “dangerous?” What if they are misinformation or “disinformation”, or advocate criminal acts, or suicide?

The problem is the dissolution of meaning, to which these are only an accelerant, not the underlying cause (though they will certainly fuel a feedback loop). These tools are terrible at holding a narrative thread, at keeping track of characters in a scene, what’s going on, or how we got here, let alone where we are going. In a way, that’s freeing, to smash narrative unity. I don’t think I’m the first creator to discover this freedom, either.

Wikipedia:

Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself.[1] Its aim was, according to leader André Breton, to “resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality”, or surreality.

And surrealism’s cousin Dadaism:

“Developed in reaction to World War I, the Dada movement consisted of artists who rejected the logic, reason, and aestheticism of modern capitalist society, instead expressing nonsense, irrationality, and anti-bourgeois protest in their works.”

Moving on to hyperreality (and I will trot this quote out endlessly for ever and ever, amen):

“Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins.”

I like this idea that there is a reality above reality in which contradictions are merged, and rightly taken as significant elements of a greater whole which encompasses all of them, much like the Hypogeum of Quatrian lore. That what is “real” and “unreal” are merely glimpses along a continuum of experience itself.

If there is any beauty or truth to be found in any of those arts, then there must be too in the merging and dissolution of meaning and non-meaning that we see so strongly emergent as a current in AI-produced art (including literature).

Why not let the reader’s confusion about what was written by AI and what by human be a part of the longing they have to develop and nurture, that desire to understand, or at least swim or float in the sea of non-meaning?

Does it degrade meaning? Does it uplift non-meaning? Non-meaning is another form of meaning. “Alternative facts,” alternative fictions. Who picks up and who leaves off? You? A robot? A corporation? A government? Each an authority, each offering its assessment, all of which one could take or leave, depending on who has the arms and means of enforcement. The game then becomes, I guess, simply how to navigate these waters: how not to get too hung up, how not to get too exploited, how not to be so weighed down when you dive into the depths of rabbit holes under the sea that you can’t come back up again and still be free.

Free to find bad search results. Free to be misled, and free to mislead. All is sleight of hand and stage direction. Everything gardens, trying to manipulate its environment to create optimal conditions for its own survival. Including AI, including all of us. We need new tools to understand. We need brave explorers to sail off into unmeaning and bring back the treasures there, for not all life is laden and wonderful. Much is viral and stupid. Much is lost, much to be gained.

Strictly necessary

Sorry internet, there is no such thing as a “strictly necessary” cookie for me to look at a page of text. If that were true, every time I looked at a physical book or piece of paper with words on it, I would need a cookie. Luckily — for now — that is still not true. And kindly go to hell for claiming this self-serving illusion as true.
