Questionable content, possibly linked


Quatria Truthers Protest Government Cover-Up

Someone on Twitter sent me this photo from a protest recently. I guess a few hardcore “Quatria Truthers” are getting pretty fed up about global governments hiding the facts about what’s really going on in Antarctica right now. Can’t say that I blame them tbh…

Quatria Truthers Protesting Against Government Lies

Analysis: Side-By-Side Wikipedia Comparison Diagram (Quatria)

A small minority of free-thinkers on Reddit (archive) are questioning the validity of the recent exposé of Wikipedia’s blocking of Quatria search terms. Others are asking whether it all isn’t controlled opposition to distract us from the truth about Tartaria. They are right to do so, because the price of liberty is eternal vigilance, as the saying goes.

One especially vigilant Reddit user analyzed the Quatria Wikipedia screenshot using advanced AI techniques, and made this screenshot comparing it to the Piri Reis map page.

https://i.imgur.com/LoUbxbq.jpg

It’s not exactly a smoking gun, but the arguments made by this user are pretty compelling, I have to say. They just might have changed my mind on the whole thing…

Should Conspiratopia Be Banned?

Apparently, a number of politicians who live in their own alternate universe have somehow gotten advance copies of my new novel, Conspiratopia, and without probably even having read it and stuff, they are calling for it to be banned. Just like that! Go figure. Thought this was still a “free country”?

All I can say is that politicians should spend more time reading books, and less time burning them!

I can’t really believe any of this is really happening…

Even frickin’ Ben Shapiro is apparently getting in on this action? WTH??

This is exactly why I put the book onto the blockchain as an NFT! So these censorship-happy conservatives can’t ban my book…

On “Dangerous” fictions

Found this piece from July 2022 by Cory Doctorow, where he talks about an author who was apparently a protégé of Philip K. Dick’s, whom I’d never heard of – Tim Powers.

In it, he brings up an oft-repeated trope regarding “dangerous” fictions, a pet topic of mine:

“The Powers method is the conspiracist’s method. The difference is, Powers knows he’s making it up, and doesn’t pretend otherwise when he presents it to us. […]

The difference between the Powers method and Qanon, then, is knowing when you’re making stuff up and not getting high on your own supply. Powers certainly knows the difference, which is why he’s a literary treasure and a creative genius and not one of history’s great monsters.”

As popular as this type of argument is (and Douglas Rushkoff trots out something similar here and here), I personally find it to be overly simplistic and a bit passé.

First of all, I would argue that all writers – by necessity – must get “high on their own supply” in order to create (semi) coherent imaginal worlds and bring them to fruition for others to enjoy. Looking sternly at you here, Tolkien. In fact, perhaps the writers who get highest on their own supply are in some cases the best…

Second, no one arguing in favor of this all-or-nothing position (fiction must be fiction must be fiction) seems to have taken into account the unreliable narrator phenomenon in fiction.

Wikipedia calls it a narrator whose credibility is compromised:

“Sometimes the narrator’s unreliability is made immediately evident. For instance, a story may open with the narrator making a plainly false or delusional claim or admitting to being severely mentally ill, or the story itself may have a frame in which the narrator appears as a character, with clues to the character’s unreliability. A more dramatic use of the device delays the revelation until near the story’s end. In some cases, the reader discovers that in the foregoing narrative, the narrator had concealed or greatly misrepresented vital pieces of information. Such a twist ending forces readers to reconsider their point of view and experience of the story. In some cases the narrator’s unreliability is never fully revealed but only hinted at, leaving readers to wonder how much the narrator should be trusted and how the story should be interpreted.”

My point is that the un/reliability of the “narrator” can extend all the way through to the writer themself. (And what if the reader turns out to be unreliable?)

Can we ever really know for certain if a writer “believed” that thing x that they wrote was wholly fictional, wholly non-fictional, or some weird blend of the two? Do we need to ask writers to make a map of which elements of a story are which? Isn’t that in some sense giving them more power than they deserve?

Moreover, if the author is an unreliable narrator (and to some extent every subjective human viewpoint is always an unreliable narrator to some degree), how can we ever trust them to disclose to us responsibly whether or not they are indeed unreliable? Short answer is: we can’t. Not really.

This is one of those “turtles all the way down” arguments, in which (absent other compelling secondary evidence) it may be difficult or sometimes impossible to strike ground truth.

All of this boils down for me to the underlying argument of whether one must label fictional works as fiction, and if not doing so is somehow “dangerous.”

The Onion’s Amicus Brief earlier this year argued why parody and satire should not be required to be overtly labelled – because it robs these millennia-old art forms of their structural efficacy, their punch as it were.

Wikipedia’s Fiction entry’s history section is sadly quite scant on the details. A couple of other sources point more specifically to the 12th century in Europe (though likely it goes back farther). One source whose credibility I have no concept of states:

“In the Middle Ages, books were perceived as exclusive and authoritative. People automatically assumed that whatever was written in a book had to be true,” says Professor Lars Boje…

It’s an interesting idea, that structurally the phenomenon of the book was so rare and complex that by virtue of its existence alone, it was conceived of as containing truth.

“Up until the High Middle Ages in the 12th century, books were surrounded by grave seriousness.

The average person only ever saw books in church, where the priest read from the Bible. Because of this, the written word was generally associated with truth.”

That article alludes to an invisible “fiction contract” between writer and reader, which didn’t emerge as a defined genre distinction until perhaps the 19th century. They do posit a transition point though in the 12th, but don’t back it up with any evidence therein of a “fiction contract.”

“The first straightforward work of fiction was written in the 1170s by the Frenchman Chrétien de Troyes. The book, a story about King Arthur and the Knights of the Round Table, became immensely popular.”

HistoryToday.com – another site whose credibility I cannot account for – seems to agree with pinpointing that genre of Arthurian romance as being linked to the rise of fiction, though it pushes it back a few years to 1155, with Wace’s translation of Monmouth’s History of the Kings of Britain. The whole piece is an excellent read, so I won’t rehash it here, but quote:

“This is the literary paradigm which gives us the novel: access to the unknowable inner lives of others, moving through a world in which their interior experience is as significant as their exterior action.”

They suggest that fiction – in some form like we might recognize it today – had cultural precursor conditions that had to be met before it could arise, namely that the inner lives of people mattered as much as their outward actions.

“It need hardly be said that the society which believes such things, which accedes to – and celebrates – the notion that the inner lives of others are a matter of significance, is a profoundly different society from one that does not. There is an immediately ethical dimension to these developments: once literature is engaged in the (necessarily fictional) representation of interior, individuated selves, who interact with other interior, individuated selves, then moral agency appears in a new light. It is only in the extension of narrative into the unknowable – the minds of others – that a culture engages with the moral responsibility of one individual toward another, rather than with each individual’s separate (and identical) responsibilities to God, or to a king.”

It’s interesting also here to note that A) the King Arthur stories did not originate with Chrétien de Troyes or Geoffrey of Monmouth, and B) many people ever since have continued to believe them to be true, to some extent, even today.

Leaving that all aside, one might also ask regarding my own work: well, isn’t this all just a convoluted apologia for the type of writing I’m doing? Absolutely, so why not articulate my purpose? You can choose to believe me or decide that I am an unreliable narrator. It’s up to you. I respect your agency, but I also want to play on both the reader’s and the author’s (my own) expectations about genres and categories. These are books which take place squarely in the hyperreal, after all – the Uncanny Valley. They intentionally invite these questions, ask you to suspend your disbelief, and then cunningly deconstruct it, only to reconstruct it and smash it again later – and only if you’re listening.

Further, as artists, I believe our role and purpose is to some extent to befuddle convention, and to ask questions that have no easy answers. Yes, this will cause some uneasiness, especially among those accustomed to putting everything into little boxes, whose contents never bleed over or across. Some people might even worry whether it’s “dangerous” to believe in things that aren’t factual. Is it? I think the answer is sometimes, and it depends. But it largely depends on your agency as the reader, and what you do with it in real life.

Consider the case of this purveyor of tall tales, Randy Cramer, who claims with a straight face to have spent 17 years on the planet Mars fighting alien threats to Earth.

He is the very definition of the unreliable narrator, whose labels of fact or fiction likely do not accord with consensus reality on many major points.

The video below is a good, if a bit annoying, take-down of many of Cramer’s claims, though unfortunately it leans, I think, rather too heavily on deconstructing his body language, when his words alone are damning enough (btw, it looks like the George Noory footage comes from an interview Cramer did for Noory’s show Beyond Belief):

The question remains: is this an example of a “dangerous” fiction?

To understand that, I tend to think in terms of risk analysis, in which we might try to estimate (see the sketch after this list):

  1. The specific harm(s)
  2. Their likelihood of occurring
  3. Their severity
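
To make that concrete, here’s a minimal sketch in Python of the classic risk-matrix approach (likelihood × severity). The harms and numbers are invented purely for illustration, not actual estimates:

```python
from dataclasses import dataclass

@dataclass
class Harm:
    description: str
    likelihood: float  # rough probability of the harm occurring, 0.0-1.0
    severity: int      # rough severity on an arbitrary 1-5 scale

def risk_score(harm: Harm) -> float:
    """Classic risk-matrix estimate: likelihood x severity."""
    return harm.likelihood * harm.severity

# Hypothetical harms with invented values, purely for illustration.
harms = [
    Harm("believers donate money to a fictitious med-bed project", 0.3, 4),
    Harm("readers adopt a fringe belief at no direct cost", 0.6, 1),
]

# Rank the harms from highest estimated risk to lowest.
for h in sorted(harms, key=risk_score, reverse=True):
    print(f"{risk_score(h):.1f}  {h.description}")
```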

One definition of harm traces back to Feinberg, and is something like wrongful setbacks of interest. A Stanford philosophy site further elucidates, quoting Feinberg:

Feinberg defines harm as “those states of set-back interest that are the consequence of wrongful acts or omissions by others” (Feinberg 1984).

Is saying you spent 17 years on Mars a “wrongful act or omission?” Perhaps. But as the Stanford article points out, actually defining what is or isn’t in someone’s interests is incredibly squishy.

In Cramer’s case, perhaps it is willfully and wrongfully deceptive to say the things he is saying. Do we have a moral or legal responsibility to always tell the truth? What about when that prevarication leads to financial loss in others?

In Cramer’s case, according to the second video linked above, he does seem to ask people for money – both in funding creation of a supposedly holographic bio-medical bed which can regrow limbs, and in the form of online psionics courses and one-on-one consultations.

But is it wrongful if the buyers/donors have agency, and the ability to reasonably evaluate his claims on their own?

Wikipedia’s common-language definition of fraud seems like it could apply here:

“…fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right.”

Is Cramer a fraud? Is he a liar? I wondered here if Cramer might have a defamation case against the YouTube author referenced above, who calls him a pathological liar. But last time I checked, truth is an absolute defense against defamation claims. That is, the commonly accepted truth we agree on as a society – more or less – is that Mars is uninhabited, and there is no Secret Space program, etc. So if it went to court, it seems like the defamation claim would not have a leg to stand on.

Of course, it’s *possible* it’s all truth, and what we call consensus reality is based on a massive set of lies itself that is very different from ‘actual’ reality. But that’s not how courts work.

What if Cramer included disclaimers like you might see on tarot card boxes or other similar novelty items, “For entertainment purposes only”? It depends what authority we’re trying to appeal to here: a court of law, the court of public opinion, or one reader’s experience of a particular work. Each of those might see the matter in a different light, depending on their viewpoint.

In my case, I include disclaimers regarding the inclusion of AI generated elements. I leave it up to the reader to try to determine A) which parts, and B) what the implications of AI content even are. Should they be trusted?

My position, and the one which I espouse throughout, is that – for now – AI is an unreliable narrator, making it about on par with human authors in that regard. Are the fictions it produces “dangerous?” Must we label them “fictions” and point a damning finger at their non-human source?

In some ways, my books are both an indictment of and celebration of AI authorial tools, and even full-on AI authorship (which I think we’re some ways away from still). To know their dangers, we must probe them, and expose them thoughtfully. We must see them as they are – as both authors and readers – warts and all. And decide what we will do with the risks and harms they may pose, and how we can balance all that with an enduring belief and valorisation of human agency.

Because if we can’t trust people to make up their own minds about things they read, we run the real risk of one of the biggest and most dangerous fictions of all – that we would be better off relying on someone else to tell us what’s ‘safe’ and therefore good, and trust them implicitly to keep away anything deemed ‘dangerous’ by the authority in whom we have invested this awesome power.

Layered hypertexts (Semiotics)

Following on from my recent look at LLMs (large language models) as being potentially something predicted by postmodernists, I wanted to add another layer onto that.

Let’s dive right in with this original older definition of “hypertext” within the context of semiotics, via Wikipedia:

“Hypertext, in semiotics, is a text which alludes to, derives from, or relates to an earlier work or hypotext. For example, James Joyce’s Ulysses could be regarded as one of the many hypertexts deriving from Homer’s Odyssey…”

It continues on with some more relevant info:

The word was defined by the French theorist Gérard Genette as follows: “Hypertextuality refers to any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary.” So, a hypertext derives from hypotext(s) through a process which Genette calls transformation, in which text B “evokes” text A without necessarily mentioning it directly.

Compare with the related term, intertextuality:

“Intertextuality is the shaping of a text’s meaning by another text, either through deliberate compositional strategies such as quotation, allusion, calque, plagiarism, translation, pastiche or parody, or by interconnections between similar or related works perceived by an audience or reader of the text.”

Speaking of plagiarism, I’ve made somewhat extensive use of a plagiarism/copyright scanning tool called Copyleaks. The tool is decent for what it is, and the basic report format that it outputs for scanned text looks like this:

So while this tool is intended for busting people’s chops for potentially trying to pass off the work of others as their own, the window that it shows us into intertextuality and the original sense of hypertext is quite an interesting one.

We can see here specifically:

  • Passages within a text that appear elsewhere in the company’s databases, and the original source of those passages
  • Passages which appear to have been slightly modified (probably to pass plagiarism checkers like this)
  • Some other bits and bobs, but those are the major ones

I find “plagiarism” as a concept to be somewhat of a bore. But looking at this as a way to analyze and split apart texts into their component layers and references suddenly makes this whole thing seem a lot more interesting. It allows for a type of forensic “x-ray” analysis of texts, and a peek into the hidden underlying hypotexts from which it may be composed.
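
To make that “x-ray” idea concrete, here’s a toy sketch of the simplest kind of passage-matching such a scanner might do: find word n-grams shared between a text and a candidate hypotext. This is emphatically not how Copyleaks actually works (I have no visibility into their pipeline); it’s just an illustration:

```python
def shared_ngrams(text_a: str, text_b: str, n: int = 5) -> set:
    """Return word n-grams appearing in both texts -- a toy hypotext detector."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(text_a) & ngrams(text_b)

hypotext = "sing to me of the man muse the man of twists and turns"
hypertext = "a novel about the man of twists and turns driven time and again off course"
for gram in shared_ngrams(hypotext, hypertext):
    print(" ".join(gram))  # prints the overlapping five-word passages
```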

The whole thing also calls to mind another tangential type of forensic x-ray analysis for documents, something we see in the form of a GitHub diff, which tracks revisions to a document.

This is not the most thrilling example of a GitHub diff ever, but it’s one I have on hand related to Quatria:

It’s easy enough to see here the difference in a simple file, though diffs can become quite complex as well. Both this and the original semiotic notion of hypertexts (as exposed through plagiarism checkers) seem like useful avenues to explore in terms of how we might want to visualize AI attribution in a text.
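
On the diff side, Python’s standard difflib can reproduce the familiar GitHub-style +/- view well enough for a sketch; the two “revisions” below are invented:

```python
import difflib

old = """Quatria was first proposed as a lost civilization.
Its existence remains disputed.""".splitlines()

new = """Quatria was first proposed as a lost antarctic civilization.
Its existence remains hotly disputed.
See also: the Piri Reis map.""".splitlines()

# unified_diff emits the same +/- line format that GitHub renders graphically.
for line in difflib.unified_diff(old, new, fromfile="quatria_v1.txt",
                                 tofile="quatria_v2.txt", lineterm=""):
    print(line)
```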

This AI Life Interview

I’m happy with how this interview with This AI Life came out. I hope it sheds some much needed light on the work that I’ve been doing with Lost Books.

Big thanks to the team over there for collaborating on this piece!

How I use AI: An Ethical Inquiry

Partnership on AI just released a preliminary framework around responsible practices for synthetic media, and in Section 3 for Creators, they included something I thought was interesting. They filed it under transparency, being up front about…

How you think about the ethical use of technology and use restrictions (e.g., through a published, accessible policy, on your website, or in posts about your work) and consult these guidelines before creating synthetic media.

I personally don’t think having a rigid formal policy is going to be a perfect match for artistic creations (things evolve, norms change, etc.), but the idea of just having a conversation comes from a well-intentioned place, and simply makes for a more complete discussion of one’s work, whether you’re using AI or other technologies.

It also seems to potentially plug into this action point from MIT’s Co-Creation Studio around deepfakes and satire. The idea of “reclaiming” labeling as a positive thing:

Think of labeling and disclosure of how media was made as an opportunity in contemporary media creation, not a stigmatizing indication of misinformation/disinformation.

I covered a lot of this ground recently in my interview with Joanna Penn, and This AI Life, so I thought it would make sense to encapsulate the highlights of my thinking as well in written form. Consider this me doing a trial run of PAI’s suggested framework from a creator’s perspective as a “user.”

Before going further though, I want to add a slight disclaimer: I am an artist, not an ethicist. My work speaks about ethics and harms related especially to AI technologies, but it is meant to be provocative and in some cases (mildly) transgressive. It is supposed to be edgy, pulpy, trashy, and “punk” in its way.

That said, here are a couple of lists of things I try to actively do and not do, that to me relate to harms & mitigation, etc. There are probably others I am forgetting, but I will add to this as I think about it.

Do:

  • Include information about the presence of AI-generated content
  • Raise awareness about the reliability & safety issues around AI by doing interviews, writing articles, blog posts, etc.
  • Contribute to the development of AI safety standards and best practices
  • Encourage speculation, questioning, critical analysis, and debunking of my work
  • Make use of satire and parody

Don’t:

  • Create works which disparage specific people, or discriminate or encourage hate or violence against groups of people
  • Use the names of artists in prompts, such that work is generated in their signature style
  • Undertake projects with unacceptable levels of risk

Reflections

There are a few sections of the PAI framework that seem a bit challenging as someone new to all of this discussion, applying the lens that I am.

Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.

I think I covered this in a few places now, the Decoder piece maybe, the France 24 interview… In short: I want to encourage speculation, ambiguity, uncertainty; that’s hyperreality, that’s the uncanny valley. As an artist, that’s what’s exciting about these technologies, that they break or blend boundaries, or ignore them altogether. And like it or not, that’s the world we’re heading into as a massively splintered choose-your-own-reality hypersociety.

Yes, I think it’s necessary that all these industry standardization initiatives be developed, but I guess I’m also interested in Plan B, C, D, or, in short: when the SHTF. I guess my vision is distorted because I’ve seen so much of the SHTF working in the field that I have. But someone has to handle it when everything inevitably goes wrong, after all, because that’s reality + humanity.

From PAI’s document, this is another one I still have a hard time squaring with satire & parody:

Disclose when the media you have created or introduced includes synthetic elements especially when failure to know about synthesis changes the way the content is perceived.

If you’ve read the Onion’s Amicus Brief, it persuasively (in my mind, as a satirist, anyway) argues that satire should not be labeled, because its whole purpose is to inhabit a rhetorical form, which it then proceeds to explode – turning the assumptions that lead there inside out. It’s revelatory in that sense. Or at least it can be.

So in my case, I walk the line on the above recommendation. I include statements in my books explaining that there are aspects which may have been artificially generated. I don’t say which ones, or – so far – label the text inline for AI attribution (though if the tools existed to reliably do so, I might). I want there to be easter eggs, rabbit holes, and blind alleys. Because I want to encourage people to explore and speculate, to open up, not shut down. I want readers and viewers to engage with their own impressions, understanding, and agency, and examine their assumptions about the hyperreal line between reality and fiction, AI and human. And I want them to talk about it, and engage others on these same problems, to find meaning together – even if it’s different from the one I might have intended.

It’s a delicate balance, I know; a dance. I don’t pretend to be a master at it, just a would-be practitioner, a dancer. I’m going to get it wrong; I’m going to make missteps. I didn’t come to this planet to be some perfect paragon of something or other; I just came here to be human like all the rest of us. As an artist, that’s all I aim to be, and over time the expression of that will change. This is my expression of it through my art, in this moment.

Hyperreality As Analytical Framework

I have spent a great deal of time lately analyzing documents put out by Partnership on AI regarding Synthetic Media & Safety, and by WITNESS & the MIT Co-Creation Studio around deepfakes + satire. There’s a lot of interesting stuff in both, which I’ve written about recently already, but I wanted to share some of my own possibly unique perspective as someone who has, let’s say, been working both sides of the aisle – both as a satirist, and someone working in online safety.

Hyperreality Lens

First, I’m applying hyperreality as my lens. You.com/chat gave me a serviceable definition of hyperreality, which is mostly paraphrased from the Wikipedia article, it seems:

Hyperreality is a concept used to describe a state in which what is real and what is simulated or projected is indistinguishable. It is a state in which reality and fantasy have become so blended together that it is impossible to tell them apart. Hyperreality is often used to describe the world of digital media, virtual reality, and augmented reality, where the boundaries between what is real and what is simulated have become blurred.

Maybe it’s just me, but this feels like a useful starting point because it speaks to shades of grey (and endless blending) as being the natural state of things nowadays. It’s now a ‘feature not a bug’ of our information ecosystems. And even though Truth might still be singular, its faces now are many. We need new ways to talk about and understand it.

AI is like water

And on top of that, I think this quote from the CEO of Midjourney about the nature of AI as being like ‘water’ is very useful:

Right now, people totally misunderstand what AI is. They see it as a tiger. A tiger is dangerous. It might eat me. It’s an adversary. And there’s danger in water, too — you can drown in it — but the danger of a flowing river of water is very different to the danger of a tiger. Water is dangerous, yes, but you can also swim in it, you can make boats, you can dam it and make electricity. Water is dangerous, but it’s also a driver of civilization, and we are better off as humans who know how to live with and work with water. It’s an opportunity. It has no will, it has no spite, and yes, you can drown in it, but that doesn’t mean we should ban water.

Water is also for the most part ubiquitous (except I guess during droughts & in deserts, etc.), as AI soon will be. It will be included in or able to be plugged into everything in the coming years.

Lingua Franca

Thinking of it that way, we need a new language to talk about these phenomena which will, as Jack Clark aptly pointed out, lead to “reality collapse.” That is, we need a new lingua franca, and I suspect that we have that in the concept of hyperreality; we just need to draw it out a little into a more comprehensive analytical framework.

Dimensionality

One thing I’ve observed in other analyses of the conversations emerging around AI generated content and allied phenomena is that there is a bit of reduction happening. Possibly too much. It appears to me that most discussions usually center around a very limited set of axes to describe what’s happening:

  • Real vs. fake
  • Serious vs. satire
  • Harmful vs. responsible
  • Labeled vs. unlabeled

Certainly those form the core of the conversation for a reason; they are important. But alone they give an incomplete picture of a complex thing.

Speaking as someone who has had to do the dirty work of a lot of practical detection and enforcement around questionable content, I think what we need is what might be called in machine learning a “higher-dimensional” space to do our analysis. That is, we need more axes on our graphs, because applying low-dimensional frameworks appears to be throwing out too much important information, and risks collapsing together items which are fundamentally different and require different responses.

It’s interesting, once we open up this can of worms, that a higher-dimensional approach actually corresponds quite closely to the so-called “latent space” which is so fundamental to machine learning. A simple definition:

Formally, a latent space is defined as an abstract multi-dimensional space that encodes a meaningful internal representation of externally observed events. Samples that are similar in the external world are positioned close to each other in the latent space.

In ML, according to my understanding (I had to ask ChatGPT a lot of ELI5 questions to get this straight): for items in a dataset, we have characteristics, each of which is a feature. The set of features that describes an item is its feature vector. Each feature corresponds to a dimension, which is sort of a measurement of the presence and quantity of a given feature. So a higher-dimensional space uses more dimensions (to measure features of items), and a lower-dimensional space attempts to translate down to fewer dimensions while still remaining adequately descriptive for the task at hand.
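
Here’s my lay understanding sketched in code, with made-up features and values: each artifact becomes a feature vector, and “similar items positioned close to each other” just means small distances between vectors:

```python
import numpy as np

# Invented features, each one a dimension (scored on a 0-4 scale, say).
features = ["fictionality", "ambiguity", "ai_generated"]

# Each artifact's feature vector is one point in this 3-dimensional space.
artifacts = {
    "tall_tale_video":  np.array([3.0, 4.0, 0.0]),
    "conspiratopia":    np.array([4.0, 3.0, 2.0]),
    "news_article":     np.array([0.0, 1.0, 0.0]),
}

# Euclidean distance as a crude proxy for similarity.
a = artifacts["tall_tale_video"]
print(np.linalg.norm(a - artifacts["conspiratopia"]))  # ~2.4: relatively close
print(np.linalg.norm(a - artifacts["news_article"]))   # ~4.2: farther apart
```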

In my mind, anyway, it seems altogether appropriate to adopt the language and concepts of machine learning to analyze phenomena which include generative AI – which is really usually just machine learning. It seems to fit more completely than applying other older models, but maybe that’s just me…

Higher-dimensional analysis of hyperreality artifacts

So, what does any of that mean? To me, it means we simply need more features, more dimensions that we are measuring for. More axes in our graph. I spent some time today trying to come up with more comprehensive characteristics of hyperreality artifacts, and maxed out at around 23 or so pairs of antonyms which we might try to map to any given item under analysis.

However, when I was trying to depict that many visually, it quickly became apparent that having that many items was quite difficult to show clearly in pictorial form. So I ended up reducing it to 12 pairs of antonyms, or basically 24 features, each of which corresponds to a dimension, which may itself have a range of values.

Here is my provisional visualization:

And the pairs or axes that I applied in the above go like this:

  • Fiction / Non-fiction
  • Fact (Objective) / Opinion (Subjective)
  • Cohesive / Fragmentary
  • Clear / Ambiguous
  • Singular / Multiple
  • Static / Dynamic
  • Ephemeral / Persistent
  • Physical / Virtual
  • Harmful / Beneficial
  • Human-made / AI generated
  • Shared / Private
  • Serious / Comedic 

From my exercise in coming up with this list, I realize that the items included above as axes are not the end-all be-all here. It’s not meant to be comprehensive & other items may become useful for specific types of analysis. In fact, in coming up with even this list, I realized how fraught this kind of list is, and how many holes and how much wiggle room there is in it. But I wanted to come up with something that was broadly descriptive above and beyond what I’ve seen anywhere else.

Graphing Values

What’s the benefit of visualizing it like this? Well, having a chart helps us situate artifacts within the landscape of hyperreality; it lets us make maps. I wasn’t familiar with them before trying to understand how to represent high-dimensional sets visually, but there’s something called a radar graph or spider graph which is useful in this context.

I found a pretty handy site for making radar graphs here, and plugged my data labels (features) into it. Then, for each one, I invented a value between 0 and 4, which would correspond to the range of the dimension. Here’s how two different sets of values look, mapped to my graphic:

Now, these are just random values I entered to give a flavor of what two different theoretical artifacts might look like. I’m not really a “math” guy, per se, but it becomes clear right away once you start visualizing these with dummy values that you could start to make useful and meaningful comparisons between artifacts under analysis – provided you have common criteria you’re applying to generate scores.
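
If you’d rather script it than use a website, matplotlib can draw the same kind of radar graph. The axis labels below are one pole of each of my twelve pairs, and the values are dummy scores on the 0-4 scale, just like in my examples:

```python
import numpy as np
import matplotlib.pyplot as plt

axes = ["Fiction", "Fact", "Cohesive", "Clear", "Singular", "Static",
        "Ephemeral", "Physical", "Harmful", "Human-made", "Shared", "Serious"]
values = [4, 1, 2, 0, 3, 1, 2, 0, 1, 3, 2, 4]  # dummy scores, 0-4 scale

# One angle per axis; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(axes), endpoint=False).tolist()
values_closed = values + values[:1]
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(axes)
ax.set_ylim(0, 4)
plt.show()
```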

Criteria & Scoring

So the way you would generate real scores would be: first, decide on the features/dimensions you want to study within your dataset. Then, come up with criteria that are observable in the data, and are as objective as possible. You should not have to guess for things like this, and if you are guessing a lot, your scores are probably not going to be especially meaningful. You want scoring to be repeatable and consistent, so that you can make accurate comparisons across diverse kinds of artifacts, and group them accordingly. A simple way to score would just be with a 0 for “none” and a 1 for “some.” Beyond that, you could have higher numbers for the degree to which a given feature is observable in an artifact. So in the examples above, 1 could represent “a little” and 4 would be “a whole lot.”
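
One way to keep scoring repeatable is to pin each value to observable criteria rather than gut feeling. A toy rubric for the “Human-made / AI generated” dimension might look like the following; the criteria are placeholders of my own invention, not a worked-out standard:

```python
def score_ai_generated(artifact: dict) -> int:
    """Score the 'AI generated' feature on a 0-4 scale from observable facts."""
    if artifact.get("discloses_ai") and artifact.get("fully_generated"):
        return 4  # "a whole lot": disclosed as entirely machine-generated
    if artifact.get("discloses_ai"):
        return 2  # middling: disclosed AI elements, extent unspecified
    if artifact.get("detector_flagged"):
        return 1  # "a little": only indirect evidence from a detector
    return 0      # "none": nothing observable

print(score_ai_generated({"discloses_ai": True}))  # -> 2
```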

Taking Action

Within an enforcement context – or any kind of active response, really (like for example, fact checking) – once you’ve got objective, measurable criteria that allow you to sort artifacts into groups, you can then assign each group a treatment, mitigation, or intervention – in other words, an action to take. This is usually done based on risk: likelihood, severity of harm, etc.
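
In code, that last step can be as simple as a lookup from risk band to treatment. The bands and actions below are invented examples, not anyone’s actual policy:

```python
def assign_action(risk: float) -> str:
    """Map a risk score (e.g. likelihood x severity) to an invented treatment tier."""
    if risk >= 3.0:
        return "escalate for human review / possible removal"
    if risk >= 1.0:
        return "label, and link to a fact check"
    return "no action; keep monitoring"

for r in (0.4, 1.2, 3.6):
    print(r, "->", assign_action(r))
```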


Anyway, I hope this gives some useful tools and mental models for people who are working in this space to apply in actual practice. Hopefully, it opens the conversation up significantly more than just trying to decide narrowly if something is real or fake, serious or satire, and getting stuck in the narrower outcomes those labels seem to point us towards.

Hyperreality is here to stay – we might as well make it work for us!

Midjourney Controllers

I’ve been playing around lately in Midjourney with doing visualizations of hand-held controllers one might use to control Midjourney.

That Imgur set has about 20 items; here are just a couple of examples:

I like that the controls of these things are somewhat inscrutable, and seem in some cases outlandishly complex.

But then, I think you would probably need a somewhat complex controller to be able to meaningfully navigate high dimensional spaces, such as the latent spaces of image diffusion models.

Most controllers only work for movement in three spatial dimensions, and then include some other custom controls. But in machine learning data sets, you may have hundreds or thousands of dimensions: often, single pixels are treated as dimensions.
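
To see why the dimension counts explode so quickly, consider a small grayscale image handled the way many ML datasets handle it, with every pixel becoming one dimension of the feature vector:

```python
import numpy as np

image = np.random.rand(28, 28)  # a 28x28 grayscale image, like an MNIST digit
vector = image.flatten()        # flattened: one dimension per pixel
print(vector.shape)             # (784,) -- a single point in 784-dimensional space
```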

How then could you design a controller that would work in a fluid and flexible way in multi-dimensional spaces? The images Midjourney produces in queries around this topic seem almost tantalizingly comprehensible, but just outside one’s ability to grasp.

I took this basic concept, of a hardware and software package that can enable users to traverse latent spaces as though they were VR, and produced a pulp sci-fi book from it, using Claude & Midjourney. It’s called Impossible Geometries. The Claude flash fictions are pretty fun, and easy to direct while you’re producing them (though in a lot of ways, Claude is lacking compared to GPT-4, and ChatGPT in general). And there is a superset of other imagined high-dimension controllers in it (which I call the Prism LightScope), along with visualizations of living within the latent space, etc.

I’ve actually spent quite a while taking these concepts into ChatGPT w/ v4, and it’s been helping me meaningfully describe potentially real products that could be built in this space. There are probably a lot of computational hurdles to visualizing and manipulating the contents of latent space in real time, but again, it feels tantalizingly… possible.

Artist’s Statement

Hyperrealities and Uncanny Valleys: Provoking Dialogue through AI-Generated Imagery

(a.k.a. “I dream in image generations”)

Written with help from ChatGPT (v4)


Having worked for years in content moderation, my artistic practice delves into issues around the realm of AI image and text generation: embarking on (mid)journeys that explore the boundaries of photorealistic and hyperrealistic depictions of unreal events. By exploring speculative alternative realities, I construct visual narratives that portray political and other figures in hypothetical scenarios that have never unfolded within our objective consensus reality. Through this process, I aim to evoke curiosity and critical thinking among viewers, encouraging them to closely examine and question the information presented to them.

Humor and absurdity intertwine with my work, questioning the dichotomy between good and bad content and challenging societal limits through satire and parody. I firmly believe that the power of image making extends beyond aesthetics, potentially serving as a catalyst for profound societal discussions and inspiring real political change. Thus, I advocate for inclusive access and development, as well as encouraging diverse ownership and direction of these tools.

Within the context of postmodernism, my work ventures into the realm of hyperreality, blurring the boundaries between the real and the imagined. Deliberately engaging with the concept of the uncanny valley, AI-generated images can elicit a sense of uncertainty and discomfort. This unease prompts us to question the authenticity and reliability of what we perceive, particularly the information presented by those in positions of authority. It encourages a deeper examination of the potential consequences and implications of AI-generated content.

As a society, we must recognize that the challenges posed by AI-generated content and potential misuses cannot be solved by a single technical fix, nor should we fall into the trap of relying on AI to tell us what is “real.” Instead, AI-generated content represents an ongoing condition that requires continuous adaptation and proactive efforts to address issues and improve. Rather than advocating for the cessation of these technologies, I believe that navigating this landscape requires continuous critical thinking, adaptation, and open dialogue. This approach fosters a more informed and discerning approach to the consumption and interpretation of AI-generated images, text, and other content.

My work aims to provoke thought, spark conversations, and contribute to the larger discourse surrounding the impact and implications of AI-generated content in our increasingly interconnected world. Everyone needs to have their say in this conversation so we can develop a more nuanced understanding of the role and influence of AI in our society.

