Tim Boucher

Questionable content, possibly linked

Latent space *is* the metaverse.

Homogeneity in AI art

Was reading this piece earlier by Haley Nahman about blandness and sameness in Netflix's visual production quality. It raises a lot of interesting points about over-reliance on tools and techniques – and the deadening that can happen to art forms when they're driven more by speed, efficiency, and profitability than by being necessarily "good."

Generative AI is going to have the effect of blasting this problem into the stratosphere. And we haven’t yet seen AI visual production tools hit these kinds of mass markets. It’s still largely tinkerers and weirdos with GPUs in their basements creating things.

But the things this diverse band of weirdos tends to create are disappointingly homogeneous. Midjourney, in my opinion, is the worst for this. While the images tend to be very well done and often beautiful, I always look at them and think they look "Midjourney."

Stable Diffusion isn’t too far behind either though. If you go to a site like PlaygroundAI.com’s homepage at any given moment, how many of the featured images are “sexy ladies” that basically all look the same? At this moment, in the first 15, I would say 11 of them fall into that category. That’s pretty much the norm.

If we’re seeing this massive democratizing effect because of generative AI, and all these millions or billions of imaginations are suddenly being unleashed, why is it that we all just end up making totally bland T&A shots?

I think there are at least two parts to it (probably more): the tools are predisposed to certain things, and bland mid-distance busts and portraits are among their strengths; hand-in-hand with that, the users are predisposed to certain things.

My hunch is also that there is a shift with generative AI where being a “creator” is only as important as being able to create the thing you want to consume. The act of creation with these tools is one and the same as consuming it.

Have definitely felt that slightly magical effect a few times using verb.ai in particular, where writing with it truly becomes collaborative, and the storytelling unfolds the way it does because I am the first audience. My invocation causes it to take the shape that it does for me. Yours is different. (Or should be, if our tools don’t force us into homogeneity…)

The process of writing with AI-assisted tools becomes one of assembly, and unfolding. There is a premise, or there is an intention, or there is an improvisation. Invocations. Call & response. Which parts of the conversation make the final cut? Can there ever truly be a final cut?

I digress, but want to return to the intent of attempting to burst the bubble of sameness… If the latent space is nearly infinite, why are we all clustering in this one small corner of it? What else is out there to explore and be uncovered in those wild territories?

A friend said something to the effect of seeing other people’s AI prompt results is a little like hearing other people tell you about their dreams. There may be elements that are interesting or resonate on occasion, but in a lot of cases, there’s kind of a “huh, weird” response. And, that’s about it. Cause what can you do… It’s someone else’s dream, and the pieces don’t fit for the hearer the way they do to the dreamer.

So adapting that into AI-storytelling, well, your results (and mileage) may vary. The insane awesome results you personally get in an AI text or image generator that seem exciting enough to you to share with friends or on social media, may have that sort of /shrug effect on other people. There’s something highly personalized about it, probably about the process and context of inquiry which surrounds it. It’s hard to translate that effect to secondary audiences after oneself, without adding some other layer(s) of meaning and context.

It's part of what I don't like about Midjourney: your experience as an artist becomes tied up with the UX of Discord as a product. The experience of viewing generative AI images on PlaygroundAI or on Reddit is also flattening. It's an experience of you as a user on a platform, having your imagination constrained to fit the contours and invisible social guardrails and incentives that drive our behaviors in those environments. It's art for likes and upvotes, and accepting those as proxy measures and replacements for actual goodness and meaning.

That is the real cause of the crushing sameness. But it is a sameness that is utterly alienating, instead of reassuring. The cruel embrace of the technological corners we have painted ourselves into. All of it illusions. Because now, all things are possible. All planets, all dimensions, all times can be envisioned & visited. Latent space is infinite. Live a little.

The Celestial Books (so far)

Just a quick collection of some of the sky- or space-themed AI-assisted Lost Books so far:

AIMark: AI Attribution in Markdown (Proposal)

Preface: Have been collaborating with ChatGPT to come up with a way to meaningfully mark up AI-assisted texts to show which parts were generated by AI and which were written by a human. The below is a cobbled-together series of replies from ChatGPT based on my inputs about how we could do this using custom markdown. It might not be the best solution ever, as I'm not a technical person. Hopefully it can be a conversation starter at least!

AIMark in Markdown (Simplified explanation)

by ChatGPT (with light human edits)

AIMark is a proposed method for using custom markdown formatting to clearly differentiate between contributions made by human authors and AI in AI-assisted texts. The idea is to use specific symbols, such as the percent sign % and forward slash /, to indicate the source of the text.

Short text

For example, AI-generated text could be indicated with a percent sign at the beginning and end of the text, like this:

%This text was generated by AI%

Human-generated text could be indicated with a forward slash at the beginning and end of the text, like this:

/This is human-generated text./

Additionally, attributes such as the name of the AI model or the name of the human author can be added to the markdown by placing them in parentheses at the beginning or end of the text passage, like this:

%This text is AI generated(AI model X,3.2,OpenAI)% 

%(AI model X,3.2,OpenAI)This text is AI generated%

/A human wrote this(John Doe)/ 

/(John Doe)A human wrote this/

To indicate a nested human edit of an AI-generation, it could be something like this, where ~ means strikethrough (deletion) and ^ means insertion. In the example below, we can see that the deletion and insertion happen inside of the /, indicating the action was taken by a human.

%An AI generated a big block of text and it was /~good~^bad^/%

Under this proposed system, it would be possible to also embed global author definitions in the document, like this:

%% Model: AI model X, Version: 3.2, Source: OpenAI %%

// Name: John Doe //

By using this proposed custom markdown formatting, it will be easier for readers to understand the contributions made by both human and AI authors in AI-assisted texts in compatible display environments.
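As a rough illustration of how a compatible display environment might pick these spans apart, here is a minimal Python sketch (hypothetical, not part of the proposal itself) that extracts short-form AIMark spans and their optional attribute groups with a regular expression:

```python
import re

# Short-form AIMark: %...% marks AI text, /.../ marks human text, with an
# optional (attr,attr,...) group at the start or end of the span.
# This regex is a sketch and does not handle every edge case (e.g. nesting).
SPAN = re.compile(r'([%/])(?:\(([^)]*)\))?(.*?)(?:\(([^)]*)\))?\1')

def parse_aimark(text):
    """Return a list of (author, attributes, body) tuples found in text."""
    spans = []
    for mark, pre, body, post in SPAN.findall(text):
        author = "AI" if mark == "%" else "human"
        raw = pre or post
        attrs = [a.strip() for a in raw.split(",")] if raw else []
        spans.append((author, attrs, body))
    return spans
```

For example, `parse_aimark("%This text is AI generated(AI model X,3.2,OpenAI)%")` would yield a single AI span with the attributes `["AI model X", "3.2", "OpenAI"]`.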

Longer text blocks

In addition to the short format versions discussed earlier, AIMark also proposes a longer format version that uses square brackets [ ] in combination with the / or % signs to indicate AI-generated text and human-generated text respectively.

For example, a longer block of AI-generated text could be indicated with square brackets surrounding the text and percent sign at the beginning, like this:

[%This is a longer block of AI-generated text]

Likewise, a longer block of human-generated text could be indicated with square brackets surrounding the text and forward slash at the beginning, like this:

[/This is a longer block of human-written text]

This longer format version allows the markdown to indicate the author type (AI or human) without the need to close the symbol, as is done for shorter text passages when square brackets are not used.

Overall, the use of this longer format version of AIMark allows for clear and easy differentiation between AI-generated text and human-generated text, even in longer blocks of text, making it an efficient and user-friendly method for AI attribution.

In conclusion, AI attribution is a valuable practice because it helps to promote trust and transparency in the use of AI-generated content. By clearly identifying and labeling AI-generated and AI-assisted content from the moment of its creation, readers and viewers can better understand the source of the information they are consuming, and make their own meaningful choices about its reliability.

What is AI attribution?

AI Attribution

AI attribution is the process of identifying and meaningfully labeling content that has been generated in whole or in part by an artificial intelligence (AI) system. This can include things like news articles, social media posts, and research papers, as well as many other formats of both online and offline content.

The goal of AI attribution is to make it clear to readers or viewers whether the content (in its entirety, or elements of it) was created by an automated tool and not a human, as well as to give other meaningful data about the specific provenance of the article. End users can then make their own informed decisions about the content they consume. (For example, some users might choose to disallow all AI-generated content altogether, or only allow content from approved AI information providers.)

There are at least three levels on which AI attribution might occur in online publishing systems such as blogs or social media.

  1. The profile level: the social media account or blog identifies itself as being a publisher of AI-generated or AI-assisted content (i.e., hybrid human-AI content). In ideal circumstances, this self-identification would also carry through to any byline on articles published by the account.
  2. The post or article level: the content, whether a blog or social media post, or other type of online published article informs the viewer at a high level that AI-generated or AI-assisted elements are present. This might occur in different ways depending on the product or media context, including in the byline, as a subtitle, some kind of tag or other prominently displayed visual element (clearly-defined badge or icon), etc.
  3. The inline granular level: the article’s contents themselves are marked up to indicate which parts were input by a human, and which were generated by an AI tool. We explored the experimental method called “AIMark” to apply custom markup or markdown to hybrid AI-assisted texts in more detail here.
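To make the three levels concrete, here is one hypothetical way they could be carried as structured metadata alongside a post. All of the field names below are invented for illustration; the proposal does not define a schema:

```python
# Sketch only: one possible metadata shape for the three attribution levels.
post_attribution = {
    "profile": {"publishes_ai_content": True},              # level 1: account self-identifies
    "post": {"ai_assisted": True, "badge": "AI-assisted"},  # level 2: post-level notice
    "inline": {"format": "AIMark"},                         # level 3: granular markup in body
}

def attribution_levels(meta):
    """List which of the three attribution levels a post declares."""
    return [level for level in ("profile", "post", "inline") if level in meta]
```

A platform could then render a badge when the "post" level is present, and enable an inline view when the "inline" level is present.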

This is a big topic, which we will continue to explore in subsequent posts.

Hadith chains of authority

As a follow-on to the discussion of the obelus as a way to mark potentially incorrect passages included in Homeric texts…


“…refers to what most Muslims and the mainstream schools of Islamic thought, believe to be a record of the words, actions, and the silent approval of the Islamic prophet Muhammad as transmitted through chains of narrators. In other words, the ḥadīth are transmitted reports attributed to what Muhammad said and did.[5]”

And same source:

“Unlike the Quran, not all Muslims believe that hadith accounts (or at least not all hadith accounts) are divine revelation. Different collections of hadīth would come to differentiate the different branches of the Islamic faith.[15] Some Muslims believe that Islamic guidance should be based on the Quran only, thus rejecting the authority of hadith;…”


“Centuries ago, Arabia did not have schools for formal education. Students went to masters who taught them. Upon completion of their study, they received ijazah (permission) which acted as the certification of their education. A graduate then acted as a master having his own students or disciples. This chain of masters was known as silsila or lineage. Somewhat analogous to the modern situation where degrees are only accepted from recognized universities, the certification of a master having a verifiable chain of masters was the only criteria which accorded legitimacy…”

Same source:

“For Muslims, the Chain of Authenticity is an important way to ascertain the validity of a saying of Muhammad (also known as a Hadith). The Chain of Authenticity relates the chain of people who have heard and repeated the saying of Muhammad through the generations, until that particular Hadith was written down (Ali bin Abi Talib said that ‘Aisha said that the Prophet Muhammad said…). A similar idea appears in Sufism in regards to the lineage and teachings of Sufi masters and students. This string of master to student is called a silsila, literally meaning “chain”. The focus of the silsila like the Chain of Authenticity is to trace the lineage of a Sufi order to Muhammad through his Companions: Ali bin Abi Talib (the primary link between all Sufi orders and Muhammad) and Abu Bakr (only the Naaqshbandiyyah order). When a Sufi order can be traced back to Muhammad through one Ali or Abu Bakr, the lineage is called the Silsilat al-Dhahab (dhahab meaning gold) or the “Chain of Gold” (Golden Chain).”

The thing that strikes me here is that both this and the obelus in Homeric literature serve the function of attempting to retain accurate oral (and later written) traditions across generations. Also authenticity. Some concepts buried in here that might positively impact modern attempts at human versus AI attribution in texts.

Obelus / Obelism


“The obelus is believed to have been invented by the Homeric scholar Zenodotus, as one of a system of editorial symbols. They marked questionable or corrupt words or passages in manuscripts of the Homeric epics.”


“The obelos symbol (see obelus) gets its name from the spit, or sharp end of a lance in ancient Greek. An obelos was placed by editors on the margins of manuscripts, especially in Homer, to indicate lines that may not have been written by Homer. The system was developed by Aristarchus and notably used later by Origen in his Hexapla. Origen marked spurious words with an opening obelos and a closing metobelos (“end of obelus”).”

Re: Origen, also see Hexapla:

…because of discrepancies found in [other] manuscripts had given occasion for doubt, we have evaluated on the basis of these other editions, and marked with an obelus those places that were missing in the Hebrew text […] while others have added the asterisk sign where it was apparent that the lessons were not found in the Septuagint…

Why an attribution markup for AI-assisted texts?

Interestingly, there is an old XML dialect called Artificial Intelligence Markup Language (AIML). Wikipedia says it originated between 1995 and 2002 or so. Its purpose seems to have been defining patterns to use in chatbots – basically question-answer pairs. The pattern they use as an example:

  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is Michael N.S Evanious.</template>
  </category>

It’s not at all what I imagine the use case would be for something like AIMark (which is really still just an experiment), or for other attempts at a microformat for distinguishing contributions in hybrid AI/human authorship texts.

The use case there is:

  • I am an author who uses AI-assisted editing tools
  • I want to track the contributions made by me (human) and the AI
  • I want to communicate these attributions somehow in a meaningful way to readers.
  • My readers might get some benefit from that.

It's based on the assumption that this is something readers might want – though the benefits are unexplored/unknown. The first layer of user want is simple: sometimes you might want to include or exclude AI-assisted content in your search results or social feeds.

It's difficult, and becomes costly at scale, to accurately detect AI-generated content, because of the variety of methods and technologies available and the speed with which they are improving.

So what if you could reduce the load somewhat by having content creators voluntarily disclose not just the presence of AI content at a high level, but at an inline granular level as well, showing line by line or word by word which parts were AI generations and which were human contributions? (It could even be "signed" by the creators.)

The core concept reduces to these simple examples:

% indicates AI generated text content%

/indicates human written content/

It might be awkward to have these symbols scattered through long text, so presumably you could have a presentational mode, where the text just reads straight. And then you could have an "X-ray" mode, where this analytical/forensic layer about the construction of the information could be exposed.
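A presentational mode could be as simple as stripping the markers before display. Here is a toy Python sketch, assuming the short-form syntax proposed above; note it naively removes any parenthesized group or slash, which a real implementation would need to handle more carefully:

```python
import re

def presentational(text):
    """Strip AIMark markers so the text reads straight (presentational mode)."""
    # Drop attribute groups like (AI model X,3.2,OpenAI). Limitation: this
    # would also remove ordinary parenthesized text in the passage.
    text = re.sub(r'\([^)]*\)', '', text)
    # Drop the span markers themselves: [%, [/, %, /, and closing ].
    return re.sub(r'\[?[%/]|\]', '', text)
```

The X-ray mode would be the inverse: keep (and render) the markers and attributes as an overlay, rather than discarding them.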

You could also, for example, use the X-ray layer to identify claims, and link to fact-checks, etc. Especially if there are known biases around certain AI models tending to fabricate or suppress information around certain topics or blindspots.

Anyway, I got interrupted writing this… Will do a continuation of it another time, but wanted to jot down all the above while still fresh in my head.

AIMark: Custom markup & markdown for differentiating human versus AI contributions in AI-assisted texts

AIMark is a tentative exploration for a method to use a custom markup or markdown formatting on AI-assisted texts, where the contributions made by human author(s) versus AI(s) are clearly and meaningfully differentiated.

Contributions to the project:

  • You.com/chat came up with the AIMark name and basic concept based on my inputs (it wrongly claimed that this whole thing exists already, but based on my research it does not – perhaps it was seeing the future?)
  • ChatGPT helped me work through the code examples for the custom markup and markdown elements, as well as walking through the thought process collaboratively.

Example Markup Usage (custom)

  <def author type="AI" model="AI model X" version="3.2" source="OpenAI" />
  <def author type="human" name="John Doe" />

  <p author="AI">
    The quick brown fox jumps over the lazy dog<del author="human">.</del>
    <ins author="human">, but sometimes the fox is not quick enough.</ins>
  </p>

In the above example, it is proposed to use the <def> tag to set up definitions for authors, whether AI or human, along with any pertinent details about them that may apply globally within a document.

Then each element is intended to be clearly marked as to which author produced or modified it. <del> is used for deletions, and <ins> for insertions. In the example above, the AI tool generated the text "The quick brown fox jumps over the lazy dog." and then the human deleted the period and added the clause ", but sometimes the fox is not quick enough."

Each of those elements could also take inline attributes if that is preferable, such as: <del author="human" name="John Doe" date="2023-01-23">
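Since the proposed markup is HTML-like, a display tool could walk it with an ordinary HTML parser. The following Python sketch (illustrative only, not a reference implementation) uses the standard library's `html.parser` to list who wrote, deleted, or inserted each piece of text:

```python
from html.parser import HTMLParser

class AttributionParser(HTMLParser):
    """Collect (author, action, text) tuples from the proposed custom markup."""

    def __init__(self):
        super().__init__()
        self.authors = ["unknown"]   # stack of author contexts
        self.actions = ["wrote"]     # stack of actions (wrote/deleted/inserted)
        self.contributions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("p", "del", "ins"):
            # Inherit the enclosing author if none is given on this element.
            self.authors.append(attrs.get("author", self.authors[-1]))
            self.actions.append({"del": "deleted", "ins": "inserted"}.get(tag, "wrote"))

    def handle_endtag(self, tag):
        if tag in ("p", "del", "ins") and len(self.authors) > 1:
            self.authors.pop()
            self.actions.pop()

    def handle_data(self, data):
        if data.strip():
            self.contributions.append((self.authors[-1], self.actions[-1], data))

# Feed it the example from above.
parser = AttributionParser()
parser.feed('<p author="AI">The quick brown fox jumps over the lazy dog'
            '<del author="human">.</del>'
            '<ins author="human">, but sometimes the fox is not quick enough.</ins></p>')
```

After feeding, `parser.contributions` records the AI-written sentence, the human deletion of the period, and the human insertion of the new clause.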

Example Markdown Usage (custom)

In markdown, we’re adopting the convention that the % percent sign indicates AI and the forward slash / indicates human content.

So in its simplest form a line of AI generated text would look like:

%This text was generated by AI%

Where the text starts and ends with %. Likewise, for human text, the usage would be like:

/This is human-generated text./

If you want to add attributes to either, you could possibly (?) do it in () either at the beginning or end of the text passage. And attributes could be named pairs or just the value. So like:

AI Attribution Short:

%This text is AI generated(AI model X,3.2,OpenAI)%
%(AI model X,3.2,OpenAI)This text is AI generated%


%This text is AI generated(model:AI model X,version:3.2,source:OpenAI)%
%(model:AI model X,version:3.2,source:OpenAI)This text is AI generated%

Human Attribution Short:

/A human wrote this(John Doe)/
/(John Doe)A human wrote this/


/A human wrote this(name:John Doe)/
/(name:John Doe)A human wrote this/

Then there could be a form for text that is longer than a short sentence, which could be written as:

[%This is a longer block of AI-generated text]

[/This is a longer block of human-written text]

In the above example, use of [] would mean you do not need to close the other marks.

To indicate a nested human edit of an AI-generation, it could be something like this, where ~ means strikethrough (deletion) and ^ means insertion.

[%An AI generated a big block of text and it was [/~good][/^bad]

Or using the short non-bracketed form of both:

%An AI generated a big block of text and it was /~good~^bad^/%

So in the above examples, a human deleted the word good, and inserted the word bad into the AI content generation.
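Resolving such a passage down to its final reading text could work roughly like this Python sketch, which assumes the short non-bracketed form above (apply the `~...~` deletions, keep the `^...^` insertions, then drop the span markers):

```python
import re

def resolve_edits(text):
    """Apply human edits in a short-form AIMark passage to get the final text."""
    text = re.sub(r'~[^~]*~', '', text)          # deletions: remove struck text
    text = re.sub(r'\^([^^]*)\^', r'\1', text)   # insertions: keep text, drop carets
    return re.sub(r'[%/]', '', text)             # drop the AI/human span markers
```

Applied to the example above, the deleted word "good" disappears and the inserted word "bad" remains in the final text.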

Global author definitions in such a custom markdown document could be something like:

Model: AI model X
Version: 3.2
Source: OpenAI

Name: John Doe

Hypertext fiction

This is a poorly defined concept overall, but I think a fun one worthy of another look:

Hypertext fiction is characterized by networked nodes of text making up a fictional story. There are often several options in each node that direct where the reader can go next. Unlike traditional fiction, the reader is not constrained to reading the fiction from start to end, depending on the choices they make. In this sense, it is similar to an encyclopaedia, with the reader reading a node and then choosing a link to follow.

See also: networked narrative, transmedia storytelling
