Questionable content, possibly linked

Category: Other Page 77 of 177

Hypertext fiction

This is a poorly defined concept overall, but I think a fun one worthy of another look:

Hypertext fiction is characterized by networked nodes of text making up a fictional story. Each node often offers several options that direct where the reader can go next. Unlike traditional fiction, the reader is not constrained to reading from start to end; the path through the story depends on the choices they make. In this sense, it is similar to an encyclopaedia: the reader reads a node and then chooses a link to follow.

See also: networked narrative, transmedia storytelling

Towards a markup microformat for AI-assisted texts

I’m not necessarily a believer that all AI-assisted text or images categorically need to be labelled as such, everywhere, all the time. It can be a good idea in some cases, however. And in cases where it is desirable (putting aside the “why” for now), what might be the ideal methods to do it?

I’ve seen a few approaches so far, but not many:

  • Don’t disclose the use of AI assistance in a text
  • Disclose it at the beginning or end of a document
  • Disclose it and estimate the relative ratio of human-to-AI content in a given work (the so-called HI2AI number)

Each may have its appropriate use, but none of them offer any pointers as to how to visually display, within a text, which elements were human-generated and which were machine-generated. They give us no specific advice or tools for marking up a document in a way that might be useful or meaningful to human (and other) readers.

Here are a couple of initial ideas to differentiate within a text:

  • Human-written text is displayed (or made available) in one color, and AI generations in another
  • Same thing, but with highlighting, instead of or in addition to text color

Here is the first as a mockup, using what happens natively in Verb.ai:

Black marks the human generations, and green the AI generations. Colors seem useful here because they are not too disruptive, and they seem to add a sort of dimensionality to the text.

One issue arises where displays are restricted to black and white, or in print applications. So not relying on color alone is probably one constraint we should design with in mind.

Also: what about color blindness? What about the fact that different sites or services might use different color schemes?
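One way to sidestep color blindness and site-by-site color schemes is to keep the provenance information semantic rather than visual, and let each site or reader apply its own styling. Here is a minimal hypothetical sketch in Python (the `gen-human`/`gen-ai` class names are made up for illustration, not an existing standard):

```python
import html

# Hypothetical sketch: emit HTML spans with class names so that each site
# can apply its own colors, highlighting, or print-safe styles on top.
def to_html(segments):
    parts = []
    for author, text in segments:  # author is "human" or "ai"
        parts.append(f'<span class="gen-{author}">{html.escape(text)}</span>')
    return "".join(parts)

doc = [
    ("human", "This is the beginning of a "),
    ("ai", "story."),
]
print(to_html(doc))
# → <span class="gen-human">This is the beginning of a </span><span class="gen-ai">story.</span>
```

A stylesheet could then choose color, highlighting, or a print-safe decoration per class, without the markup itself having to change.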

Getting back to our list of ideas:

  • Use superscript characters like ᴬᴵ and ᴴ to mark transitions within a passage.

Let’s try that out with our quick example passage, but here in WordPress:


ᴴThis is the beginning of a ᴬᴵstory. John had spent the last three years ᴴeatingᴬᴵ, sleeping, and going to work. His job was not particularly interesting or rewarding, but it paid the bills ᴴand that would do for now.


I don’t know about you, but that seems awkward to me to read. It is less disruptive than I was expecting, but seems like it would quickly grow tiresome. Plus it would be cumbersome to manually insert and track those transitions in a long text.

We could do the same thing, but sub in emojis to indicate speaker:


🗣️This is the beginning of a 🤖story. John had spent the last three years 🗣️eating🤖, sleeping, and going to work. His job was not particularly interesting or rewarding, but it paid the bills 🗣️and that would do for now.


Also somewhat disruptive to my eyes as a reader. And it doesn’t display the para-textual subtlety, the sort of “shades of subtext” that I feel the color example at the top does.

Let’s go back to the list:

  • Using different fonts to indicate speakers: it could work, but there are often applications where only one font is available.
  • Which leads to: text decoration, such as underlines (and overlines), italics, or bold. Those may be possible in some cases, but these styles and decorations often carry other established meanings, such as emphasis. So perhaps it would be better to colonize an unoccupied typographic space…
  • In-line containers like different types of brackets: {}, [], (), ||, /, \, ’, ”, `, ~. This might be an option, but again there’s the possibility of collision with other semantic or semiotic uses. Let’s run our text again, using / to indicate human and \ to indicate AI.

/This is the beginning of a \story. John had spent the last three years /eating\, sleeping, and going to work. His job was not particularly interesting or rewarding, but it paid the bills /and that would do for now.


It’s somewhat awkward, but for me, anyway, it’s a little less intrusive than some of the other experiments here. One thing that’s cool is that human-gens are under a sort of /little house\ typographically. And AI-gens are almost like they are \outdoors/.

Another variation below, where | indicates a start and stop of AI-gen text, and human text is not indicated.


This is the beginning of a |story. John had spent the last three years |eating|, sleeping, and going to work. His job was not particularly interesting or rewarding, but it paid the bills |and that would do for now.


I think that’s confusing, because you can’t easily tell what is inside and what is outside the | spans, since they start and end with the same sign.
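Since manually inserting and tracking these transitions would be cumbersome in a long text, all of the marker schemes tried so far could instead be generated from the same labeled segments. A quick hypothetical sketch:

```python
# Hypothetical sketch: render the same labeled segments under any of the
# marker schemes tried above, so the transition markers never have to be
# inserted or tracked by hand.
SCHEMES = {
    "superscript": {"human": "ᴴ", "ai": "ᴬᴵ"},
    "emoji": {"human": "🗣️", "ai": "🤖"},
    "slashes": {"human": "/", "ai": "\\"},
}

def render(segments, scheme):
    markers = SCHEMES[scheme]
    return "".join(markers[author] + text for author, text in segments)

doc = [("human", "This is the beginning of a "), ("ai", "story.")]
print(render(doc, "slashes"))  # → /This is the beginning of a \story.
```

Storing the labels separately from the display convention also means a reader (or site) could switch schemes, or fall back to a print-safe one, without touching the text itself.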

Anyway, there are certainly other possibilities and I’m sure we will see them flourish over the coming months and years as these technologies become more widespread in writing tools.

Will continue to explore others in follow-ups, time permitting.

Collaborating With AI: Wen Wormhole

Thought this was a decent short read, with some cool pics from an AI artist. I wish their art were available to look at off Instagram, since it’s hard to see without an account.

“To me, it resembles a very similar process and method to producing images as an art director. They typically don’t execute the technical aspects but decide and inform a team about the required aesthetic of several aspects. The same relationship can be created with an AI. Deciding on the general concept, casting, location, hair and makeup, lighting, colour grading, the fashion and having to put it into context with other images. The AI is basically the team that returns an image based on all these aspects. Hence I find it collaborative in a certain weird way, however, I found the social aspect of collaboration always very important and working alone with software is devoid of any of that.”

Should AI chatbots be allowed to self-identify as I/me?

Sentience is a complicated topic. I won’t pretend to understand all its vagaries, especially once we try to apply that to AI with a still imperfect understanding of both domains…

Selfhood, likewise, is a squishy thing to try to define. But maybe it becomes a little less squishy because, somewhere along the spectrum, it eventually becomes an embodied thing. I have a body, therefore it’s somewhat difficult to argue I don’t have some sort of “self.” I might not know or be able to define exactly what that self is, or what exactly my “having” it might mean. But it’s a somewhat tangible thing I can at least point to as being connected to my self.

Humans have an incredible ability to empathetically project self-hood and sentience onto other things, though we might not all agree which are which. Regardless, it’s a thing we do somewhat automatically because of our fundamental makeup and nature as embodied selves in the not-only-virtual world, but also the so-called “Real World” whatever that even is.

My question here then is: might it be either a good idea or a not-good idea – or more likely some mix of both – that we encourage/allow/program our tools to assert their own selfhood as part of their fundamental UX?

If you ask ChatGPT basically any question, it will respond with a slew of “I” and “me” and “mine.” Which, okay, for convenience, I get it. It’s a chat experience. Chatters assume they have selves, and that the other party also has some sort of self. But, what would it look like if we removed the assumption that this is a desirable state? How might that change the communication styles of these chatbots?

How would the program have to identify itself and communicate about its capacities?

Maybe something like this: instead of saying “I’m just a large language model…”, it would have to say something like, “The program is a large language model…”

Would this have meaningful impacts on users of the technology? More specifically, would it give people more of a mental shield against perhaps overly identifying a program as having some kind of equivalency with a human being?

I’m not saying the Butlerian Jihad is coming, but you never know…

AI literacy

This seems like as good a definition as any of AI literacy:

“a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace”. (Long and Magerko, 2020)

I like that there is an emphasis here on collaboration. This is the Way.

A few generative AI tools worth exploring

  • Verb.ai
    • Currently my favorite text generation tool, very fluid use, in early beta; supports slash commands while writing like /describe and /continue
  • PlaygroundAI.com
    • Up to 1K free Stable Diffusion images per day & paid plans
    • You could also try Mage for NSFW generations
    • Or try DreamStudio if for some reason you’d rather pay Stability.ai to use Stable Diffusion. (Playground’s UI is better, though they have some deal-breaker privacy problems they haven’t solved yet, imo)
  • TextSynth
    • Free text generation using several different open source models (e.g., GPT-J, GPT-NeoX, Fairseq). You input sample text and it tries to continue it. Experiment with “temperature” setting (higher numbers yield weirder results)
  • OpenAI
    • ChatGPT
      • Certainly still interesting, but heavily restricted in its abilities now compared to how it was when first released. I’m not sure it’s the right product direction as far as “safety” features for all users, even if the underlying model often yields some very good quality content.
    • Dall-E 2
      • I still love it because it gives a completely different look from Stable Diffusion’s model versions, and I feel its use of light and color is often more beautiful than SD. My favorite look usually includes in my prompt found photo expired film dramatic lighting
      • You can pay them directly through OpenAI (i.e., help pay back Microsoft), or buy credits through PlaygroundAI which accesses the Dall-E API. This has the benefit of being I think slightly cheaper per generation than paying OpenAI (which is weird, tbh), and no watermark, which otherwise OpenAI includes by default.
      • Mage tells me they are rolling out Dall-E support as well over the coming weeks. I will give them a try when they do, as there are a number of things about Playground I find cumbersome in their UI.
  • Character.ai
    • Create chat bots by entering a character description; the results are much more creative and fun than ChatGPT seems to be capable of. It’s much more willing to “play along.”
  • You.com/chat
    • Similar abilities to ChatGPT, possibly slightly lower quality output, but without all the refusals & disclaimers that ChatGPT seems to be leaning more and more into.
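The “temperature” setting mentioned for TextSynth above can be sketched generically (this is a standard illustration of temperature sampling, not TextSynth’s actual implementation): the model’s next-token scores are divided by the temperature before the softmax, so higher values flatten the distribution and rarer, weirder tokens get picked more often.

```python
import math

# Generic illustration of temperature sampling: dividing the logits by the
# temperature before the softmax sharpens (T < 1) or flattens (T > 1) the
# probability distribution over candidate next tokens.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Two candidate next tokens, the first strongly preferred by the model:
low = softmax_with_temperature([2.0, 0.0], 0.5)   # sharper: ~[0.98, 0.02]
high = softmax_with_temperature([2.0, 0.0], 2.0)  # flatter: ~[0.73, 0.27]
```

At low temperature the favored token dominates almost completely; at high temperature the underdog token gets a real chance, which is why higher numbers yield weirder continuations.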

AI Self-Expression

Via Jeff Jarvis’ very worthwhile Medium post on the potentials of alternative modes of literacy co-evolving with AI:

In the end, writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.

Hermit crabbing

We’re in kind of a golden era right now of AI content generation apps coming online that are free during at least a trial period. A lot of them switch to paid after, but I kind of find it fun to only use them temporarily, like some sort of migrant AI user or a hermit crab, temporarily putting on a shell built by some other animal and discarding it when it no longer fits. Plus there’s just so much experimentation and competition right now, and nobody has really got the UX nailed down enough to be the top interface provider. Some of the underlying models are obviously more compelling than others, but in the vast majority of cases (outside perhaps ChatGPT for now), you can access them through a variety of third parties. Eventually, I guess, this and the paid access regimes will change, but for now, here’s to the free rider. (That said, I do pay for Dall-E.)

New categories of addiction

There’s a lot to like in this article, but wanted to save this bit:

“…entirely new categories of powerful addictions are available to us that weren’t available to our ancestors, and it should be uncontroversial to be worried about those effects somewhat. AI is going to let us invent even more.”

France 24 Follow-Up

France 24 recently published a piece about certain claims circulating online that pyramids and a lost civilization have been re-discovered in Antarctica. It’s an interesting piece, and worth a read, and the fact-checker responsible did a decent job with assembling the pieces of the puzzle as presented.

There are some elements which got left on the cutting room floor, though, that in my mind are worth retaining in some form, if not in the primary reporting. That’s a good use for blogs, in my opinion.

I’ll excerpt below the ones that, to me, had the most important concepts in them, which may not have been covered in the finished piece.


FRANCE 24: How did the idea of this project come to you?

My prediction is that AI-generated content will soon make up the vast majority of content on the web. I’ve seen that many of the efforts platforms, non-profits, and government organizations make around media literacy and counter-disinformation/counter-radicalization move quite slowly and use very conventional approaches to educating audiences. They tend to focus only on fact-checking and debunking, but in my experience, that approach does little to reach the audiences who would most benefit from it. Many of those people are skeptical of mainstream news and fact-checking, so those efforts sometimes backfire, and only ever reach people who already understand the problem.

As an artist, I’m able to experiment outside of the constraints of what those organizations can do or are willing to do, and reach people directly where they are consuming this type of content. 

Which behaviors or reflections do you aim to encourage with these publications?

I want to pique people’s curiosity, and encourage people to be suspicious. It is intended to be provocative. I want them to look at the AI-generated material that I create, and identify what about it seems off or wrong for them, and then have them share that with one another in an open and honest discussion, and explain their reasoning to other people. Many times these messages have a more profound impact when they are delivered by one’s peers or community than they would from an external authority figure.

Which advice would you give for anyone to detect AI-generated content in general?

This technology is moving so fast that any general advice I could give about detecting AI-generated content is either only going to apply in very narrow cases where a specific tool(s) is being used, or it will only be meaningful advice for a very short period of time until the tools reach their next iteration. For example, some AI image generators still have a very difficult time depicting human hands correctly, but as you can see online, people are working round the clock to improve them. Within a year or two, all of those indicators will either change to something else, or they will go away altogether, and much of the AI-generated content will become indistinguishable from the real thing.

I would say in general that your best bet is always going to be classic detection methods: does an image or other artifact appear “too good to be true?” Then chances are, it probably is not true. Doing reverse image searches is also still a useful method to uncover the original source of something you find on social media. Though, that said, it is also trivial to create a false trail of provenance for an image, so that it seems to have come from a real person, when in fact it was generated.

What do you think about these publications that use your content? Did you expect these pictures to be used to spread disinformation? Was it what you expected / what you wanted, or do you consider it more like collateral damage?

There is a branch of medicine called nuclear medicine, or radioisotope scanning, that is relevant here. In it, a small amount of a radioactive substance, called a radioisotope, is introduced into the body. The radioisotope emits radiation that can be detected by special cameras or scanners, which produce images of the inside of the body. These images can be used to trace the path of the radioisotope and identify any abnormalities or problems.

I see other people using the content that I create in much the same way: where the images and stories I create are like the radioisotopes, and when they get picked up by other people in an information ecosystem, we can use them as a way of tracking how and where these things propagate online. We can see the paths they take, and even the way the artifacts mutate through re-transmission – such as them getting cropped by other people, or the original disclaimers about the presence of AI-generated content being removed.

All of these activities by other people interacting with the content become an important part of the story of the information, what it does, and how it functions structurally.

Concretely, how do you expect your publications to raise awareness about disinformation?

One thing I see in the subcultures around conspiracy theories is that, while those groups are usually highly skeptical of information coming from mainstream media, they sometimes are not critical at all of much more questionable information sources. This opens those audiences up to a great deal of manipulation and propaganda by any number of state, political, or other actors, so long as the content matches the preconceived notions, ideology, or emotions of the target audience.

By sharing the kinds of artifacts I’m creating, I hope to find another pathway, one that might be able to playfully trigger or provoke the critical faculties of those audiences, so that they understand just how easy and widespread this type of manipulation can be, and how to guard against it. 

I also want to start mentally preparing people for what happens when much more skilled and better funded actors start using these tools to manipulate at scale. I’ve only spent a few hundred dollars generating images for 50 books. What happens when someone invests millions of dollars into AI-generated disinformation? We simply won’t be able to fact-check all of it.

We need to help train peoples’ discernment and agency to find the truth for themselves. And we need to find creative and interesting ways to reach people where they are, and understand what motivates them to seek out this type of content, instead of just calling them “crazy” and writing them off forever. We can’t afford to leave vast segments of the population behind, just because we think the things they believe might be wrong. We need to find ways for all of us to be able to move forward together, by talking out loud about the things that matter most to us.

