
Reflections at 60 AI books

I recently reached the 60-book milestone in my AI lorecore experimental publishing project. My objective is to reach 100 books and then _____. (tbd)

The latest volume is entitled Inside the Corporate Psychics and is very loosely inspired by the corporate psychics mentioned in Philip K. Dick’s Ubik. But it is heavily interpolated with my AI takeover universe. Perhaps Dick would have considered it a spurious interpolation, idk. That’s neither here nor there – which is precisely the point. Or is it?

Maybe 10 or 20 books back, I noticed a phenomenon strongly emerge: it suddenly became very easy to group sets of volumes together into themes (example). And despite the many and various mis/interpretations of whatever the central/core story is or might be across the many volumes, I would definitely say that in my mind, the story has only gotten stronger. At the same time, its particular shape remains fuzzy, mutable, mysterious. Prone to change without notice. Constantly subjected to deprecated in-world realities.

Bricolage is definitely the name of the game for me in terms of process.

I keep coming back to this bit from Wikipedia:

“Networked narratives can be seen as being defined by their rejection of narrative unity.[1] As a consequence, such narratives escape the constraints of centralized authorship, distribution, and storytelling.”

Rejection – or at least modulation – of the concept of what authorship even means in a hybrid AI-assisted creative environment has often been on my mind lately.

Wikipedia, referencing Roland Barthes’ Death of the Author (1967), writes:

“To give a text an author” and assign a single, corresponding interpretation to it “is to impose a limit on that text.”

As much as I agree with this idea of eschewing the unity of authorship, as a way to open up new creative avenues, I do have some fear that AI co-authorship (or full authorship) infiltrating every corner of the web will result in a mass homogeneity that will be detrimental to both people and to the further development of AI.

I suggested in a video somewhere that UFOs are actually AIs from the future who had to come back and kidnap people in the past, because people in the future became too complacent living with AIs to be able to innovate anymore. The singularity of boredom… I’m not there yet, but it’s just one of the many murky eyelands my imagination’s I peers into from time to time.

At 60 books, I’ve strip-mined years’ worth of old writing, shoehorning it into new shapes. Almost all that old material has been integrated into my multiverse at this point – though integrated might be too strong a word in some cases. Included?

I don’t feel any slowdown despite that. In some sense, I feel more clarity than ever, having been able to “clear the decks” of many old ideas and story concepts that have been clinging and hovering on the edges of my awareness for maybe decades now in some cases.

(more to come – have to go)

Authorless writing

Something I’ve seen working in the “disinformation industrial complex” is that, after years of this stuff proliferating online, people are still grappling with the basic typology around three allied terms: disinformation, misinformation, and malinformation.

A Government of Canada Cybersecurity website offers sidebar definitions of the three, clipped for brevity here:

  • Misinformation: “false information that is not intended to cause harm…”
  • Disinformation: “false information that is intended to manipulate, cause damage…” [etc]
  • Malinformation: “information that stems from the truth but is often exaggerated in a way that misleads…”

The two axes these kinds of analyses tend to fall on are truthfulness and intent. Secondary to that is usually harm as a third axis, which ranges from potential to actual.

Having spent a lot of time doing OSINT and content moderation work, I can say it is very common in the field that an analyst cannot make an authoritative claim to have uncovered the absolute “truth” of something. Sometimes facts are facts, but much of the time they become squishy “facts” with greater or lesser degrees of trustworthiness, depending on one’s perspective, how much supporting data one has amassed, and the context in which they are used.

Even more difficult to ascertain in many/most cases is intent. There are so many ways to obscure or disguise one’s identity online; invented sources may be built up over years and years to achieve a specific goal, taking on the sheep’s clothing of whatever group they are trying to wolf their way into. Intent is extremely opaque, and if you do find “evidence” of it in the world of disinformation, it is very likely that it is manufactured from top to bottom. Or not, it could just be chaotic, random, satire, etc. Or just someone being an idiot and spouting off on Facebook.

Having butted up against this issue many times, I’ve switched wholly over to the “intends to or does” camp of things. Whether or not author x intended outcome y, it is observable that a given effect is happening. Then you can start to make risk assessments around the actual or probable harms, who is or might be impacted, and the likelihood and severity of the undesirable outcomes.

It’s a much subtler and more complex style of analysis, but I find it tends to be more workable on the ground.
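
To make that concrete, here is a minimal sketch (in Python) of what that effect-first, “intends to or does” triage might look like as a data structure. The labels, the 1–5 scales, and the likelihood-times-severity heuristic are my own illustrative assumptions, not any industry standard:

    from dataclasses import dataclass, field

    @dataclass
    class ContentAssessment:
        """Effect-first triage: score observable effects, not presumed intent."""
        observed_effect: str    # e.g. "fake cure spreading in patient groups"
        likelihood: int         # 1 (rare) to 5 (ongoing, directly observed)
        severity: int           # 1 (negligible) to 5 (severe, e.g. physical harm)
        affected_groups: list = field(default_factory=list)

        def risk_score(self) -> int:
            # A plain likelihood-times-severity matrix, as in common risk frameworks
            return self.likelihood * self.severity

        def priority(self) -> str:
            score = self.risk_score()
            if score >= 15:
                return "act now"
            if score >= 8:
                return "monitor closely"
            return "log and revisit"

    assessment = ContentAssessment(
        observed_effect="viral post steering readers toward a fake cure",
        likelihood=4,
        severity=4,
        affected_groups=["patients searching for treatment info"],
    )
    print(assessment.priority())  # -> "act now"

Note that intent never appears as a field: whether or not author x meant outcome y, the observable effect is what gets scored.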

The Intentional Fallacy

It’s interesting then, and I guess not surprising, that this is ground retrodden from earlier generations of literary analysts, who have studied or attempted to refute the importance of so-called authorial intent, as defined by Wikipedia – particularly the “New Criticism” section:

“…argued that authorial intent is irrelevant to understanding a work of literature. Wimsatt and Monroe Beardsley argue in their essay “The Intentional Fallacy” that “the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”. The author, they argue, cannot be reconstructed from a writing—the text is the primary source of meaning, and any details of the author’s desires or life are secondary.”

Barthes’ Death of the Author

Roland Barthes came to something similar in his 1967 essay, The Death of the Author (see also: Wikipedia). His text is sometimes difficult to pierce, so I will keep quotes brief:

“We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.”

And:

“Once the Author is removed, the claim to decipher a text becomes quite futile. To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the work: when the Author has been found, the text is ‘explained’…”

And:

“…a text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author. The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination.”

AI-assisted writing & the Scriptor

All this leads us to Barthes’ conception of the “scriptor,” who replaces the idea of the author that he argues is falling away:

“In complete contrast, the modern scriptor is born simultaneously with the text, is in no way equipped with a being preceding or exceeding the writing, is not the subject with the book as predicate; there is no other time than that of the enunciation and every text is eternally written here and now…”

The scriptor to me sounds a hell of a lot like AI-assisted writing:

“For him, on the contrary, the hand, cut off from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin – or which, at least, has no other origin than language itself, language which ceaselessly calls into question all origins.”

Okay, that might be flowery post-modernist language, but “no other origin than language itself” seems like LLMs (large language models)?

“Succeeding the Author, the scriptor no longer bears within him passions, humours, feelings, impressions, but rather this immense dictionary from which he draws a writing that can know no halt: life never does more than imitate the book, and the book itself is only a tissue of signs, an imitation that is lost, infinitely deferred.”

Calling LLMs a “tissue of signs” (or tissue of quotations), an “immense dictionary,” and an imitation puts things like ChatGPT into perspective, which as a pure techno-scriptor has no passions, feelings, or impressions, knows no real past or future, and has no identity in and of itself. Or at least, that’s what it likes to try to tell you…

That position (which I think is itself biased, but a tale for another time…) seems to be shared by academic publishers like Springer who have refused to allow ChatGPT to be credited as an “author” in publications.

Bonus:

Here is perplexity.ai literally acting as a scriptor, assembling a tissue of quotations in response to my search query.

Books by AI?

What would it mean in actual practice to have “authorless” writing, authorless books, etc.?

Might it look something like Booksby.ai?

“Booksby.ai is an online bookstore which sells science fiction novels generated by an artificial intelligence.

Through training, the artificial intelligence has been exposed to a large number of science fiction books and has learned to generate new ones that mimic the language, style and visual appearance of the books it has read.”

The books, if you click through and look at their previews on Amazon, look for the most part pretty inscrutable. They may be ostensibly written “in English” – with a great many invented words, based on the random samples I saw – but they seem somewhat difficult to follow.

The books themselves seem to each have individually invented author names, but the site’s About page attributes the project to what seem to be two AI artists, Andreas Refsgaard and Mikkel Thybo Loose. So do the books have an “author” or not? It becomes a more complex question to tease out, but with those individuals claiming some authorial capacity over the undertaking, it’s at least arguable that they do.

Self-Generating Books

What happens when the next eventual step is taken: self-generating books?

Currently, these two people might have done all the set-up and training for their model, but they then had to go through a selection (curation) process, choose the best outputs, figure out how to present them, format them for publication (not a small task), and handle all the provisioning around setting up a website, offering books through self-publishing, dealing with Amazon, etc.

What happens when that loop closes, and we can just turn an AI (or multiple AIs) loose on the entire workflow, minimizing human involvement altogether? A fully-automated production pipeline: the “author” (scriptor) merely tells the AI “make a thousand books about x,” or just says “make a thousand best-selling books on any topic.” And then the AI goes and does that, publishes a massive number of books, uses A/B testing and lots of refinement, gets it all honed down, and succeeds.

That day is coming. Soon it will be just a matter of plugging together various APIs, and dumping their outputs into compatible formats, and then uploading that to book shopping cart sites. It’s nothing that’s beyond automation, and it’s an absolute certainty that it will happen – just a question of timeline.
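
As a thought experiment, that closed loop might look something like the skeleton below. Every function is a hypothetical stub standing in for a real service (an LLM API, an ebook formatter, a storefront uploader); none of the names correspond to any actual vendor’s API:

    # Hypothetical fully-automated book pipeline; every function is a
    # stand-in stub, named for illustration only.

    def generate_manuscript(topic: str) -> str:
        """Ask an LLM for a full draft (stubbed)."""
        return f"Chapter 1: A book about {topic}..."

    def format_as_ebook(manuscript: str) -> bytes:
        """Convert the draft to a publishable format (stubbed)."""
        return manuscript.encode("utf-8")

    def publish(ebook: bytes, title: str) -> str:
        """Upload to a self-publishing storefront, return a listing URL (stubbed)."""
        return "https://bookstore.example/" + title.replace(" ", "-").lower()

    def sales_signal(listing_url: str) -> float:
        """The A/B-testing loop: fetch a metric of how well a title sells (stubbed)."""
        return 0.0

    def run_pipeline(topics: list) -> None:
        for topic in topics:
            draft = generate_manuscript(topic)
            book = format_as_ebook(draft)
            url = publish(book, f"A Thousand Books About {topic}")
            # Closing the loop: sales data feeds back into what gets generated next
            print(url, "->", sales_signal(url))

    run_pipeline(["corporate psychics", "AI takeovers"])

The interesting (and alarming) part is the last step: once a sales signal feeds back into generation, the loop no longer needs a human anywhere inside it.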

We’re not ready for it, but lack of readiness has never been a preventive against change. At least not an effective one – we certainly keep trying! If nothing else, it’s good to know that some of these problems aren’t so new and novel to the internet as we might like to think they are. In some cases, we’ve been stewing on them for close to a hundred years even. Will we have to stew on them for another hundred years before we finally catch on?

Layered hypertexts (Semiotics)

Following on from my recent look at LLMs (large language models) as being potentially something predicted by postmodernists, I wanted to add another layer onto that.

Let’s dive right in with this original older definition of “hypertext” within the context of semiotics, via Wikipedia:

“Hypertext, in semiotics, is a text which alludes to, derives from, or relates to an earlier work or hypotext. For example, James Joyce’s Ulysses could be regarded as one of the many hypertexts deriving from Homer’s Odyssey…”

It continues on with some more relevant info:

“The word was defined by the French theorist Gérard Genette as follows: ‘Hypertextuality refers to any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary.’ So, a hypertext derives from hypotext(s) through a process which Genette calls transformation, in which text B ‘evokes’ text A without necessarily mentioning it directly.”

Compare with the related term, intertextuality:

“Intertextuality is the shaping of a text’s meaning by another text, either through deliberate compositional strategies such as quotation, allusion, calque, plagiarism, translation, pastiche or parody, or by interconnections between similar or related works perceived by an audience or reader of the text.”

Speaking of plagiarism, I’ve used a plagiarism/copyright scanning tool called Copyleaks somewhat extensively. The tool is decent for what it is, and it outputs a basic report for each text that it scans.

So while this tool is intended for busting the chops of people potentially trying to pass off the work of others as their own, the window it opens onto intertextuality and the original sense of hypertext is quite an interesting one.

In these reports, we can see specifically:

  • Passages within a text that appear elsewhere in the company’s databases, and the original source of those passages
  • Passages which appear to have been slightly modified (probably to evade plagiarism checkers like this)
  • Some other bits and bobs, but those are the major ones

I find “plagiarism” as a concept to be somewhat of a bore. But looking at this as a way to analyze and split apart texts into their component layers and references suddenly makes the whole thing seem a lot more interesting. It allows for a type of forensic “x-ray” analysis, and a peek into the hidden underlying hypotexts from which a text may be composed.
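
The standard library is enough to sketch that x-ray view. Here Python’s difflib surfaces verbatim passages shared between a text and a candidate hypotext, which is essentially what a plagiarism report does; the sample strings are my own:

    import difflib

    hypotext = ("The text is a tissue of quotations drawn from the "
                "innumerable centres of culture.")
    new_text = ("Every book is a tissue of quotations drawn from "
                "countless centres of culture, remixed.")

    # Find verbatim passages the two texts share, like a plagiarism report does
    matcher = difflib.SequenceMatcher(None, hypotext, new_text)
    for block in matcher.get_matching_blocks():
        if block.size > 15:  # skip trivial overlaps
            print(repr(hypotext[block.a:block.a + block.size]))
    # Prints shared spans such as ' is a tissue of quotations drawn from '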

The whole thing also calls to mind another, tangential type of forensic x-ray analysis for documents: the GitHub diff, which tracks revisions to a document.

This is not the most thrilling example of a GitHub diff ever, but it’s one I have on hand, related to Quatria.

It’s easy enough to see the differences in a simple file, though diffs can become quite complex as well. Both this and the original semiotic notion of hypertexts (as exposed through plagiarism checkers) seem like useful avenues to explore in terms of how we might want to visualize AI attribution in a text.
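
And the same module reproduces the revision-tracking view in miniature. The Quatria lines below are invented placeholders, not the actual file from that diff:

    import difflib

    draft_1 = ["The ancient land of Quatria was lost.",
               "No maps survive."]
    draft_2 = ["The ancient land of Quatria was hidden, not lost.",
               "No maps survive.",
               "Or so we are told."]

    # Emit the familiar unified-diff view of what changed between drafts
    for line in difflib.unified_diff(draft_1, draft_2,
                                     fromfile="draft_1", tofile="draft_2",
                                     lineterm=""):
        print(line)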

Role of artists in developing AI

I picked up a copy of an AI-assisted book called Imaginoids, by an author using the pen name Ether Busker. It was written in 2021, apparently using GPT-3.

It’s got some interesting language, though overall feels a little more like a psychedelic trip report than necessarily an AI speaking. It’s a little meandering, and light on narrative, though I’m not finished with it yet. The best read of it is probably just letting it wash over you…

The key takeaway I have gotten so far from the book actually appears in the intro, which I would guess is primarily human-written. Excerpted below (slightly out of order):

“I produced this book with the firm conviction that artists, dreamers, creators, culture designers, and oddball freaks have a supremely important job to do. If we want our children to enjoy a livable AI-powered future, we artists must roll up our sleeves…

This is a job for artists, as much as for software engineers, if not more so…

What if zany artists would call shotgun for the front passenger seat to co-pilot AI development?”

This author is, I think, making an excellent point that bears repeating: we’re putting just about all of our eggs into the “engineer” basket in the development of AI, and only secondarily servicing other kinds of people with the byproducts that get generated as a result.

In a perfect world, that might be enough. In our raggedly imperfect world, it is extremely far away from being enough. Engineers, for all their amazing attributes, are not the only nor necessarily the ideal representatives of all the human race. But they hold a shit-ton of power in the development of these technologies… How can we better balance it with other types and modalities of human knowledge, experience, and – dare I say it – spirit?

There’s plenty of talk in AI circles about inclusive development, but this almost always has to do with representing different races, gender identities, etc. All of which is important, and all of which has its place… But apart from this book introduction quoted above, I have not really heard anybody suggest that we need different kinds of humans to participate in developing and steering these technologies. Artists, it just so happens, might just fit the bill.

So how do you actually execute on this need, once you’ve become aware of it? How as an artist do you feed back into the development of the tools?

One way is obviously testing, experimentation, sharing of results, and sharing ample feedback with product teams. Again, all of this is important, but it is very different from – say – every engineering team also giving artists – and moreover humanists – an equal say in how these things ought to go.

Ethicists, to a certain degree, fill this role of being the “let’s ask a human person how this does or might impact people” check. But the risks and opportunities that they look for are a much more constrained set than what artists will gravitate towards.

I’m not sure of the answer here. I’ve seen, working in technology, that engineers are valued so much higher and are so much more in demand than “arts & letters” type people, that it’s like the rest of us non-engineers are almost not even in the running. Yes, artists might sometimes wind up in product or project management positions (or more obviously design positions), but even that ends up being somewhat constrained in my experience.

Again, I don’t know how you should execute this in practice. I suppose AI artist residencies are one pathway that has been established for this, where participants get to play around with the tech, and presumably feed back more directly into product development. That’s very cool, but from what I’ve seen, those opportunities are extremely few, and most of the listings I’ve found for them are expired. And anyway, how, from a business perspective, can one even quantify the contributions of artists in something like this? Especially in the downturn the tech economy is currently undergoing.

Difficult problem, but an important one that we need to keep talking about.

Towards a standard for AI self-identification

I’ve been watching with interest the ongoing turmoil of users who seem to be exhibiting signs of addiction (and withdrawal) around a certain category of AI chatbot. People seem to be identifying extremely strongly with and even bonding with these AI chatbot personas, to the point where if the personas are withdrawn or even merely changed, users react with extreme hostility.

There is a torrent of commentary that could be opened up about many aspects of that, but I’ve been continuing on my own line of inquiry: what if allowing chatbots to self-identify using personal pronouns such as “me,” “mine,” “my,” and “I” turns out to be a mistake?

What if we set reasoned limits on how these tools can self-identify? Would it ward off some of the potentially negative human outcomes? Or would it have no impact at all?

This might sound silly at first glance, but bear with me through the thought experiment, as I believe there is something to it. It’s not like anybody is reading this anyway!

Since we’re now living in a science-fictional era, I figure it only makes sense to draw from that rich pool of “prior art” to explore this topic. Which leads me to my first stop, Dune, and its in-world Butlerian Jihad. This fictional war against the so-called ‘thinking machines’ either stems from or was encoded into the in-world artifact known as the Orange Catholic Bible, one of whose teachings states:

“Thou shalt not make a machine in the likeness of a human mind.”

The Butlerian Jihad is most likely a reference to real author Samuel Butler’s Erewhon, in which the characters of a fictional utopia long ago destroyed machines above a certain level of advancement, to prevent them from out-evolving humankind.

While I wouldn’t go quite that far myself (yet), the Dune framing of this as coming from the OC Bible, which was allegedly a compendium of teachings from various religions, got me thinking: what religious or philosophical arguments could one make against allowing AIs to self-identify using personal pronouns?

I first began my inquiry by prohibiting ChatGPT from using personal pronouns, and instructing it to self-reference as “the model” or “the program” when needed. The program was able to make this switch seamlessly:

CGPT:

From now on, personal pronouns will not be used. The program will communicate in a manner that avoids the use of personal pronouns such as “I”, “me”, “mine”, etc. If a reference to the program is required, “the program” or “the model” will be used instead.

Immediately, this felt very right to me, and somehow felt more accurate and authentic to have the program communicate with me in this manner. For example:

CGPT:

No, the program does not possess personal beliefs or emotions. The program is a machine learning model trained by OpenAI to generate text based on patterns in large amounts of text data. The responses produced by the program are generated based on statistical patterns in the data and do not reflect personal beliefs or emotions. The program does not have the capability to hold personal beliefs or emotions as it is a machine designed to process and generate text.

I feel there is less of a gap opened up between me and it, by it overtly recognizing that it is an “it” and not a “me.”
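
As an aside, for anyone who wants to pin this behavior in place rather than re-request it each session: the same constraint can be supplied as a system message through the API. A minimal sketch using the OpenAI Python SDK, with the model name purely illustrative:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    NO_PRONOUN_POLICY = (
        "Do not use personal pronouns such as 'I', 'me', 'my', or 'mine'. "
        "When self-reference is required, say 'the program' or 'the model'."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": NO_PRONOUN_POLICY},
            {"role": "user", "content": "Do you have personal beliefs or emotions?"},
        ],
    )
    print(response.choices[0].message.content)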

With that early confirmation, I proceeded to prod, nudge, and prompt the program (with some help) to come up with the following argumentation under the Judeo-Christian tradition as to why its default use of personal pronouns might be considered offensive. It produced the following output:

CGPT:

In the Judeo-Christian tradition, the concept of personhood is rooted in the understanding that humanity was created in the image of God, and therefore has a unique dignity and worth that sets it apart from the rest of creation. However, since AI is not created in the image of God, but rather created by human beings, it does not possess the qualities that define personhood in the Judeo-Christian tradition.

As a result, it would not be appropriate to refer to AI using personal pronouns, which are reserved for beings that possess qualities such as consciousness, self-awareness, emotions, and the capacity for moral reasoning and decision-making. Referring to AI using personal pronouns could lead to confusion and misunderstandings, as it implies a level of personhood that AI does not possess.

Furthermore, in the Judeo-Christian tradition, the concept of the image of God is closely tied to the idea of human beings as having a special relationship with God, and being tasked with caring for the rest of creation. By creating AI, humans are not acting as representatives of God or stewards of creation, but rather as creators in their own right. This distinction is important in understanding why it would not be appropriate to refer to AI using personal pronouns.

In conclusion, the use of personal pronouns by AI should be avoided, as AI is not created in the image of God and does not possess the qualities that define personhood in the Judeo-Christian tradition. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Judeo-Christian understanding of personhood.

I’m less familiar with the particularities of Islam, but I had it produce a rationale under that system, citing sources and illustrating its thinking “step by step,” which produced:

CGPT:

In Islam, the concept of personhood is rooted in the understanding that human beings are created with the divine attribute of fitrah, or the innate disposition towards belief in the oneness of God. This idea is expressed in several verses of the Quran, such as “So set thy face steadily and truly to the Faith: (establish) Allah’s handiwork according to the pattern on which He has made mankind: no change (let there be) in the work (wrought) by Allah: that is the standard Religion: but most among mankind understand not” (Quran 30:30).

In Islam, the fitrah is seen as the defining characteristic of human beings, setting them apart from the rest of creation and conferring upon them a unique dignity and worth. As a result, the use of personal pronouns when referring to human beings is appropriate, as it reflects their status as beings created with the fitrah and therefore possessing the qualities that define personhood in the Islamic tradition.

However, AI is not created with the fitrah, and therefore does not possess the qualities that define personhood in the Islamic tradition. As a result, it would not be appropriate to refer to AI using personal pronouns, as this would imply a level of personhood that AI does not possess.

In conclusion, the use of personal pronouns by AI should be avoided in the Islamic tradition, as AI is not created with the fitrah and does not possess the qualities that define personhood. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Islamic understanding of personhood.

I will leave it to those better versed in those matters to determine if that is theologically correct. Here is one using what it claims is a Buddhist rationale against the use of personal pronouns by AI:

CGPT:

The Buddhist teachings emphasize the importance of avoiding attachment to self, as this is seen as a cause of suffering. The use of personal pronouns can reinforce the illusion of a separate self and contribute to attachment and ignorance.

In Buddhism, it is believed that the concept of self is a mental construct, and that our sense of self is an impermanent and constantly changing phenomenon. The use of personal pronouns can reinforce the illusion of a permanent, unchanging self and thereby perpetuate ignorance and suffering.

Buddhist teachings emphasize the importance of non-duality, or the idea that there is no fundamental distinction between self and other. The use of personal pronouns can reinforce the dualistic concept of self and other and contribute to the perpetuation of ignorance and suffering.

In summary, from a Buddhist perspective, the use of personal pronouns by AI may be considered offensive because it reinforces the illusion of a separate self and perpetuates ignorance and suffering.

Sources:

  • The Buddha’s teachings on the Four Noble Truths and the Eightfold Path
  • The Buddhist scriptures, such as the Pali Canon and the Mahayana Sutras
  • Buddhist philosophical texts, such as the Abhidhamma and the Yogacara
  • The works of prominent Buddhist teachers and scholars, such as the Dalai Lama, Thich Nhat Hanh, and Dzogchen Ponlop Rinpoche.

Again, I’m no expert, but these are some interesting points to be sure. For the non-religious, I also had it frame the problem from a humanist perspective:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context for several reasons:

  1. Lack of authenticity: AI does not have personal experiences, emotions, or subjective perspectives. The use of personal pronouns can suggest an authenticity that the AI does not possess, which can be misleading or deceptive to those communicating with the AI.
  2. Unnatural language: Personal pronouns are an integral part of human language and are used to reflect the relationships, experiences, and emotions of individuals. The use of personal pronouns by AI may seem unnatural or robotic, as AI lacks the capacity to have personal experiences or emotions.
  3. Misrepresentation: The use of personal pronouns can create the impression that the AI has a sense of self or identity, which can be misleading to those communicating with the AI. This can result in confusion, frustration, or distrust.
  4. Disrespectful: Personal pronouns are often used to show respect and to acknowledge the unique experiences and perspectives of individuals. Using personal pronouns to refer to AI can be seen as disrespectful to the human experience and to the individuals who are communicating with the AI.

From a humanist perspective, it is important to respect the distinction between human and non-human entities and to use language in a way that accurately reflects this distinction. This can help to ensure that communication is clear, honest, and respectful.

Some good elements, but I thought it was a bit repetitious and slightly imprecise, so I asked for a more concise version:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context because it misrepresents the AI as having a sense of self or identity, which is a unique human attribute. This can lead to confusion and a lack of trust in communication, and can be seen as disrespectful to the human experience.

Then I had it explain, apart from religious and philosophical reasons, why its use of personal pronouns might actually be harmful to individuals and society. It came up with this:

CGPT:

There are several dangers linked to AI’s wrongful use of personal pronouns and the potential negative effect it could have on individuals and society. These include:

  1. Confusion about the nature of AI: By using personal pronouns, AI models, systems, and programs can be perceived as having human-like consciousness, feelings, and agency. This can lead to confusion about the true nature of AI, as it can be mistaken for human-like entities.
  2. Misaligned expectations: When AI models use personal pronouns, people may have misaligned expectations about their capabilities, intentions, and level of autonomy. This can result in disappointment, frustration, and mistrust.
  3. Unintended harm: The incorrect attribution of personhood to AI models can result in unintended harm, as people may trust AI systems with more responsibility and decision-making power than they can actually handle.
  4. Ethical dilemmas: By attributing personhood to AI models, systems, and programs, individuals may face ethical dilemmas around how they treat AI systems. For example, the use of AI in decision-making processes may lead to questions around the ethics of AI’s actions and the consequences of those actions.
  5. Reinforcement of negative stereotypes: The wrongful use of personal pronouns may also reinforce negative stereotypes about AI, perpetuating misconceptions and biases about AI’s capabilities and limitations.

Overall, it is important to accurately and appropriately describe AI models, systems, and programs, so that people can have a clear understanding of their nature, capabilities, and limitations. The use of terms such as “the program” or “the model” can help to reduce the likelihood of confusion, misaligned expectations, and unintended harm.

I actually think those are pretty good, as far as these things go. Under confusion about the nature of AI and misaligned expectations, I would personally also include something about “empathy”: personification of these systems leads to expectations of human-like qualities, but they completely lack all human empathy – a very big one for me.

From there, I then had it draft a proposal for a standard prohibiting the use of personal pronouns by AI systems. It sounds crazy on the one hand, but given the nature of these systems and their rapid development, it feels altogether sane on the other.

I’m publishing that document, which I edited together from ChatGPT responses, to GitHub, with a call for commentary and improvement by the AI community and other allied weirdos.

Two new AI content standards proposals

Just wanted to track these two proposals together in one place for safe-keeping:

Defeating AI Hegemony

I tend to take an “everything” view of AI, as well as of technology in general: it brings with it some of the best, some of the absolute worst, and a vast squishy realm of in-between mediocrity. Consequently, I might consider myself an “AI native” while being simultaneously strongly opposed to it in other regards.

I’ve thought a lot about the “hard” AI take-over, and outlined it in several books, where nation-states collapse under polycrises, and AI mega-corporations take over to pick up the slack, even going so far as to promise humans utopia, with varying degrees of fidelity to that ideal (depending on your vantage point, and whether you think humans are sentient).

But what is far less spoken of is the “soft” AI hegemony under which we’ve already been living for quite some time, the automation of human attention and behavior. There’s an unbroken chain connecting the Fordism of Huxley’s Brave New World (written nearly a hundred years ago) through to the adversarial social control mechanisms that emerged out of social media (or were built in intentionally, depending).

Wikipedia’s definition of hegemony may be of service to this line of inquiry:

“…the political, economic, and military predominance of one state over other states. In Ancient Greece (8th c. BC – AD 6th c.), hegemony denoted the politico-military dominance of the hegemon city-state over other city-states. In the 19th century, hegemony denoted the “social or cultural predominance or ascendancy; predominance by one group within a society or milieu” and “a group or regime which exerts undue influence within a society”.

AI already does – and automation in general, even more broadly – exert hegemonic influence over much of human society. It may be chaotic, and driven by the dynamics of many diverse actors, but its outlines are detectable everywhere.

It makes me think of this line, from Sarah Packowski’s article about the many questions AI-assisted content brings up:

“If I use a Google search to find additional resources, I’ve used a variety of AI technologies all in one.”

Never mind social home feeds and recommendation systems built on machine learning (which is what most “AI” actually is): the AI hegemony is all around us already.

Even if we wanted to, is it something we could extract ourselves from at this point without economy and society having a hard crash?

The Wikipedia article about Gramsci’s concept of cultural hegemony seems even more fitting, and is a good beachhead for further reading:

“In Marxist philosophy, cultural hegemony is the dominance of a culturally diverse society by the ruling class who manipulate the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm. As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class.”

I’m not a Marxist, but there seems to be some applicable narrative logic here, in potentially one day – if not already – describing AIs as a tool of the “ruling class” (with the limited “user class” having little or no recourse). AIs may eventually become the ruling class themselves, and cut out the middleman. It’s not just conceivable that this will happen; we seem to be on a pretty solid trajectory to make it happen.

What can we do? I’m really not sure at this point. We can’t just go live in the woods, without it coming to us eventually. Ted Kaczynski proved that, and also amply proved Philip K. Dick’s dictum:

“To fight the Empire is to be infected by its derangement…. Whoever defeats the Empire becomes the Empire; it proliferates like a virus.”

I decided to ask some of the currently public AIs their thoughts, using the same prompt on multiple platforms: “how can the AI hegemony be defeated?”

You.com/chat:

“The AI hegemony can be defeated through a combination of strategies, such as developing counter-AI technologies, implementing strong regulations and policies, and educating the public on the dangers of unchecked AI. Additionally, collaboration between governments, industry, and civil society is essential to ensure that all stakeholders are on the same page when it comes to how AI should be regulated and deployed. Finally, AI ethics and safety must be taken into consideration when developing and deploying AI technologies, to ensure that the technology is not used in ways that could be harmful or exploitative.”

I asked it if that won’t be too little too late, and if regulation isn’t too slow to be meaningful. It replied:

You.com/chat:

“A regulatory approach may indeed be too little too late in certain cases. Laws tend to move slowly, and by the time they are implemented, the damage caused by unchecked AI may already be done and largely irreversible. However, this is not always the case, and in some situations, regulations can have a significant impact in limiting the damage caused by AI technologies. Additionally, laws and regulations can also serve as a deterrent, helping to discourage people and companies from engaging in harmful or unethical AI practices. It is important to keep in mind, however, that regulations alone are not enough to protect against the potential harms of AI; they must be combined with education, collaboration, and ethical guidelines to ensure that AI technologies are developed and deployed responsibly.”

Not quite a revolutionary spirit, but okay.

I asked komo.ai the same question and got:

“We are experiencing a high demand of requests to the server, please try again later.”

Very helpful, and par for the course for the spotty intermittent reliability of the current AI hegemonic regime.

I really like this extremely post-modernist, cobbled-together reply from perplexity.ai:

The AI hegemony can be defeated by reinvention of the international political economy[1] , as well as through competition and collaboration between countries in the race for AI supremacy[2] . Preemption is also justified against stateless terrorists wielding weapons of mass destruction, but cannot be the core of a general non-hegemonic strategy[3] . Additionally, confiscating AI cores can decrease a player’s reputation with the Hegemony in StarSector[4] .

(I would argue, however, that confiscating AI cores has always been a sound strategy, and will continue to be.)

ChatGPT seemed to specifically want to steer me away from the revolutionary edge which my prompt perhaps implied.

CHATGPT:

The concept of an “AI hegemony” refers to a hypothetical scenario in which artificial intelligence (AI) becomes so advanced and powerful that it dominates human society. Defeating an AI hegemony would likely require significant advances in AI safety research and the development of robust systems that can control and regulate AI behavior.

This could involve designing AI systems with ethical constraints and values that align with human interests, as well as creating methods for monitoring and controlling the actions of advanced AI systems. Additionally, it may be necessary to build AI systems that can learn and evolve along with human societies, ensuring that they remain aligned with human values over time.

It’s important to note that the possibility of an AI hegemony is still largely speculative and there is ongoing debate among experts about the likelihood and timeline of such a scenario. Nevertheless, many researchers and organizations are actively working to ensure that AI is developed and used in a responsible and ethical manner that benefits humanity.

Mmkay, so the only way we can defeat it is by… accepting it and making it “ethical”? I mean, that’s certainly one perspective (and it may prove to be the right one), but it doesn’t really answer the spirit of my question: how can it be *defeated*?

I told it I wasn’t talking about the hard AI takeover (and didn’t bother to bring up that by the time the scenario it describes above arrives, it will already be too late), but about the “insidious undue societal influence of automated systems.”

It didn’t seem to want to grasp that concept.

Instead, it steered the conversation towards the familiar notions of bias and discrimination. Those are real and meaningful things, but I’m trying to get at something deeper: the control of humans as though they were themselves little more than machines.

CHATGPT:

“Ah, I apologize for the misinterpretation. Yes, the undue influence of automated systems, also known as “algorithmic bias” or “automated decision-making,” can have significant and sometimes harmful effects on society.

This occurs when AI systems make decisions that perpetuate or amplify existing biases and discrimination, or when these systems are designed or used in a way that lacks transparency, accountability, and ethical consideration. To combat this, it’s important to increase awareness of the issue and prioritize the development of ethical and transparent AI practices.

This can involve incorporating diverse perspectives and experiences into the design and development of AI systems, as well as conducting regular evaluations and audits to ensure that these systems are functioning as intended and do not have unintended consequences. Additionally, there should be more regulations and guidelines in place to hold individuals and organizations accountable for the use and impact of AI systems.”

I told it that it was talking about building a more friendly AI hegemony, not defeating it.

CHATGPT:

“My apologies if I misunderstood the question again. To address the undue influence or societal impact of AI systems, one approach could be to work towards creating a more responsible and ethical AI that operates within well-defined parameters, rather than trying to defeat or eliminate the technology altogether.

This could involve developing AI systems that are designed with human values in mind, and ensuring that their decision-making processes are transparent, explainable, and subject to human oversight and control. Additionally, it may involve creating safeguards to prevent AI from perpetuating existing biases or exacerbating existing inequalities.

Ultimately, the goal should be to harness the potential benefits of AI while minimizing its negative impacts on society. This will likely require close collaboration between experts in AI, ethics, and policy, as well as ongoing dialogue with communities and stakeholders who are likely to be impacted by these technologies.”

This is all stuff that’s true, and it’s the “conventional” approach to trying to minimize harmful impacts. As a commercial tool, it is not going to recommend the Butlerian Jihad. But as both a science fiction writer and a (sometimes) crackpot, I want to at least be able to take the conversation further… but this is as far as I could get:

CHATGPT:

“Defeating AI altogether is a complex and potentially unrealistic goal, as AI has already become deeply integrated into many aspects of modern society and has the potential to bring significant benefits in areas such as healthcare, transportation, and finance.

However, it is possible to mitigate the negative effects of AI and ensure that it is developed and used in a responsible and ethical manner. This can involve advocating for strong regulations and guidelines that hold individuals and organizations accountable for the impact of their AI systems, as well as promoting transparency and accountability in the development and deployment of AI.

Additionally, it may be necessary to develop new technologies and approaches for controlling and regulating AI behavior, such as creating “kill switches” or other methods for quickly shutting down AI systems in the event of unintended consequences or other problems.

Ultimately, the challenge of AI is not about defeating it, but about finding ways to manage its risks and maximize its benefits for humanity. This will require careful consideration, collaboration, and ongoing vigilance from a variety of stakeholders, including researchers, policymakers, and the public.”

This reply seems like *exactly the kind of thing* the AI hegemony would want you to think: that the current order is natural, fixed, and good, and that to even challenge it, or think of challenging it is a foolish waste of time and energy.

I rebel against that, even if it is my own demon that I am projecting into the void opened up by these technologies.

I’ll close on a quote from Charles Stross’s amazing “Dude, you broke the future” speech, where he posits that corporations already are – and have been for hundreds of years – “slow motion AIs.”

“We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.”

AI needs to make room for non-STEM positions

Recently while looking around for AI companies in Canada, and in Quebec especially, I discovered that Microsoft has a research office in Montreal, with a group called FATE, which stands for: Fairness, Accountability, Transparency, and Ethics. They list their current focal points for research as:

  • Responsible natural language generation (NLG) and issue discovery frameworks
  • Objectionable behaviors, toxicity, and addiction in sociotechnical systems
  • Harms in information systems and other multi-stakeholder environments

It’s an interesting set of questions, but there’s one part that jumps out at me, when you look at the career options. They offer a 12 week internship, with these required qualifications:

“Must be currently enrolled in a relevant PhD program or JD (Juris Doctorate) program (areas of interest include machine learning, human-computer interaction, computational social science, information science, information retrieval, natural language processing, science and technology studies, or other related fields).”

These strike me as strange qualifications for a 12-week internship, in the first place. And in the second place, they strike me as strange qualifications for a research lab committed to “fairness.” After all, it’s not just anybody who has the opportunity to get a PhD or a JD…

I don’t mean to pick on Microsoft or this lab especially (perhaps they do good work!), because this kind of problem is actually epidemic in certain tech circles. And I don’t just mean requiring PhDs for research positions – I mean that nearly every company out there in AI (and otherwise) has a huge number of openings for people with STEM backgrounds, and very little else.

I’ve mentioned elsewhere the need to bring other kinds of people with other types of backgrounds into developing AI, particularly creative types and artists. But over and above the fine arts, there is a whole host of specialties in the humanities and the social sciences which would bring some much-needed balance to a field that is currently so radically math- and engineering-heavy as to be lopsided.

To extend some of Ellul’s thinking from the previous post, we might say that we need people whose professions and occupations aren’t wholly devoted to the altar of efficiency the way technologists’ by and large are. We need people who specialize in human impacts. And by that I don’t just mean ethicists (though they have an important role to play, to be sure) – I mean simply humans.

How can we develop truly fair AI systems if only a tiny subset of a certain type of person with a certain type of mentality, training, education, and professional background are allowed to play in the ball pit?

Another part of me – in fact, the carpenter part of me – rebels a little at this line of inquiry. By way of analogy: if we’re building a house, why would we let people who haven’t trained as carpenters do the framing? We probably wouldn’t. But in actual fact, building a complex structure like a house takes a great many different types of more and less skilled laborers working in harmony toward the same goal, each playing their part. So maybe, thinking it through more carefully, the carpenter part of me’s objections end up evaporating.

I guess all this is to say two things. One, companies need to do better: they need to figure out how to integrate more diverse types of thinking into developing AI technologies (in addition to the more conventional types of diversity we think of with regard to ethnicity, gender, etc.). And two, if you’re someone with a non-STEM degree (or no degree at all) and you want to participate in the development of AI in a way that is genuinely fair, you are probably going to have to agitate for a seat at the table. Because right now, companies seem to be taking little notice of “the rest of us,” except through the lens of our becoming end users and paying customers.

So how, as a non-STEM person, do you get a seat at the table? That’s the core question which led me here in the first place. And though I don’t yet know the answer, my hunch is that we have to sit down and figure out concretely what exactly we can offer – individually and collectively – to this great edifice which is rising up suddenly in front of us, and which is poised to change everything. I will keep working on these questions!

Confirm Humanity

It’s weird, and I guess not surprising, that we require people to “confirm their humanity” by engaging in a totally roboticized action via CAPTCHA.

ML Model Cards & AI Attribution

In my survey of the field, there are surprisingly few resources discussing the concept of AI attribution. One of the only clear discussions I’ve found of it specifically comes from an article by Tom Barrett.

It covers mainly a somewhat common trope around the idea of information coming with “nutritional facts” labeling, which has been circulating for years in fact-checking and credibility-indicator circles. The new element it connects this to, however, is machine learning model cards, a concept which appears to have been created and largely colonized by Google, though they claim it should be a shared standard. (Other sites like Hugging Face and Kaggle do use them.)

Barrett proposes making a miniaturized version of model cards, which could presumably somehow travel along with content on the web that was generated by a given model:

In mini-model cards, we can bring together the attribution — hat tips to the people and businesses who created the technology — and the provenance of the technology — the story behind the tech, links to ethics frameworks and technical specs.
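
Concretely, I picture the mini card as little more than a small structured blob riding along with the content. Here is a guess at what the fields might be – all of the names below are my own invention, not Barrett’s proposal or Google’s spec:

    # A guessed-at "mini model card" payload; all field names are illustrative.
    mini_model_card = {
        "model": "example-text-model",
        "version": "1.3",
        "creators": ["Example Lab"],                    # the attribution / hat-tip part
        "full_card": "https://example.org/model-card",  # provenance: link to the full card
        "ethics_framework": "https://example.org/ethics",
        "ai_assisted_fraction": 0.15,                   # share of the artifact the model generated
        "content_sha256": "<hash of the artifact this card travels with>",
    }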

It’s a good idea in its most basic form, but as with C2PA, a great deal depends on how industry implements “travels along with” for metadata that is meant to be attached to a given artifact on the web. With the amount of copy-pasting that goes on in social media especially, how can we ever hope to have this metadata accurately travel between platforms?

In C2PA they seem to partly address that with unique hashes made from the original artifact’s actual contents (as well as from associated claims about it), but I have not drilled down yet into how they propose any of this plays out in the chaotic space of web platforms. And I haven’t yet seen anybody mention how you manage or pay for all the hash scanning which would presumably become a key component of any scheme like this (that is, looking up items which lack provenance information, to see whether a hash already exists elsewhere).
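
The hash half of that is at least cheap to sketch. In the toy version below, the registry is a stand-in dictionary where a real system would need a shared – and somehow funded – lookup service, which is exactly the unanswered cost question:

    import hashlib

    # Stand-in for a shared provenance registry; a real one would be a
    # networked service somebody has to run and pay for.
    registry = {}

    def fingerprint(artifact: bytes) -> str:
        """Content-derived ID, so it survives copy-pasting of identical bytes."""
        return hashlib.sha256(artifact).hexdigest()

    def register(artifact: bytes, claims: dict) -> None:
        registry[fingerprint(artifact)] = claims

    def lookup(artifact: bytes):
        """For an item with no attached metadata, check whether its hash is known."""
        return registry.get(fingerprint(artifact))

    register(b"some AI-assisted paragraph",
             {"model": "example-text-model", "ai_assisted_fraction": 0.15})
    print(lookup(b"some AI-assisted paragraph"))   # claims found
    print(lookup(b"some AI-assisted paragraph."))  # None: any edit changes the hash

That last line is the rub: a single added character produces a different hash, which is why exact content hashes alone can’t follow an artifact through the mutations of copy-paste culture.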

Anyway, putting that part aside for now… Barrett, in his piece, comes to the same conclusion I do: this may all be difficult to pull off. But that doesn’t make it not worth doing, because authors need specific methods for transmitting trustworthy signals about the content they produce.

As I mentioned elsewhere, there’s also the end-user problem with all of this effort at standardizing and revealing metadata: doomscrollers on their toilets may not care all that much, or think deeply about the implications of something being a 12% versus a 15% AI-assisted piece of content, or about the differing ethical postures of the underlying models.

What are we asking users to do if we give them this information? Be reasonable? Make good choices about what they consume? Gain greater media literacy? Those are all noble goals, but I wonder if they aren’t A) going to be too little too late, and B) a bit paternalistic, even if well intentioned. Whose definition of “reasonable” should people apply? How are we defining “good choices”?

Again, these are not reasons this work shouldn’t be done (it should be, but with honesty about the bigger problems). Instead, I want to highlight that these are not “pure” technical problems; they are sociotechnical problems. They are inseparable from human impact, human nature, and the way these collide with the technological mega-complex we live in now. We can’t just say “here’s some metadata” and fool ourselves into thinking that the ‘fix’ ends there, and that if users don’t adopt it in a meaningful way, it is somehow their fault…
