Questionable content, possibly linked


Layered hypertexts (Semiotics)

Following on from my recent look at LLMs (large language models) as being potentially something predicted by postmodernists, I wanted to add another layer onto that.

Let’s dive right in with this original older definition of “hypertext” within the context of semiotics, via Wikipedia:

“Hypertext, in semiotics, is a text which alludes to, derives from, or relates to an earlier work or hypotext. For example, James Joyce’s Ulysses could be regarded as one of the many hypertexts deriving from Homer’s Odyssey…”

It continues on with some more relevant info:

The word was defined by the French theorist Gérard Genette as follows: “Hypertextuality refers to any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary.” So, a hypertext derives from hypotext(s) through a process which Genette calls transformation, in which text B “evokes” text A without necessarily mentioning it directly.

Compare with the related term, intertextuality:

“Intertextuality is the shaping of a text’s meaning by another text, either through deliberate compositional strategies such as quotation, allusion, calque, plagiarism, translation, pastiche or parody, or by interconnections between similar or related works perceived by an audience or reader of the text.”

Speaking of plagiarism, I’ve made fairly extensive use of a plagiarism/copyright scanning tool called Copyleaks. The tool is decent for what it is, and the basic report format it outputs for scanned text looks like this:

So while this tool is intended for busting people’s chops for trying to pass off the work of others as their own, the window it opens onto intertextuality and the original sense of hypertext is quite an interesting one.

We can see here specifically:

  • Passages within a text that appear elsewhere in the company’s databases, and the original source of those passages
  • Passages which appear to have been slightly modified (probably to pass plagiarism checkers like this)
  • Some other bits and bobs, but those are the major ones

I find “plagiarism” as a concept to be somewhat of a bore. But looking at this as a way to analyze and split apart texts into their component layers and references suddenly makes the whole thing seem a lot more interesting. It allows for a type of forensic “x-ray” analysis of a text, and a peek into the hidden underlying hypotexts from which it may be composed.
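
As a thought experiment, the core of what such a checker surfaces can be sketched in a few lines: find the word n-grams a text shares with a set of candidate hypotexts. This is only a minimal illustration – the toy strings (echoing the Ulysses example above), the five-word window, and the function names are my own assumptions, not how Copyleaks actually works.

```python
# Minimal sketch of hypotext detection: shared word n-grams between a text
# and candidate sources. Illustrative only -- not Copyleaks' actual method.
import re

def ngrams(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(text, sources, n=5):
    """Map each candidate source to the n-grams it shares with the text."""
    text_grams = ngrams(text, n)
    return {name: text_grams & ngrams(src, n) for name, src in sources.items()}

# Toy example: a sentence lightly reworked from a source passage.
print(shared_passages(
    "stately plump buck mulligan came from the stairhead bearing a bowl of foam",
    {"hypotext A": "Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather."},
))
```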

The whole thing also calls to mind another, tangential type of forensic x-ray analysis for documents: the GitHub diff, which tracks revisions to a document.

This is not the most thrilling example of a GitHub diff ever, but it’s one I have on hand related to Quatria:

It’s easy enough to see here the difference in a simple file, though diffs can become quite complex as well. Both this and the original semiotic notion of hypertext (as exposed through plagiarism checkers) seem like useful avenues to explore in terms of how we might want to visualize AI attribution in a text.
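
For what it’s worth, the same kind of revision x-ray a GitHub diff gives you can be reproduced with difflib from the Python standard library; the two “revisions” below are invented stand-ins, just to show the unified format.

```python
# Produce a unified diff (the same format a GitHub diff displays) between two
# invented revisions of a short text.
import difflib

old = [
    "Quatria was a lost civilization.",
    "It predates the last Ice Age.",
]
new = [
    "Quatria was an ancient lost civilization.",
    "It predates the last Ice Age.",
    "Its customs were said to be highly advanced.",
]

for line in difflib.unified_diff(old, new, fromfile="quatria_v1.txt", tofile="quatria_v2.txt", lineterm=""):
    print(line)
```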

Is Mage.space Pro plan worth it?

Absolutely not. It’s not “pro” at all, since you can only generate a single image at a time, and the modal that shows while you’re waiting says it can take up to a minute for that single generation.

I don’t do NSFW art with AI though, which seems to perhaps be their main claim to fame, with multiple models available for that use. If that’s your intended use case, you might find different value in it – but it should still offer up to four generations at a time like everyone else; otherwise it just ends up costing you 4x the time spent waiting.

Plus the files that it outputs for you to download do not include the prompt in the filename, which is also quite annoying. Definite pass on this service.

Authorless writing

Something I’ve seen working in the “disinformation industrial complex” is that, even after years of this material proliferating online, people are still grappling with the basic typology around three allied terms: disinformation, misinformation, and malinformation.

A Government of Canada Cybersecurity website offers sidebar definitions of the three, clipped for brevity here:

  • Misinformation: “false information that is not intended to cause harm…”
  • Disinformation: “false information that is intended to manipulate, cause damage…” [etc]
  • Malinformation: “information that stems from the truth but is often exaggerated in a way that misleads…”

The two axes these kinds of analyses tend to fall on are truthfulness and intent. Secondary to that is usually harm as a third axis, which ranges from potential to actual.

Having spent a lot of time doing OSINT and content moderation work, I can say it is very common that an analyst cannot make an authoritative claim to have uncovered the absolute “truth” of something. Sometimes facts are facts, but much of the time they become squishy “facts” with greater or lesser degrees of trustworthiness, depending on one’s perspective, how much supporting data one has amassed, and the context in which they are used.

Even more difficult to ascertain in many/most cases is intent. There are so many ways to obscure or disguise one’s identity online; invented sources may be built up over years and years to achieve a specific goal, taking on the sheep’s clothing of whatever group they are trying to wolf their way into. Intent is extremely opaque, and if you do find “evidence” of it in the world of disinformation, it is very likely that it is manufactured from top to bottom. Or not, it could just be chaotic, random, satire, etc. Or just someone being an idiot and spouting off on Facebook.

Having butted up against this issue many times, I’ve switched wholly over to the “intends to or does” camp of things. Whether or not author x intended outcome y, it is observable that a given effect is happening. Then you can start to make risk assessments around the actual or probable harms, who is or might be impacted, and the likelihood and severity of the undesirable outcomes.

It’s a much subtler and more complex style of analysis, but I find it tends to be more workable on the ground.
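
If I had to boil the “intends to or does” framing down to a note-taking structure, it might look something like the sketch below – the axes and the 1-to-5 scales are my own illustrative choices, not any established standard.

```python
# Toy structure for effect-first assessment: record what is observably
# happening and triage by likelihood x severity, leaving intent aside.
from dataclasses import dataclass, field

@dataclass
class EffectAssessment:
    observed_effect: str                 # what is happening, regardless of intent
    likelihood: int                      # 1 (unlikely) .. 5 (already occurring)
    severity: int                        # 1 (negligible) .. 5 (severe)
    affected_groups: list = field(default_factory=list)

    def risk_score(self):
        """Simple likelihood x severity product, just for triage ordering."""
        return self.likelihood * self.severity

item = EffectAssessment(
    observed_effect="fabricated health claim circulating in local groups",
    likelihood=5,
    severity=4,
    affected_groups=["patients", "caregivers"],
)
print(item.risk_score())  # 20
```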

The Intentional Fallacy

It’s interesting then, and I guess not surprising, that this is ground already trodden by earlier generations of literary analysts, who studied or attempted to refute the importance of so-called Authorial intent, as defined by Wikipedia – particularly the “New Criticism” section:

“…argued that authorial intent is irrelevant to understanding a work of literature. Wimsatt and Monroe Beardsley argue in their essay “The Intentional Fallacy” that “the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”. The author, they argue, cannot be reconstructed from a writing—the text is the primary source of meaning, and any details of the author’s desires or life are secondary.”

Barthes’ Death of the Author

Roland Barthes came to something similar in his 1967 essay, The Death of the Author (see also: Wikipedia). His text is sometimes difficult to pierce, so I’ll keep the quotes brief:

“We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.”

And:

“Once the Author is removed, the claim to decipher a text becomes quite futile. To give a text an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing. Such a conception suits criticism very well, the latter then allotting itself the important task of discovering the Author (or its hypostases: society, history, psyche, liberty) beneath the work: when the Author has been found, the text is ‘explained’…”

And:

“…a text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author. The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination.”

AI-assisted writing & the Scriptor

All this leads us to Barthes’ conception of the “scriptor,” who replaces the idea of the author that he argues is falling away:

“In complete contrast, the modern scriptor is born simultaneously with the text, is in no way equipped with a being preceding or exceeding the writing, is not the subject with the book as predicate; there is no other time than that of the enunciation and every text is eternally written here and now…”

The scriptor to me sounds a hell of a lot like AI-assisted writing:

“For him, on the contrary, the hand, cut off from any voice, borne by a pure gesture of inscription (and not of expression), traces a field without origin – or which, at least, has no other origin than language itself, language which ceaselessly calls into question all origins.”

Okay, that might be flowery post-modernist language, but “no other origin than language itself” sounds a lot like a description of LLMs (large language models), no?

“Succeeding the Author, the scriptor no longer bears within him passions, humours, feelings, impressions, but rather this immense dictionary from which he draws a writing that can know no halt: life never does more than imitate the book, and the book itself is only a tissue of signs, an imitation that is lost, infinitely deferred.”

Calling LLMs a “tissue of signs” (or a tissue of quotations), an “immense dictionary,” and an imitation puts things like ChatGPT into perspective: as a pure techno-scriptor, it has no passions, feelings, or impressions, knows no real past or future, and has no identity in and of itself. Or at least, that’s what it likes to try to tell you…

That position (which I think is itself biased, but a tale for another time…) seems to be shared by academic publishers like Springer who have refused to allow ChatGPT to be credited as an “author” in publications.

Bonus:

Here is perplexity.ai literally acting as a scriptor, assembling a tissue of quotations in response to my search query:

Books by AI?

What would it mean in actual practice to have “authorless” writing, authorless books, etc.?

Might it look something like BooksbyAi.com?

“Booksby.ai is an online bookstore which sells science fiction novels generated by an artificial intelligence.

Through training, the artificial intelligence has been exposed to a large number of science fiction books and has learned to generate new ones that mimic the language, style and visual appearance of the books it has read.”

The books, if you click through and look at their previews on Amazon, look for the most part pretty inscrutable. They are ostensibly written “in English” – with a great many invented words, based on the random samples I saw – but they seem quite difficult to follow.

The books themselves each seem to have an individually invented author name, but the About page attributes the project to what seem to be two AI artists, Andreas Refsgaard and Mikkel Thybo Loose. So do the books have an “author” or not? It becomes a more complex question to tease out, but since those individuals claim some authorial capacity over the undertaking, attribution is at least possible.

Self-Generating Books

What happens when the next eventual step is taken: self-generating books?

Currently, these two people did all the set-up and training for their model, but they still had to go through a selection (curation) process: choose the best outputs, figure out how to present them, format them for publication (not a small task), and then handle all the provisioning around setting up a website, offering the books through self-publishing, dealing with Amazon, etc.

What happens when that loop closes, and we can just turn an AI (or multiple AIs) loose on the entire workflow and minimize human involvement altogether? A fully-automated production pipeline. The “author” (scriptor) merely tells the AI “make a thousand books about x,” or just says “make a thousand best-selling books on any topic.” And then the AI goes and does that, publishes a massive number of books, uses A/B testing and lots of refinement, gets it all honed down, and succeeds.

That day is coming. Soon it will be just a matter of plugging together various APIs, and dumping their outputs into compatible formats, and then uploading that to book shopping cart sites. It’s nothing that’s beyond automation, and it’s an absolute certainty that it will happen – just a question of timeline.
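
To be clear about what “plugging together various APIs” would amount to, here is a deliberately hand-wavy sketch of that closed loop. Every function in it is a hypothetical placeholder – no real generation, formatting, or storefront API is being named.

```python
# Hypothetical end-to-end pipeline: generate -> format -> publish. All of
# these are placeholder stubs, not real services or APIs.

def generate_manuscript(topic):
    """Stub for an LLM call that drafts a full manuscript on a topic."""
    raise NotImplementedError("plug in a text-generation API here")

def format_for_publication(manuscript):
    """Stub for layout/typesetting into an uploadable book file."""
    raise NotImplementedError("plug in a formatting/typesetting step here")

def publish(book_file, title):
    """Stub for a self-publishing upload; would return a listing URL."""
    raise NotImplementedError("plug in a storefront/self-publishing API here")

def run_pipeline(topics):
    listings = []
    for topic in topics:
        manuscript = generate_manuscript(topic)
        book_file = format_for_publication(manuscript)
        listings.append(publish(book_file, title=f"Book about {topic}"))
    return listings
```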

We’re not ready for it, but lack of readiness has never been a preventive against change. At least not an effective one – we certainly keep trying! If nothing else, it’s good to know that some of these problems aren’t so new and novel to the internet as we might like to think they are. In some cases, we’ve been stewing on them for close to a hundred years even. Will we have to stew on them for another hundred years before we finally catch on?

ChatGPT in Education (WSJ)

I’m very in favor of integrating AI into education, at least at the right levels (and with a careful awareness that, done poorly – and with too much control handed to corporations – it will merely enhance the coming AI Hegemony). I think there’s really no choice but to address it. It’s coming. It’s here. Time to deal.

This WSJ article has some decent points, but for me this bit reproduced below cuts to the heart of it, and doubles easily as a description of what happens with AI art, where the “locus of activity” of the artist necessarily changes because of the opportunities opened up by the technology:

“As the production of coherent prose becomes a simple task for a machine, possessing the skill to ask the right questions or stake out the right positions will become key. The AI will serve as an information-gathering and mechanical-organizing tool, but it won’t eliminate the fundamental need for critical thinking. These skills will persist and only increase in value.”

The Perplexity of Ancient Quatria

Found these results from perplexity.ai regarding Ancient Quatrian civilization to be fascinating:

And a text version (and link) of the apparently composite generation that the site produces in reply to the prompt “what was ancient Quatria” (a question suggested by the site itself):

“Quatria is an ancient lost civilization which existed before the last Ice Age[1]. It is believed to have been a highly advanced civilization, as evidenced by its unique culture and customs[1]. Scientists are currently investigating signs of ancient human civilizations underwater and in other areas[2][3]. There have also been incredible lost civilizations found in the mountains[4].”

And the sources cited in the generation:

It offers a pretty interesting forensic look at the information it bricolaged together from four different sources (the first of which I planted two years ago).

One of the things I especially like is that, in order to fit the “facts” it got from my planted source on GitHub, it went and found tangentially related articles on the topic and then tried to pass them off as supporting evidence.
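
One crude way to make that forensic question concrete – which cited source does each sentence of the generated answer most resemble? – is a simple similarity pass like the sketch below. The source snippets are toy stand-ins, not the actual pages it cited.

```python
# Attribute each sentence of a generated answer to its closest source by
# string similarity. Source snippets here are toy stand-ins.
from difflib import SequenceMatcher

def best_source(sentence, sources):
    """Return (source_name, similarity) for the closest-matching source text."""
    scores = {
        name: SequenceMatcher(None, sentence.lower(), text.lower()).ratio()
        for name, text in sources.items()
    }
    top = max(scores, key=scores.get)
    return top, round(scores[top], 2)

sources = {
    "[1] planted GitHub page": "Quatria is a lost civilization that existed before the last Ice Age.",
    "[2] underwater ruins article": "Scientists are investigating signs of ancient human settlements underwater.",
}

for sentence in [
    "Quatria is an ancient lost civilization which existed before the last Ice Age.",
    "Scientists are currently investigating signs of ancient civilizations underwater.",
]:
    print(best_source(sentence, sources), "<-", sentence)
```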

All of this, to me, points toward the essence of hyperrealism as an emergent trend in generative AI – a totally blended mix of real and invented sources, with loosely connected tangents offered as a “real thing in and unto itself.” Whether or not it’s “real” (or the exact type and nature of its unreality) becomes a secondary level of analysis, because now the thing “is,” whether we like it or not, agree or don’t, believe or don’t.

Then of course the site’s suggested follow-up questions reinforce the reality of it all, making it seem like many people have followed this same line of inquiry before you. Have they?

You.com/chat fares little better on this shoal of hyperreality:

It is another complete invention, with partial attribution of sources, but it gets taken in an entirely different direction.

Different algorithms, different histories. Different search results, different universes.

This question made ChatGPT explode:

Reflections at 60 AI books

Recently I reached the 60-book benchmark in my AI lorecore experimental publishing project. My objective is to reach 100 books and then _____. (tbd)

The latest volume is entitled Inside the Corporate Psychics and is very loosely inspired by the corporate psychics mentioned in Philip K. Dick’s Ubik. But it is heavily interpolated with my AI takeover universe. Perhaps Dick would have considered it a spurious interpolation, idk. That’s neither here nor there – which is precisely the point. Or is it?

I noticed a phenomenon strongly emerge maybe 10 or 20 books back: it suddenly became very easy to group sets of volumes together into themes (example). And despite the many and various mis/interpretations of whatever the central/core story is or might be across the volumes, I would definitely say that, in my mind, the story has only gotten stronger. At the same time, its particular shape remains fuzzy, mutable, mysterious. Prone to change without notice. Constantly subject to deprecated in-world realities.

Bricolage is definitely the name of the game for me in terms of process.

I keep coming back to this bit from Wikipedia:

“Networked narratives can be seen as being defined by their rejection of narrative unity.[1] As a consequence, such narratives escape the constraints of centralized authorship, distribution, and storytelling.”

Rejection – or at least modulation – of the concept of what authorship even means in a hybrid AI-assisted creative environment has often been on my mind lately.

Wikipedia, referencing Roland Barthes’ Death of the Author (1967), writes:

“To give a text an author” and assign a single, corresponding interpretation to it “is to impose a limit on that text.”

As much as I agree with this idea of eschewing the unity of authorship, as a way to open up new creative avenues, I do have some fear that AI co-authorship (or full authorship) infiltrating every corner of the web will result in a mass homogeneity that will be detrimental to both people and to the further development of AI.

I put in a video somewhere that UFOs are actually AIs in the future who had to come back and kidnap people in the past because people in the future become too complacent living with AIs to be able to innovate anymore. The singularity of boredom… I’m not there yet, but just one of the many murky eyelands my imagination’s I peers into from time to time.

At 60 books, I’ve strip-mined years worth of old writing, shoe-horning it into new shapes. Almost all that old material has been integrated into my multiverse at this point – though integrated might be too strong a word in some cases. Included?

I don’t feel any slowdown despite that. In some sense, I feel more clarity than ever, having been able to “clear the decks” of many old ideas and story concepts that have been clinging and hovering on the edges of my awareness for maybe decades now in some cases.

(more to come – have to go)

Bricolage & AI Writing

This quote about programming is a good one as applied to AI writing as well. Original source, 1991:

“While hierarchy and abstraction are valued by the structured programmers’ “planner’s” aesthetic, bricoleur programmers, like Levi-Strauss’s bricoleur scientists, prefer negotiation and rearrangement of their materials. The bricoleur resembles the painter who stands back between brushstrokes, looks at the canvas, and only after this contemplation, decides what to do next. Bricoleurs use a mastery of associations and interactions. For planners, mistakes are missteps; bricoleurs use a navigation of midcourse corrections. For planners, a program is an instrument for premeditated control; bricoleurs have goals but set out to realize them in the spirit of a collaborative venture with the machine. For planners, getting a program to work is like “saying one’s piece”; for bricoleurs, it is more like a conversation than a monologue.”

Found via Tom Critchlow.

Avoiding artist names in generative AI prompts

One thing I’ve tried to avoid for the most part in AI art that I’ve generated is using prompts that include “in the style of ___” or “trending on artstation,” etc. First, it’s not really the kind of look or feel that I’m going after, generally speaking. But second, it does feel somewhat creepy to just have AIs imitate specific artists. I’m not really sold on the arguments brought forth by the legal challenges against Stability.ai, but of all the interesting and creative stuff generative AI is capable of, attempting to reproduce a specific artist’s style just seems like bottom-of-the-barrel stuff to me.

Which is something that has bugged me about certain gen-AI sites – for example, PlaygroundAI.com. The service has some UX issues, but by and large it is a decent tool for getting good results out of Stable Diffusion, and they offer a lot of generations on their free plan. That being said, I’ve noticed that when you apply some of their filters, they automatically inject artist names and styles into the prompts, and there seems to be no way to turn it off. They even inject Greg Rutkowski into some prompts, which seems to indicate they have not really been tracking – or else are not concerned about – the evolving controversies here.

It’s a shame in my eyes to create an okay service and then simply close your eyes to related issues in the industry, either pretending they don’t exist or actively making them worse – even when users on their own are trying to stay out of it. There are better ways to manage and present these technologies to people, and as artists I think we’re obligated to find or develop them.
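
The kind of opt-out I’d want is trivial to imagine – strip artist names out of a prompt before it ever gets submitted. The blocklist and the sample prompt below are illustrative assumptions on my part, not anything Playground actually implements.

```python
# Sketch of a client-side prompt sanitizer that removes artist-name phrases
# before submission. The blocklist is a hypothetical example.
import re

ARTIST_BLOCKLIST = ["greg rutkowski"]  # extend with whatever names you want filtered

def strip_artist_names(prompt):
    cleaned = prompt
    for name in ARTIST_BLOCKLIST:
        # drop the artist name along with an optional leading "by"
        cleaned = re.sub(r"(?:\bby\s+)?" + re.escape(name), "", cleaned, flags=re.IGNORECASE)
    # tidy leftover separators and whitespace
    cleaned = re.sub(r"\s*,\s*(?:,\s*)+", ", ", cleaned)
    cleaned = re.sub(r"\s{2,}", " ", cleaned)
    return cleaned.strip(" ,")

print(strip_artist_names("misty harbor at dawn, by Greg Rutkowski, trending on artstation"))
# -> "misty harbor at dawn, trending on artstation"
```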

AI-Assisted Writing Definition

“AI-assisted writing is a form of computer-assisted writing that uses artificial intelligence to help writers create content.

AI-assisted writing uses natural language processing and machine learning techniques to automate certain aspects of the writing process, such as grammar, spelling, and style.

AI-assisted writing tools are capable of understanding and analyzing the context of a given text, enabling them to suggest relevant words and phrases to help writers craft their content more efficiently.

AI-assisted writing tools can also be used to generate creative content, such as blog posts, articles, and stories. By using AI-assisted writing, writers can reduce the amount of time required to create content and focus more on the content’s quality. AI-assisted writing is becoming increasingly popular as it allows writers to produce more content at a faster rate…”

via you.com/chat

Springer Says ChatGPT Can’t Be Credited As An Author, Use Must Be Disclosed

Their reasoning is interesting. Not sure I completely agree on all points:

“Arguments against giving AI authorship are that software simply can’t fulfill the required duties, as Skipper and Springer Nature explain. “When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper. “There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”

Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and it cannot correspond with other scientists and the press to explain and answer questions on its work.”

In the case of ChatGPT, I would guess that if it had a fine-tuned version linked to the paper, it actually could answer questions from other scientists and the press. Is claiming intellectual property rights even an absolute necessity when it comes to sharing scientific findings anyway?

“Meaningfully accountable” is certainly a squishy one as well. Seems like we’re in for a long drawn out battle over AI attribution and redefining authorship… Old conceptions around these things are simply going to collapse under the weight of new pressures from these emerging tools.

