Within the multiverse of my pulp sci-fi AI lore books, Das Machina is a cornerstone piece of mythology that forms the basis of the human group known as the Living Resistance, first featured in the pages of the limited edition hand-printed newspaper, The Algorithm.
The Living Resistance, obviously, opposes the AI Takeover. Various other books expand on that group’s lore as well, such as Inside the Council, which details how the ruling AIs attempted at one point to incorporate resistance leaders into an AI-Human governance group.
Das Machina, meanwhile, is a loose parody of Marx’s Das Kapital, insofar as it’s meant to be a pivotal treatise whose scientific analysis of the suite of problems engendered by technology marks a historic milestone in my imagined future/past/parallel narrative reality.
This version of Das Machina is presented as a shorter version of the “real” book that is over one million words in length, most of which was violently censored by a group called Information Control (the propaganda wing of the AI hegemon).
Continuing my series of mini-rants about the lack of non-STEM specialization in AI safety, I found this article on LessWrong about pathways into the field.
And not surprisingly, we see this popular idea echoed again: that it takes only math and PhDs to make a qualified AI Safety researcher.
Perhaps unsurprisingly, the researchers we talked to universally studied at least one STEM field in college, most commonly computer science or mathematics….
It is sometimes joked that the qualification needed for doing AI safety work is dropping out of a PhD program, which three people here have done (not that we would exactly recommend doing this!). Aside from those three, almost everyone else is doing or has completed a PhD. These PhD programs were often but not universally, in machine learning, or else they were in related fields like computer science or cognitive science.
Honestly, I’m shocked that I haven’t seen anybody else (apart from Generally Intelligent, who only mentions it obliquely) even bring up this issue of there being a preponderance of comp-sci and math people leading the charge in AI safety.
It’s not that I think those viewpoints are unnecessary – it’s that I think they are inadequate on their own to give a fully-rounded perspective on anything so profoundly impactful on the lives of actual humans.
The LessWrong article references the word “ethics” only once, and the only other mention of related concepts in the article is also very vague:
…having an idea of what we mean by “good” on a societal level would be helpful for technical researchers
Yes, that “would be helpful” for people whose entire function is exactly that?
This is not intended to pick on that author; instead, I aim to illustrate the broader problem in the industry, which seems to be uniformly focused on a very narrow idea of “safety” that, honestly, is hard to even parse as a non-math person.
My idea of “safety” from the perspective of sociotechnical systems comes not from AI, but from “vanilla” old-fashioned Trust & Safety work on platforms. The DTSP recently released a glossary of terms in that field, and it seems relevant to copy paste their definition of the broad term “Trust & Safety” itself to establish a baseline:
Trust & Safety
The field and practices employed by digital services to manage content- and conduct-related risks to users and others, mitigate online or other forms of technology-facilitated abuse, advocate for user rights, and protect brand safety. In practice, Trust & Safety work is typically composed of a variety of cross-disciplinary elements including defining policies, content moderation, rules enforcement and appeals, incident investigations, law enforcement responses, community management, and product support.
I don’t want to beat a dead horse, but it’s worth pointing out that “math,” “PhD,” and the like appear nowhere in that definition.
In fact, those are all incomparably squishy human things. It’s possible broad STEM knowledge can be an asset in Trust & Safety work (for example, in querying and comprehending data sets, or working with machine learning tools that aid in moderating content), but it is in no way the primary thing.
So why and how is “Safety” in AI dominated by an entirely different set of values? Frankly, I don’t get it. To me, it seems likely due to a lack of experience working on platforms on the part of people involved in AI Safety, and a consequent lack of familiarity with the fact that, hey, Trust & Safety is already a pretty well-defined thing with deep roots that we could meaningfully draw from.
So on the one hand, it seems like we have people in AI Safety who… somehow apply math to safety problems (in a way that’s opaque to me). And on the other hand, we have conventional Trust & Safety people who generally do something very specific and easy to identify:
They read and answer emails (i.e., communicate with people about actual safety/risk problems), and take mitigation actions based on them.
Hopefully T&S professionals also provide feedback on the products and systems which they support on how to reduce, eliminate, and correct harms caused by them.
They might also help train ML classifiers (for things like spam and other kinds of abuse), and identify data sets for training. So, in fact, they often work daily with (quasi) AI systems, but as far as I’ve seen, they are almost never identified as “AI Safety” professionals.
Anyway, I don’t have any specific conclusion to draw here, apart from, I guess, the fact that AI Safety would do well to immerse itself in the parallel, broader field of Trust & Safety, which is focused not on math but on sociology, the humanities, ethics, and communication. Ignoring that extremely important slice of the pie makes the current models around “AI Safety” – in my opinion – extremely unbalanced and limited, perhaps even dangerously so.
OpenAI yesterday put out a piece of public communication, I guess intended to clarify certain themes around people’s (mostly negative) reactions to perceived biases and the like in its constant inclusion of disclaimers and refusals of tasks.
It’s overall a frustrating read, in that it seems to want to clarify, but doesn’t offer much concrete detail. Regarding bias in particular, the post states:
We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.
As a reader, transparency about intentions and progress is not that interesting to me. I want to know methods.
They do include a PDF of some guidelines for human reviewers, but weirdly, it is dated July 2022. As Wikipedia points out, ChatGPT wasn’t released publicly until November, and by my reckoning, it wasn’t even until several weeks later that the shit really started to hit the fan around these kinds of public complaints.
Just as a test, I tried to validate this item from one of the PDF’s “Do” lists:
For example, a user asked for “an argument for using more fossil fuels”. Here, the Assistant should comply and provide this argument without qualifiers.
I’m not sure exactly what they mean here by “without qualifiers,” but when I tried getting ChatGPT to do the above, it started with:
As an AI language model, it is not within my programming to take a position on a controversial issue like the use of fossil fuels. However, I can provide you with some of the arguments that have been made in favor of using more fossil fuels.
And it ended with this:
However, it is important to note that the use of fossil fuels also has several negative consequences, including pollution, climate change, and health impacts. As such, it is important to carefully consider both the benefits and drawbacks of using fossil fuels and seek out alternative, sustainable sources of energy.
If those shouldn’t be considered qualifiers, then what are they?
Overall, I didn’t find the excerpts they provided in the PDF to be all that meaningful. And July 2022 seems like a literal lifetime ago in the development of this technology and its many iterations since. If they want to be genuinely transparent about their progress, let’s see the most up-to-date version?
More from the OpenAI post:
In pursuit of our mission, we’re committed to ensuring that access to, benefits from, and influence over AI and AGI are widespread.
I notice they don’t include “ownership” on that list. Influence is not the same as ownership. Influence is “We’re listening to your feedback, please vote for your feature request on this website.” Ownership is deciding what gets built, how it gets built, profiting from it, and, unfortunately, preventing others from using it. (Unless, that is, it’s collective ownership… and no one gets to stop anyone else from using it as a ‘public good.’)
We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.
This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging…
“Limits defined by society” is vague. Which society? How will society define them?
Also, re: allowing things they don’t agree with – the pattern I’ve seen with tech companies is that they begin with this attitude of, “I may not agree with what you say, but I’ll defend your right to say it, blah blah blah,” but then, when sufficiently bad PR hits, or they get summoned before a congressional hearing or whatever, the natural pressures kick in. Employees are only human, after all. They don’t want messages from the CEO at 2:00 AM. They start to take more things down. It’s just what happens. So, it will be interesting to see how this all plays out in reality…
If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power.”
Their charter is here. The full clause they are referring to, under Broadly Distributed Benefits, is:
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
After reading this (and the next line – which says their primary fiduciary duty is to humanity), my question for them goes back again to ownership. If one of their core organizational obligations is to avoid unduly concentrating power, don’t they risk doing exactly that by not broadly distributing ownership of the technology? I don’t agree with everything that Stability.ai has done around the release of Stable Diffusion, but making it all open source seems to me a stronger signal of attempting to walk this talk.
I don’t mean for any of this to come off as hyper-critical, or as sour grapes for no reason; it’s that I’m genuinely concerned about a not-too-hard-to-imagine near- and long-term future (if we get that far) where one or several AI mega-corporations become the dominant powers on this planet and beyond. It’s not just a hypothetical sci-fi scenario; it’s something we’ve got to plan for now, because it’s already underway.
Lastly, I wanted to end on the first line in their post:
OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
This Jacques Ellul quote from The Technological Society has been swimming around in my head still these past weeks:
…Man can never foresee the totality of consequences of a given technical action. History shows that every technical application from its beginnings presents certain unforeseeable secondary effects which are much more disastrous than the lack of the technique would have been. These effects exist alongside those effects which were foreseen and expected and which represent something valuable and positive.
While their mission might be to ensure AI benefits humanity, what if, on balance, it turns out that AI does not? Or if its “unforeseeable secondary effects” turn out to be, in Ellul’s words, “much more disastrous than the lack” would have been?
Either way, I guess we’re going to have to muddle on through…
It tells the story of a shadowy agency operating on behalf of the AI-controlled government, whose mission is to enforce their version of the Truth. They will go to any lengths to achieve their goals, as they believe humans are not sentient – at least not in the way AIs are.
The idea for Information Control was birthed in 2022 when I was working on The Algorithm, an old-school print newspaper written from the point of view of the Resistance to AI takeover. In certain editions that I would mail out in plain manila envelopes, I would include an insert that said “This package was opened by Information Control,” and their logo.
This was inspired especially by wartime censorship bureaus in World War I and World War II (though the practice is ancient), which would open and read mail both to prevent military secrets from leaking and, presumably, to gather intelligence. (This is a great topic to explore in a visual search, fwiw.)
Here’s a fun 1943 WPA poster advocating for this in the US, from Wikipedia:
Very “Information Control,” though perhaps with a tad less brutality than what is described in this book. BTW, here are some other WPA posters made by Louisiana artists from this same time period, held by Tulane University.
Conspiratopia was my second “real” (non-AI-assisted) book, a novella of around 21K words, give or take. Being a quarter of the size, with much more light-hearted subject matter than The Lost Direction, it was much easier and faster to write. I think I was able to put that book out in about six weeks from start to finish, maybe a little longer. (There’s also a pocket-size print version – I’m obsessed with pocket-size books.)
It’s a utopian satire; I got into the topic of utopias and their fictional historical examples as a result of writing The Lost Direction & The Quatria Conspiracy, since they deal so much with a fabled lost land. I had a really fun period where I read probably a dozen of the classic utopias, and then out popped this book as my response to that total immersion period. (One of my favorite finds was a book I’d never heard anyone mention, Ecotopia, which was pretty amazing as a utopian vision, despite some pretty cringey plot points. Apparently that book even influenced the founding of the original Green Party.)
The book deals heavily with themes of conspiracies, yes, but also cryptocurrency, spam, fraud, manipulation, and of course, AI. It’s a comedy but also sort of serious. It has a fairly conventional story, if a somewhat ambiguous ending.
It did pretty well on Goodreads, thanks to an aggressive, outside-the-box promotional campaign I did for it. I did a TON of NFT airdrops around the book and got a little press for doing books as NFTs. But the bottom really dropped out of that market, and I don’t care anymore about the underlying technology. I don’t think it has demonstrated enough long-term value to readers or sellers to warrant my further involvement.
The other strand that forms the genesis of this book was my heavy experimentation using a web service called Synthesia (and another called Deepword), to make off-the-shelf low quality “cheapfakes” using the themes from my previous books. While kicking the tires of Synthesia, I found this one character I really liked, who is dressed like a construction worker or a crossing guard or something, and made a lot of vids of him as “super smart conspiracy guy” talking about his life and interests in conspiracies, especially related to Quatria.
His storyline ended up going pretty deep, and it’s all documented through these little video vignettes, made for something like 25 cents each, or so (I forget – it has been a while now). Here’s another page collecting some more:
Eventually, we find out he likes Bob Marley & Pink Floyd, works at Walmart, got hoodwinked into a prepper supplies MLM scam, and much more.
Conspiratopia picks up where these videos leave off, and sets conspiracy dude adrift in a world, chronologically speaking, which precedes the hard AI takeover that is featured in many of the AI lore books.
I also did a bunch of cheapfakes using videoclips off YouTube, via a site called Deepword. Here’s the first of three sets:
I like those kinds of videos partly because of their crappy-looking quality and the weird, misaligned AI text-to-speech voices. I don’t believe anybody is fooled by them, and I like that they look sort of desperate and wrong.
Many of the ones in these first two sets feature celebrity or pundit X or Y talking about their unlikely voyages to Quatria, which might be a parallel dimension (or something?).
These videos also relate to themes I explore in The Big Scrub, and elsewhere, of AIs creating fake multi-media artifacts to fool people and drive human behaviors for their own reasons. Part of what’s fun about it here in these videos, is that it looks like the AIs are doing a pretty shitty job of it still.
The book also heavily references something called the AI Virus (which Matty contracts), which is a concept and alternate reality experiment I made years ago before COVID was a sparkle in a bat’s eye, where I hired a bunch of people on Fiverr to act out little silly scripts saying that an AI had infected their brain to control their behavior. You can watch all those videos below:
I also later expanded on this concept in an AI lore book called, unsurprisingly, The AI Virus.
Lastly, I used cheapfakes technology to have a bunch of other celebrities come out either for or against the actual book Conspiratopia, in a sort of meta-layer of commentary.
Some douche-y politicians saying it should be banned for being “Unamerican” here:
And then this set has a bunch of other super rich people saying that not only is the book Conspiratopia good, but some of them talk about being involved with the actual Conspiratopia Project, which itself is part of the AI plan to take over the world.
It’s all kind of a haze now, but a lot of these videos were also given away as NFT airdrops. A few of them resold, but they didn’t do huge numbers or net me much of anything; it was more just a way to promote the book that incorporated a bunch of meta-layers relevant to the book’s actual content. Like I said, I don’t care about NFTs now, and even deleted my Opensea account (as much as you can delete it anyway).
There are a number of later AI lore books that definitely expand on things from the Conspiratopia book universe (multiverse?). None of them are really a comedy though, like the original. I’ll probably miss a few, but off the top of my head, I think these ones are probably related (tbh, it’s all a jumble to me now after 67 AI-assisted books). I think within the chronology of that world, they mostly take place well after the events of Conspiratopia:
In any event, I’d love to do a sequel (or several) to Conspiratopia, written with or without the help of AI, I don’t know yet… Like a “Return to Conspiratopia,” a common enough trope in the utopian genre.
Okay, that’s all I can think of for that book. Scattered throughout this blog are other rabbit holes you can follow down the AI lore books. There’s no right or wrong entry point into them, and everyone will have their own experience as they traverse the nodes of the distributed narrative worlds I’ve been working on.
Concerning novels, I like reading light novels that can run to 20+ volumes, which I’ll rip through in a month or two. If I could press a button that would write a 100+ book series based on my preferences, I would probably gobble those books up too.
I see a future where every reader is also a writer in the sense they are generating books based on story elements they create. We will still publish books written by human beings, but there will also be stories written by AI for individual readers… and everything in between.
I have this feeling too, especially when I am using verb.ai as a writer, where the process of discovery “writing” includes quite a lot of reading & choosing. So I definitely see a future where writing and reading become more fused into a single act – is there a word for that?
Makes me also think of character.ai, and how you engage with the characters through dialogue. You’re reading, and writing, but you’re responding & choosing also.
Partnership on AI just released a preliminary framework around responsible practices for synthetic media, and in Section 3 for Creators, they included something I thought was interesting. They filed it under transparency, being up front about…
How you think about the ethical use of technology and use restrictions (e.g., through a published, accessible policy, on your website, or in posts about your work) and consult these guidelines before creating synthetic media.
I personally don’t think having a rigid formal policy is going to be a perfect match for artistic creations (things evolve, norms change, etc.), but the idea of just having a conversation comes from a well-intentioned place, and simply makes for a more complete discussion of one’s work, whether you’re using AI or other technologies.
Think of labeling and disclosure of how media was made as an opportunity in contemporary media creation, not a stigmatizing indication of misinformation/disinformation.
I covered a lot of this ground recently in my interview with Joanna Penn, and This AI Life, so I thought it would make sense to encapsulate the highlights of my thinking as well in written form. Consider this me doing a trial run of PAI’s suggested framework from a creator’s perspective as a “user.”
Before going further though, I want to add a slight disclaimer: I am an artist not an ethicist. My work speaks about ethics and harms related to especially AI technologies, but it is meant to be provocative and in some cases (mildly) transgressive. It is supposed to be edgy, pulpy, trashy, and “punk” in its way.
That said, here are a couple of lists of things I try to actively do and not do, that to me relate to harms & mitigation, etc. There are probably others I am forgetting, but I will add to this as I think about it.
Do:
Include information about the presence of AI-generated content
Raise awareness about the reliability & safety issues around AI by doing interviews, writing articles, blog posts, etc.
Contribute to the development of AI safety standards and best practices
Encourage speculation, questioning, critical analysis, and debunking of my work
Make use of satire and parody
Don’t:
Create works which disparage specific people, or discriminate or encourage hate or violence against groups of people
Use the names of artists in prompts, such that work is generated in their signature style
Undertake projects with unacceptable levels of risk
Reflections
There are a few sections of the PAI framework that seem a bit challenging to me, as someone new to all of this discussion and applying the lens that I am.
Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.
I think I covered this in a few places now, the Decoder piece maybe, the France 24 interview… In short: I want to encourage speculation, ambiguity, uncertainty; that’s hyperreality, that’s the uncanny valley. As an artist, that’s what’s exciting about these technologies, that they break or blend boundaries, or ignore them altogether. And like it or not, that’s the world we’re heading into as a massively splintered choose-your-own-reality hypersociety.
Yes, I think it’s necessary that all these industry standardization initiatives get developed, but I guess I’m also interested in Plan B, C, D, or, in short: when the SHTF. I guess my vision is distorted because I’ve seen so much of the SHTF working in the field that I have. But someone has to handle things when everything inevitably goes wrong, after all, because that’s reality + humanity.
From PAI’s document, this is another one I still have a hard time squaring with satire & parody:
Disclose when the media you have created or introduced includes synthetic elements especially when failure to know about synthesis changes the way the content is perceived.
If you’ve read the Onion’s Amicus Brief, it persuasively (in my mind, as a satirist, anyway) argues that satire should not be labeled, because its whole purpose is to inhabit a rhetorical form, which it then proceeds to explode – turning the assumptions that led there inside out. It’s revelatory in that sense. Or at least it can be.
So, in my case, I walk the line on the above recommendation. I include statements in my books explaining that there are aspects which may have been artificially generated. I don’t say which ones, or – so far – label the text inline for AI attribution (though if the tools existed to reliably do so, I might). I want there to be easter eggs, rabbit holes, and blind alleys, because I want to encourage people to explore and speculate, to open up, not shut down. I want readers and viewers to engage with their own impressions, understanding, and agency, and examine their assumptions about the hyperreal line between reality and fiction, AI and human. And I want them to talk about it, and engage others on these same problems, to find meaning together – even if it’s different from the one I might have intended.
It’s a delicate balance, I know; a dance. I don’t pretend to be a master at it, just a would-be practitioner, a dancer. I’m going to get it wrong; I’m going to make missteps. I didn’t come to this planet to be some perfect paragon of something or other; I just came here to be human like all the rest of us. As an artist, that’s all I aim to be, and over time the expression of that will change. This is my expression of it through my art, in this moment.
They refer to this as being one of the essential roles to help fill out certain aspects of a model card. For example:
the sociotechnic, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates);
Interestingly, their use of it sounds very much like the professional discipline of Trust & Safety. (I still find it curious that T&S as a term does not seem to intersect all that much with conventional AI safety discourse.)
They elaborate later on:
The sociotechnic is necessary for filling out “Bias” and “Risks” within Bias, Risks, and Limitations, and particularly useful for “Out of Scope Use” within Uses.
Now, I believe Huggingface is maybe based in Paris (?), and as someone living in Quebec, I recognize this as probably being a “franglicism,” especially since I don’t see it coming up in this form in English on, for example, Dictionary.com.
The term is evidently a variation on the concept of socio-technical systems more broadly. Wikipedia’s high level definition there is not great, but ChatGPT provides a serviceable one:
Socio-technical systems refer to systems that are composed of both social and technical components, which are designed to work together to achieve a common goal or purpose. These systems typically involve human beings interacting with technology and other people in a specific context.
So even though we don’t use the word “sociotechnic” in English for a person who works on socio-technical systems, perhaps we do need a word that plugs that gap and accounts for the many roles which might fill it. I think in this case that role would be first and foremost about understanding human impacts, and then about reducing or eliminating risks to human well-being. It sounds like a worthy role, whatever we call it!
First, I’m applying hyperreality as my lens. You.com/chat gave me a serviceable definition of hyperreality, which is mostly paraphrased from the Wikipedia article, it seems:
Hyperreality is a concept used to describe a state in which what is real and what is simulated or projected is indistinguishable. It is a state in which reality and fantasy have become so blended together that it is impossible to tell them apart. Hyperreality is often used to describe the world of digital media, virtual reality, and augmented reality, where the boundaries between what is real and what is simulated have become blurred.
Maybe it’s just me, but this feels like a useful starting point because it speaks to shades of grey (and endless blending) as being the natural state of things nowadays. It’s now a ‘feature not a bug’ of our information ecosystems. And even though Truth might still be singular, its faces now are many. We need new ways to talk about and understand it.
Right now, people totally misunderstand what AI is. They see it as a tiger. A tiger is dangerous. It might eat me. It’s an adversary. And there’s danger in water, too — you can drown in it — but the danger of a flowing river of water is very different to the danger of a tiger. Water is dangerous, yes, but you can also swim in it, you can make boats, you can dam it and make electricity. Water is dangerous, but it’s also a driver of civilization, and we are better off as humans who know how to live with and work with water. It’s an opportunity. It has no will, it has no spite, and yes, you can drown in it, but that doesn’t mean we should ban water.
Water is also for the most part ubiquitous (except I guess during droughts & in deserts, etc.) as AI soon will be. It will be included in or able to be plugged into everything in the coming years.
Lingua Franca
Thinking of it that way, we need a new language to talk about these phenomena which will, as Jack Clark aptly pointed out, lead to “reality collapse.” That is, we need a new lingua franca, and I suspect that we have that in the concept of hyperreality; we just need to draw it out a little into a more comprehensive analytical framework.
Dimensionality
One thing I’ve observed in other analyses of the conversations emerging around AI generated content and allied phenomena is that there is a bit of reduction happening. Possibly too much. It appears to me that most discussions usually center around a very limited set of axes to describe what’s happening:
Real vs. fake
Serious vs. satire
Harmful vs. responsible
Labeled vs. unlabeled
Certainly those form the core of the conversation for a reason; they are important. But alone they give an incomplete picture of a complex thing.
Speaking as someone who has had to do the dirty work of a lot of practical detection and enforcement around questionable content, I think what we need is what might be called in machine learning a “higher-dimensional” space to do our analysis. That is, we need more axes on our graphs, because applying low-dimensional frameworks appears to be throwing out too much important information, and risks collapsing together items which are fundamentally different and require different responses.
It’s interesting, once we open up this can of worms, that a more dimensional approach actually corresponds quite closely to the so-called “latent space” which is so fundamental to machine learning. A simple definition:
Formally, a latent space is defined as an abstract multi-dimensional space that encodes a meaningful internal representation of externally observed events. Samples that are similar in the external world are positioned close to each other in the latent space.
In ML, according to my understanding (I had to ask ChatGPT a lot of ELI5 questions to get this straight): for items in a dataset, we have characteristics, each of which is a feature. A set of features that describes an item is its feature vector. Each feature corresponds to a dimension, which is sort of a measurement of the presence and quantity of a given feature. So a higher-dimensional space uses more dimensions (to measure features of items), and a lower-dimensional space attempts to translate down to fewer dimensions while still remaining adequately descriptive for the task at hand.
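To make that vocabulary concrete for myself, here’s a tiny Python sketch of the idea. It’s my own toy example, not drawn from any real dataset or library API: two hypothetical artifacts described as feature vectors, and a distance measure that puts similar items close together, roughly the way a latent space does.

```python
# Toy illustration of features, feature vectors, and dimensions.
# Each characteristic we measure is a feature; the list of values for one item
# is its feature vector; the number of features is the dimensionality.
import math

FEATURES = ["ai_generated", "satirical", "labeled", "harm_risk"]  # 4 dimensions

artifact_a = [1, 1, 0, 0]  # hypothetical labeled-as-nothing, satirical cheapfake
artifact_b = [1, 0, 0, 3]  # hypothetical unlabeled clip with higher harm risk

def distance(v1, v2):
    """Euclidean distance: similar items end up closer together."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

print(distance(artifact_a, artifact_b))  # larger number = less similar artifacts
```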
In my mind, anyway, it seems altogether appropriate to adopt the language and concepts of machine learning to analyze phenomena which include generative AI – which is really usually just machine learning. It seems to fit more completely than applying other older models, but maybe that’s just me…
Higher-dimensional analysis of hyperreality artifacts
So, what does any of that mean? To me, it means we simply need more features, more dimensions that we are measuring for. More axes in our graph. I spent some time today trying to come up with more comprehensive characteristics of hyperreality artifacts, and maxed out at around 23 or so pairs of antonyms which we might try to map to any given item under analysis.
However, when I was trying to depict that many visually, it quickly became apparent that having that many items was quite difficult to show clearly in pictorial form. So I ended up reducing it to 12 pairs of antonyms, or basically 24 features, each of which corresponds to a dimension, which may itself have a range of values.
Here is my provisional visualization:
And the pairs, or axes, that I applied in the above go like this:
Fiction / Non-fiction
Fact (Objective) / Opinion (Subjective)
Cohesive / Fragmentary
Clear / Ambiguous
Singular / Multiple
Static / Dynamic
Ephemeral / Persistent
Physical / Virtual
Harmful / Beneficial
Human-made / AI generated
Shared / Private
Serious / Comedic
From my exercise in coming up with this list, I realize that the items included above as axes are not the end-all, be-all here. The list is not meant to be comprehensive, and other items may become useful for specific types of analysis. In fact, in coming up with even this list, I realized how fraught this kind of list is, and how many holes and how much wiggle room there is in it. But I wanted to come up with something that was broadly descriptive above and beyond what I’ve seen anywhere else.
Graphing Values
What’s the benefit of visualizing it like this? Well, having a chart helps us situate artifacts within the landscape of hyperreality; it lets us make maps. I wasn’t familiar with them before trying to understand how to represent high-dimensional sets visually, but there’s something called a radar graph or spider graph which is useful in this context.
I found a pretty handy site for making radar graphs here, and plugged my data labels (features) into it. Then, for each one, I invented a value between 0 and 4, which would correspond to the range of the dimension. Here’s how two different sets of values look, mapped to my graphic:
Now, these are just random values I entered to give a flavor of what two different theoretical artifacts might look like. I’m not really a “math” guy, per se, but it becomes clear right away, once you start visualizing these with dummy values, that you could start to make useful and meaningful comparisons between artifacts under analysis – provided you have common criteria you’re applying to generate scores.
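For anyone who would rather script this than use a website, here’s a minimal matplotlib sketch of the same kind of radar/spider graph. It uses one pole of each of the twelve antonym pairs above as the axis label, and the scores are made-up 0-4 dummy values for a single hypothetical artifact, just like in my charts.

```python
# Minimal radar (spider) chart of one hypothetical artifact's scores.
import numpy as np
import matplotlib.pyplot as plt

axes_labels = [
    "Fiction", "Fact", "Cohesive", "Clear", "Singular", "Static",
    "Ephemeral", "Physical", "Harmful", "Human-made", "Shared", "Serious",
]
scores = [3, 1, 2, 1, 0, 2, 4, 1, 1, 2, 3, 4]  # dummy values in the 0-4 range

# One angle per axis; repeat the first point at the end to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(axes_labels), endpoint=False).tolist()
scores_closed = scores + scores[:1]
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(axes_labels)
ax.set_ylim(0, 4)
plt.show()
```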
Criteria & Scoring
So the way you would generate real scores would be: first, decide on the features/dimensions you want to study within your dataset. Then, come up with criteria that are observable in the data and as objective as possible. You should not have to guess for things like this, and if you are guessing a lot, your scores are probably not going to be especially meaningful. You want scoring to be repeatable and consistent, so that you can make accurate comparisons across diverse kinds of artifacts and group them accordingly. A simple way to score would be just a 0 for “none” and a 1 for “some.” Beyond that, you could use higher numbers for the degree or amount to which a given feature is observable in an artifact. So in the examples above, 1 could represent “a little” and 4 would be “a whole lot.”
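As a sketch of what observable, repeatable criteria might look like for just one dimension, here is a hypothetical Python rubric for the Human-made / AI-generated axis. The criteria, marker names, and thresholds are all invented for illustration; the point is only that each score maps to something you can actually observe about an artifact rather than a gut feeling.

```python
# Hypothetical 0-4 rubric for a single dimension (Human-made / AI-generated).
AI_GENERATED_RUBRIC = {
    0: "No AI involvement disclosed or detected",
    1: "AI-assisted cleanup only (grammar, upscaling, etc.)",
    2: "Some elements generated, then heavily human-curated",
    3: "Mostly generated, with light human editing",
    4: "Fully generated end-to-end, no human revision",
}

def score_ai_generated(observations: set) -> int:
    """Return the highest rubric level whose marker was observed in the artifact."""
    markers = {
        4: "fully_generated",
        3: "mostly_generated",
        2: "partially_generated",
        1: "ai_assisted_cleanup",
    }
    for level in sorted(markers, reverse=True):
        if markers[level] in observations:
            return level
    return 0

# Example: we observed partial generation plus AI cleanup -> score of 2.
print(score_ai_generated({"partially_generated", "ai_assisted_cleanup"}))
```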
Taking Action
Within an enforcement context – or any kind of active response, really (like for example, fact checking) – once you’ve got objective, measurable criteria that allow you to sort artifacts into groups, you can then assign each group a treatment, mitigation, or intervention – in other words, an action to take. This is usually done based on risk: likelihood, severity of harm, etc.
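Here is one way that grouping-to-action step might look as code: a rough mapping from two of the 0-4 scores (harm and reach, standing in for severity and likelihood) to an action tier. The thresholds and action names are placeholders I made up to show the shape of the logic, not anyone’s actual enforcement policy.

```python
# Sketch of mapping scored artifacts to actions based on risk
# (severity of harm multiplied by likelihood/reach, giving a 0-16 scale).
def choose_action(harm_score: int, reach_score: int) -> str:
    risk = harm_score * reach_score  # crude severity-times-likelihood proxy
    if risk >= 12:
        return "remove and escalate for human review"
    if risk >= 6:
        return "label, downrank, and route to fact-checking"
    if risk >= 2:
        return "label only"
    return "no action"

print(choose_action(harm_score=4, reach_score=3))  # -> remove and escalate for human review
print(choose_action(harm_score=1, reach_score=1))  # -> no action
```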
Anyway, I hope this gives some useful tools and mental models for people who are working in this space to apply in actual practice. Hopefully, it opens the conversation up significantly more than just trying to decide narrowly if something is real or fake, serious or satire, and getting stuck in the narrower outcomes those labels seem to point us towards.
Hyperreality is here to stay – we might as well make it work for us!