One common feature of the Nippon TV and Anderson Cooper 360 videos I or my work appear in is the depiction of experts using tools to (hopefully, kinda) determine whether a given image was or wasn’t *probably* made using AI image generation.
While I understand the desire for tools to fish back out some measure of “certainty” from the murky depths of hyperreality, I think we’re embarking on a path which is potentially even more dangerous than mere generative AI on its own: off-loading our truth-telling capacities to AI.
The position we’re setting ourselves up for culturally here is:
- An AI creates an image (or other artifact)
- Another AI analyzes the image & returns a score indicating its likelihood of having been generated
- Based on the score and their threshold tolerance (i.e., what score ranges they allow), a platform or other provider decides whether to accept or block the content (a minimal sketch of this decision step follows this list).
- Since the majority of end users likely won’t run this filtering & analysis on their own, they are leaving yet another determination about goodness & truth in the hands of platforms. That’s a lot of trust to put in platforms – too much.
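Concretely, that pipeline collapses to something like the sketch below. This is a minimal illustration under my own assumptions: `detector_score` stands in for whatever detection model a platform might call, and the function names, the 0.0-1.0 score range, and the threshold value are all invented for illustration, not any specific vendor’s API.

```python
# A minimal sketch of the score-and-threshold pipeline described above.
# `detector_score` is a placeholder for whatever AI-detection model a
# platform might call; the name, signature, and 0.0-1.0 score range are
# assumptions made for illustration only.

def detector_score(image_bytes: bytes) -> float:
    """Hypothetical: returns an estimated probability (0.0-1.0)
    that the image was AI-generated."""
    raise NotImplementedError  # placeholder for a real detector

def moderate(image_bytes: bytes, block_threshold: float = 0.8) -> str:
    """Collapse the detector's score into a single accept/block decision.

    This is exactly the one-dimensional collapse the surrounding text
    warns about: everything about the image's context, intent, and
    effect is discarded in favor of one number and one cutoff.
    """
    score = detector_score(image_bytes)
    return "block" if score >= block_threshold else "accept"
```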
Putting aside all the problems with false positives and false negatives in these systems, we run again into the problem of using overly simplistic analysis to collapse content decisions into a single dimension of real vs. fake.
I noticed, for example, the language “AI-Generated Fake Image” on the Anderson Cooper 360 screengrab. When they show painted works on camera, I wonder if they use similar labeling, like “Human-Generated Fake Image”? My point is that the problem is far more diverse and multi-dimensional than we are currently, collectively, analyzing for.
An AI-generated image might be a simulated depiction, but it is still an objectively “real” artifact – it exists, it has contexts, subtexts, intents and effects, authors and audiences. Determining its method of origin is only one small piece of the puzzle, and if we focus on that piece too closely, we’re likely to lose sight of the big picture.
It’s not for nothing that my AI Lore books (and the 4-issue limited edition hand-printed underground newspapers which preceded them) feature a shadowy organization called Information Control, tasked, after the AI Takeover, with managing the proper flow of information in the AI-controlled human cybernetic society. They determine what is true and untrue, and they scrub out anything that contradicts their rulings. The books are cautionary fables precisely because this impulse is so strong in human nature: to blithely hand off to someone else the job of telling us what’s true and what’s false, what’s good and what’s bad…
If we want to use provenance tools like this, then instead of focusing narrowly on the question of real vs. fake, let’s at least broaden the scope and take in all the available signals we can – let’s even invent our own. The future is non-linear and multi-dimensional. We can’t get there from here unless we’re willing to take a few quantum leaps in understanding…
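For what it’s worth, here is a hedged sketch of what “broadening the scope” might look like in code: rather than collapsing an artifact to one real-vs-fake score, gather several independent signals and report them as separate dimensions. Every field name here is invented for illustration (the `c2pa_manifest` key merely gestures at C2PA-style provenance metadata); none of this corresponds to an existing standard or API.

```python
# A hypothetical multi-signal assessment. Instead of reducing an artifact
# to one number, keep several dimensions side by side. All names below
# are assumptions made for illustration.

from dataclasses import dataclass, field

@dataclass
class ArtifactAssessment:
    generation_score: float       # likelihood of AI generation (one signal among many)
    provenance_chain: list[str]   # e.g. C2PA-style edit history, if present
    source_reputation: float      # track record of the publishing account/outlet
    context_notes: str            # human-readable context: intent, framing, audience

def assess(image_bytes: bytes, metadata: dict) -> ArtifactAssessment:
    """Report each signal separately rather than collapsing them
    into a single accept/block number."""
    return ArtifactAssessment(
        generation_score=metadata.get("detector_score", 0.0),
        provenance_chain=metadata.get("c2pa_manifest", []),
        source_reputation=metadata.get("source_reputation", 0.5),
        context_notes=metadata.get("context", "unknown"),
    )
```

The design choice is the point: keeping the signals separate forces whoever acts on the assessment to confront the artifact’s full context, instead of outsourcing the whole judgment to a single score and a cutoff.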