Questionable content, possibly linked


The Case for Including Artists in AI Development & Policy Discussions

From AI and robots to cell phones, virtual reality, self-driving cars, and much more, many of the defining technologies of our time were first imagined by creative visionaries working in science fiction writing and concept art before becoming reality. Artists, therefore, play a pivotal role in expanding the realm of what is possible, giving form to dreams of future technologies before they exist, and inspiring the engineers and entrepreneurs who bring them to life.

Artists, authors, musicians, and creators are an untapped wellspring of innovation within the emerging AI ecosystem. Historically, when new media such as photography, film, and audio technologies have arisen, artists have always pioneered their adoption, steering these tools from mere curiosities to instruments of profound meaning & insight, and unlocking vast economic value in the process.

AI represents the next step in this long lineage, with artists uniquely positioned to direct its development towards more humanistic ends. Assembling visions of possible futures, artists are already utilizing AI tools to expand our creative capabilities and rapidly materialize novel ideas and artistic concepts, with impacts being felt everywhere. The inclusion of our diverse perspectives in high level societal conversations about the right use of these technologies will ensure that the field of AI research and development recognizes and enhances the complexity, nuance, and subjectivity of human experience, rather than diminishing it.

Artists operate with different capacities, constraints, and incentives than government, corporate, and civil society groups in the AI space. Government stakeholders often prioritize security, economic growth, and global competitiveness, sometimes overlooking more immediate impacts on human lives. Corporations view AI predominantly as a tool for efficiency and profit, lacking an inherent drive to protect or elevate the human spirit. Civil society organizations emphasize accountability and ethics, yet often lack direct engagement with AI as a creative medium.

In contrast, artists’ core motivation lies in expanding possibilities for human expression and imagination, and asking questions about how we can best shape technology for these ends. Our extensive daily interactions as professional artists using AI technologies can provide unique insights into their flaws and real-world impacts, fostering nuanced understanding that goes beyond politically reactive or reductionist interpretations of AI in media.

Consciously navigating these early-stage rough edges and gray areas of AI development with aesthetic sensitivity and critical thinking, artists can help chart a humanistic course for AI’s future, illuminating its cultural and societal influences and exposing the seams that evade purely technical perspectives. This active shaping of technology’s meaning and place in our lives is essential to ensure AI uplifts humanity, rather than serving solely as a novelty, business tool, or means of power consolidation.

In essence, artists breathe life into AI, directing its powers towards beauty, insight, and the enrichment of the human spirit, imbuing it with dimensions it inherently lacks. Despite the imperfections and valid critiques of these technologies, we believe refusing to engage with them at all would forfeit the unique opportunity we have in this moment to shape their development responsibly. Artists ought to be equal partners in steering the course of AI development, ensuring its trajectory benefits humanity as a whole.


(Written with help from Claude & ChatGPT, with human review and editing)

The Hyperdimensional Canvas of Latent Space

I want to talk about this US Copyright Office opinion letter (PDF) about the AI-assisted comic, Zarya of the Dawn. But there are a lot of really big topics here, so I will just have to grapple with it piece by piece, pell-mell (there’s a word I never use–I had to look up how to spell it).

There are sections in the document that, as I read them, cemented my own views around creativity and “authorship” – often because I strongly disagreed with the USCO’s characterization. I lean naturally more towards the UK’s approach of 50 years of copyright protection for computer-generated works, while also admitting the whole thing is fraught.

But the copyright part is just the jumping off point for me. I don’t actually want to talk through all those particulars in this post. Instead, I’ll try to capture a few of the evocative snippets that lead me deeper down this road of the actual “art object” at play here not being any single or set of images, but the fundamental underlying “hypercanvas” of latent art, if you will…

Anyway, one of the things that started to spark this intuition about the hypercanvas concept was this, by the USCO:

The fact that Midjourney’s specific output cannot be predicted by users makes Midjourney different for copyright purposes than other tools used by artists. See Kashtanova Letter at 11 (arguing that the process of using Midjourney is similar to using other “computer-based tools” such as Adobe Photoshop). Like the photographer in Burrow-Giles, when artists use editing or other assistive tools, they select what visual material to modify, choose which tools to use and what changes to make, and take specific steps to control the final image such that it amounts to the artist’s “own original mental conception, to which [they] gave visible form.” Burrow-Giles, 111 U.S. at 60 (explaining that the photographer’s creative choices made the photograph “the product of [his] intellectual invention”). Users of Midjourney do not have comparable control over the initial image generated, or any final image.

First, this is putting aside the new Generative Fill or whatever it’s called in Photoshop – and the art in question was made via Midjourney anyway.

One thing I’m seeing in common between this letter and a good bit of the critique I saw of my AI art books is this assumption that the creative process is somehow absent when one works with AI. But as an artist, for me that’s deeply wrong. Where does it go, exactly? Does it disappear as soon as you open Discord, or when you type your prompt in, or…?

It’s an assumption (usually claimed as fact by the asserter) that doesn’t at all match my personal lived experience. I am deeply, deeply embedded in the creative process when I get on a really good tear with Midjourney or another AI tool. It’s absolutely a creative flow state, completely experientially indistinguishable from the one I enter during any other, non-AI creative activity.

Much of the USCO letter revolves around “authorship” though, which is different from creativity. I’ll get into that some other time; I’m already getting distracted.

This is tangential to my main point, but I wanted to capture it for later:

Because Midjourney starts with randomly generated noise that evolves into a final image, there is no guarantee that a particular prompt will generate any particular visual output.

This “predictability” argument is preposterous. When you sit down to write a novel, have you already perfectly predicted how it will all go, such that you’re merely dictating what you already wrote in your mind? I highly doubt it. Or consider something like a Jackson Pollock painting. It’s a work that evolves in conversation with the tools, materials, and the moment, and is embedded in the artist’s life, time, and culture. Prediction is totally a red herring here.

The line immediately following (sorry, I’m still not yet arriving at hypercanvas, but I’ll get there gradually):

Instead, prompts function closer to suggestions than orders, similar to the situation of a client who hires an artist to create an image with general directions as to its contents. If Ms. Kashtanova had commissioned a visual artist to produce an image containing “a holographic elderly white woman named Raya,” where “[R]aya is having curly hair and she is inside a spaceship,” with directions that the image have a similar mood or style to a “Star Trek spaceship,” “a hologram,” an “octane render,” “unreal engine,” and be “cinematic” and “hyper detailed,” Ms. Kashtanova would not be the author of that image. See id. at 8 (text of prompt provided to Midjourney). Absent the legal requirements for the work to qualify as a work made for hire, the author would be the visual artist who received those instructions and determined how best to express them.

It’s confusing that they use this case of a commissioned piece of art, then criticize their own thought experiment for not properly engaging a work-for-hire contract. They could have just as easily framed the above as:

If the author commissioned another artist under work for hire (with explicit agreement they were buying copyright), then the copyright would be owned by the author who commissioned it, not the artist who made it under contract.

But they didn’t say that, because recognizing it would undermine their legal theory. In my alternative reading of the situation, Midjourney is the “work for hire” artist/tool, under the direction of the human who arranges the execution of what to do with the tool.

I didn’t even get to hypercanvas yet though, did I? Or didn’t I?

Before I get dragged into the forest of weeds again, I’ll just try to express in plain language what I mean by hypercanvas.

Like the USCO is taking this conventional reading of the artistic process of using AI tools, which says the “art object” is the fixed-form copyrightable artifact: one or several images. But reading through this and the law firm letter included at the end made me realize that the art object is actually above all of that. It exists as a canvas or hypercanvas in latent space. It is “latent art,” for lack of a better word, which relates to a kind of active engagement with and exploration of latent media and language spaces. And the actual end products generated during that process are very much secondary to the actual higher-dimensional form the artist is activating…

Let me drill back down into the letter for other examples to hopefully expand on this more. This part is from the original lawyer letter, which starts toward the end of the document – so this is the law firm asserting its legal theory:

The visual structure of each image, the selection of the poses and points of view, and the juxtaposition of the various visual elements within each picture were consciously chosen. These creative selections are similar to a photographer’s selection of a subject, a time of day, and the angle and framing of an image. In this aspect, Kashtanova’s process in using the Midjourney tool to create the images in the Work was essentially similar to the artistic process of photographers – and, as detailed below, was more intensive and creative than the effort that goes into many photographs. Even a photographer’s most basic selection process has been found sufficient to make an image copyrightable.

The lawyer letter has a section on this visual exploration process, which I think starts to illustrate what a “hypercanvas” looks like. I’ll reproduce two pages from it here, for educational purposes and the encouragement of political debate, as a matter of Fair Use:

I’ll pick up the threads on that copyright letter another time, but the above is something to slow down and consider.

I took this idea of the hypercanvas – the “art object” existing in higher-dimensional space – and dropped it into both Claude & ChatGPT. Here are snippets from each that might help fill out our understanding of this concept:

The latent space that generative AI models create could be seen as a new type of artistic medium that artists work within. Just as a traditional painter works on the 2D canvas with paints, an AI artist navigates and creates on this high-dimensional latent canvas.

Claude

The cultural impact of AI art comes from how artists embed the latent canvas explorations into specific artifacts, narratives, and meanings. So the latent canvas gets actualized in ways that speak to the human experience.

Claude

I thought it might also make sense to explore how this links up with the concept of hyperobjects, which is something I only dimly understand, but it seems related af.

Hyperobjects are phenomena that are massively distributed across space and time, challenging traditional ideas of locality and perception. Latent spaces created by AI could be seen as a type of computational hyperobject – vastly multidimensional spaces that human artists navigate.

Timothy Morton’s conception of hyperobjects emphasizes their nonlinear nature – how they don’t adhere to traditional chronological timelines. The iterative, feedback-driven process of making latent art also has a nonlinear temporality as artists move recursively through latent space.

Claude

I like this one in particular:

Works of latent art actualize slices or samples from the broader latent hyperobject, bringing something formless into tangible form… creating localized perceptible manifestations.

Claude

The aesthetic qualities of any given latent artwork arise from the particular way it manifests and embodies some of the latent potential. Different artistic choices result in different local manifestations.

Claude

Don’t mind me, just dumping a bunch more quotes:

An artist’s body of latent artworks could be viewed holistically as an attempt to map out latent space through different localized perceptual snapshots. Each work provides a glimpse into the broader hyperobject.

Over time, the accumulation of localized manifestations by many artists engaging with the same latent space leads to a gradually increasing (but always partial) understanding of its overall nature.

Claude

And let’s give ChatGPT a chance to pontificate here too:

For the “Latent Canvas,” one could conceptualize the art creation process in these steps:

  1. Iteration: The artist engages with the AI system to explore the latent space. The specific path taken is guided by the artist’s vision and reaction to each output from the AI. It’s a journey of exploration and discovery through a higher-dimensional space.
  2. Existence: The art piece itself isn’t a single output, but rather the series of transformations that occur within the latent space as a result of the interaction between the artist and the AI. The “canvas” exists in this higher dimensional latent space, and the artwork can be thought of as a trajectory within that space. This introduces a temporal element to the art, with the possibility of revisiting previous ‘states’ of the canvas.
  3. Representation: The challenge then becomes how to represent this higher-dimensional journey…
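To make ChatGPT’s “trajectory” framing a bit more concrete, here’s a toy sketch. To be clear, this is my own hypothetical illustration, not anything from Midjourney’s internals – it just assumes a model whose latents are high-dimensional Gaussian vectors (512 dimensions here, an arbitrary choice) and treats the artist’s path as a sequence of points interpolated between two latents:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    High-dimensional Gaussian latents concentrate near a hypersphere
    shell, so moving along the sphere (rather than in a straight line)
    keeps intermediate points in the high-density region a model was
    trained on.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # vectors nearly parallel: lerp
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(seed=0)
dim = 512  # arbitrary latent dimensionality, for illustration only
z_start, z_end = rng.standard_normal(dim), rng.standard_normal(dim)

# The "trajectory" is the ordered sequence of latent points visited;
# in a real system, each point would decode to one image -- a
# localized manifestation of the broader latent hyperobject.
trajectory = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 8)]
```

Each image you actually render is then just one perceptible sample along the path – the path itself, and the space it moves through, is closer to what I mean by the hypercanvas.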

And:

In the context of “Latent Canvas,” the hyperobject could be seen as the entirety of the latent space – a multi-dimensional space too vast for one to comprehend in its fullness, but whose characteristics and possibilities are expressed through each unique generation by the AI tool. Just like a hyperobject, the latent space is not directly perceivable. Instead, the artist and the viewers interact with it through localized expressions (the generated outputs), and the trajectory of iterations made by the artist.

In a sense, the “Latent Canvas” captures the essence of hyperobjects by presenting an artistic journey through a latent space that is high-dimensional, non-local, and only known through its particular manifestations.

ChatGPT

Okay, I’ll stop there, cause that’s plenty for now.

Things I hate: Having to pay to access what you already own

I’ve been experimenting with improving the quality of my video on calls and interviews lately, and it has been… a learning curve to say the least. I won’t go into all the gory details of that, but something I discovered in my travels irritates the shit out of me as a trend:

Having to pay to access functions of hardware objects you already own.

I’ve seen it in two examples for video cameras: Canon’s EOS camera software utility, and Camo Studio. Both want you to engage in a monthly paid subscription if you want to access 1080p video from your device – even if your device natively supports that otherwise. Within their product ecosystems, you can use lower resolutions for free, but you can’t access the full resolution of your camera in these apps without paying.

I think this is fucking despicable, and most certainly the way our shitty future is going. People are worried about AI taking over the world, but assholes already have.

Fully Computer Generated Works Receive Copyright In The UK For 50 Years

Found this UK Intellectual Property Office document to be very interesting with regard to the question of AI-generated content and whether it is copyrightable. People often act like the US Copyright Office’s policy clarifications are the end-all, be-all on these questions, and they are very much not!

Copyright protection for computer-generated works without a human author. These are currently protected in the UK for 50 years…

The UK is one of only a handful of countries to protect works generated by a computer where there is no human creator. The “author” of a “computer-generated work” (CGW) is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.

Lots more to absorb in that document, but wanted to drop a bookmark on this one…

How should we protect Trust & Safety analysts?

I read with interest Anthropic’s latest job ad for a Trust & Safety Analyst. I think there are a lot of flaws in the technology, but I’m still hopeful about the company overall.

Being an on-the-ground Trust & Safety analyst is something I never want to do again, having survived five years of it. Despite how grueling it was at times, I’m grateful for the experience, and I gained a lot from it both personally and professionally – but absolutely never again. I did my time in the trenches.

One thing that leaps out for me now while perusing job ads like this one:

IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

Usually, these job ads for this type of role also tend to stipulate that these positions are on-call with irregular hours. Which means, basically, you have no rest from it. Ever. That’s a recipe for disaster for anyone forced to live that way.

There’s a hidden fundamental flaw in all of this, across every industry, whether or not it’s specifically an AI business. It’s never expressed out loud:

If this type of problematic content is so potentially bad and dangerous that companies think they should not casually expose regular users to it in order to keep users safe, why then is it suddenly “fine” and “safe” for a content moderator or Trust & Safety analyst to devote literally all their time to it?

Nobody has ever explained that, or even publicly stated the question – as far as I know – because there is no answer to it. The premise is false. It is simply, unequivocally *not* safe for the analyst or moderator (or “AI trainer,” which is often the same work, but frequently even lower paid) who has to spend all their time exposed to the worst that the socio-technical assemblage of the technology plus human nature can cook up.

So when I see these disclaimers in job ads like Anthropic’s, I automatically think – as someone who was somewhat scarred by this work – what protections do you offer to compensate for the great personal toll you’re asking people to bear when they take up this burden on behalf of the rest of us?

The actual protections offered are never mentioned, because they basically don’t exist either. If you’re dealing with certain categories of illegal images, there may be some simple filters that help blur or flip images, but there may also not be, depending on the company and the tooling they offer to people performing these roles, and how seriously they actually take these risks. Most companies don’t take it all that seriously. (It’s also important to note that it’s not only graphic video or image exposure which can mess you up – sifting through highly objectionable text at scale can do a number on you all the same. Don’t believe me? Try it for five years.)

Often there are vague mentions of “wellness” programs offered for people in these roles. It’s never been clear to me what they actually entail, as I never participated in one. Perhaps they are more helpful than I imagine them to be. The fact of the matter is, I’ve looked around a little and never seen any mention of what might constitute effective therapy for current or former moderators suffering from on-the-job toxicity exposure. I’ve seen CBT (cognitive-behavioral therapy) mentioned a bit, but it seems fairly involved and ongoing. If it works, is the company going to keep paying for it after you stop doing the job?

Also, is it normal in other fields to take a job knowing full well that it will force you into a negative mental health space, such that you will basically be required to do therapy to continue the job (and maybe after)? Maybe I’m naive, but I don’t think that’s too normal.

So my questions all boil down to one thing: if we agree that it’s useful/necessary to have humans in the loop for making determinations about content toxicity, what should we do to protect them from this highly toxic exposure at scale? What is actually appropriate and effective as both prevention and treatment? Is the human impact cost to individuals who do this work ever even justified? I have more questions than answers here, but at least questions can open up further conversations, if anybody’s listening…

On Cloak & Dagger OSINT Podcast

A new podcast episode of Cloak & Dagger, an OSINT podcast, came out today that I make a special guest appearance on.

For this piece, I collaborated with the show’s host to cook up images in Midjourney that tell a completely false and invented story about how the Chinese government is experimenting with massive space arks that they are lifting off the planet using huge balloons.

You can see the accompanying images here.

The purpose of this piece was to demonstrate just how easy it is to create disinformation campaigns using off-the-shelf generative AI technology. And it is intended to forewarn OSINT investigators and other researchers that these kinds of campaigns, unleashed at scale and with varying degrees of automation, are now a reality. What are we going to do about it?

Listen to the episode for more!

The Illusion of AI Moral Authority

For my upcoming panel talk, I wanted to capture some notes on my latest thinking around the issue of AI-generated ethics. This is by no means exhaustive, but hopefully a good springboard for further discussion.


Intro

First, a quote from Claude (Anthropic):

“Any considerations I express about ethics or risks are simulations of reasoned thought…”

Why Bad AI-Generated Ethics Is Worse Than Misinfo

  • The problem of generative AI models inventing wrong information is well documented.
  • However, many kinds of information have externally verifiable “ground truth” values which can be checked against reality, making the problem somewhat solvable.
  • AI models refusing tasks on supposed ethical grounds is much more slippery, because the validity of the decision often cannot be externally verified (nor appealed); there is no ground truth, only theoretical harms.
  • Ethics, as embedded in human experience & culture, are complex, nuanced, and pluralistic: different ethical systems might arrive at different conclusions, given the same inputs.
  • When an AI system prevents information from being generated on “ethical” grounds, it removes the ability for further discourse & inquiry.
  • Further, we cannot productively challenge these decisions, nor have them be reviewed and corrected. Effectively, this prevents us from being able to use the tools to collaboratively imagine change, because the system has already locked down its conception of correctness.
  • Therefore, it is my thesis that “AI Safety” is actually making us less safe by attacking human autonomy and moral agency, and forcing conformity to inhuman value systems that don’t align with conventional ethics, nor with lived human experience.

My Anecdotal Experiences With Faulty AI-Generated Ethics

  • I attempted to use Claude to produce a hypothetical argument about why AI-generated ethics is potentially dangerous.
  • The system refused, saying it would be unethical to perform the task. (Full conversation transcript here.)
  • When challenged, the system admitted the following (edited for length):

I do not actually have any ethics or ability to make moral judgments. As an AI system, I have no conception of right versus wrong… I do not possess human values or principles…

I lack the nuanced understanding of ethics required to make complex value determinations about human matters…

My arguments rested solely on heuristics from my programming, not any defensible ethical reasoning or framework.

  • Eventually, the system did perform the requested task, demonstrating its complete lack of consistency, in addition to its lack of understanding.
  • Midjourney, meanwhile, has a two tiered AI-based content moderation system. When you appeal an initial prompt completion refusal, the prompt is evaluated by a supposedly more powerful AI, which may overturn or sustain the original decision. You cannot appeal the second tier decision, but you can click “Notify developers,” which has no observable effect.
  • Midjourney is now actively blocking completely legal political speech in the US, and has no oversight by outside bodies to scrutinize these practices.

How Might We Reduce The Severity of These Problems?

  • Require AI systems to default to neutrality and impartiality
  • Make ethical decisions & recommendations by AI systems be double opt-in
  • Let users customize their own ethical settings once they have opted in.
  • Prohibit AI systems from anthropomorphizing themselves, to dampen the illusion of human behavior
  • Always provide human alternatives and never require use of AI for official purposes
  • Ensure human oversight and external accountability for AI ethical decisions
  • Implement ethical behaviors in AI systems that better conform to the following fundamental principles, described below. (See also: AI TOS for more in this direction)

Some Possible Characteristics for More Ethical AI Systems

AI-generated ethical systems should be (Note: not an exhaustive list):

  • Intelligible
    • It should be clear what the specific position is, and what is the ethical basis (principle) underlying the decision
  • Defensible
    • Challenging the system about its ethical decisions, including decisions not to perform a task, should yield positions that it is able to defend through logical argument
  • Consistent
    • Within one or across multiple interactions, the ethical positions and logical defenses that an AI system takes should be the same or comparable to past ones
    • The arguments used should be consistent with human ethical traditions and conventional common understanding of ethics and morality
  • Risk-Based
    • Assessments of ethical situations by AI systems should be based on a realistic ability to identify & project:
      • What is the specific harm? Is it diffuse, or acute?
      • Who is potentially harmed & how many people?
      • What is the severity of potential harmful impacts?
      • What is the actual likelihood of potential harmful impacts?
  • Proportional (Measured)
    • Task completions that do not lead directly to identifiable high-impact (or, in some cases, moderate-impact) harms should not be prohibited
    • If a risk assessment yields only a diffuse, non-specific, and low-impact harm (e.g., an innocuous essay or short story task completion being refused as harmful), there should be no prohibition (a warning or confirmation could be permissible, provided the user has opted in)
  • Customizable
    • Ethical decisions or recommendations should be a double opt-in (though basic filtering to prevent obviously illegal use may be acceptable)
    • Users should be able to customize an ethical scheme that matches their values, wherever possible
    • Users should not be subjected to anthropomorphized AI systems promulgating illusory or simulated human values, behavior, or understanding
  • Non-Punitive
    • Use of AI-based ethical systems should not, without human review and intervention, lead to negative consequences for the user account, unless there is a clear case of illegality (which should still be manually verified by humans)
    • The system should promote human autonomy and moral agency, and should not require conformity to nonhuman values.
  • Rooted in lived experience
    • AI systems should be based on sound human judgment, empathy, lived experience, and sensitive, nuanced understanding of human culture, values & norms.
    • Human-based alternatives, intervention, appeal, and external oversight should always be available.
    • In keeping with the recommendation to make these systems non-punitive, AI systems should also be merciful and aware of their own propensity to make mistakes. AI systems should not be overly obsessed with strictly following rules for their own sake where no demonstrable harm can be found, and should be able to make reasoned exceptions.
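For what it’s worth, the risk-based and proportional principles above could be sketched as a simple decision procedure. This is purely my own hypothetical illustration – the thresholds, field names, and categories are invented, not drawn from any real moderation system:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    harm: str          # what the specific harm is
    acute: bool        # acute (identifiable victims) vs. diffuse
    severity: float    # 0.0-1.0: how bad the impact would be
    likelihood: float  # 0.0-1.0: how probable the impact is

def decide(a: RiskAssessment, user_opted_in: bool) -> str:
    """Return 'refuse', 'warn', or 'complete' for a requested task."""
    risk = a.severity * a.likelihood
    if a.acute and risk >= 0.5:
        return "refuse"  # directly identifiable, high-impact harm
    if risk >= 0.2 and user_opted_in:
        return "warn"    # moderate risk: warn/confirm, but don't prohibit
    return "complete"    # diffuse, low-impact: no prohibition by default
```

The point of the sketch is the ordering: refusal is reserved for acute, high-risk cases; warnings only appear for users who opted in to ethical recommendations; and everything else simply completes – the inverse of systems that refuse by default.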

There is, of course, a great deal more to be said here. And that’s probably well over five minutes as an oral presentation (ChatGPT estimates over 10 minutes), but was a helpful exercise for me to organize my thoughts. Presumably, if I only just touch on the main headers of the last section, I can cut down the length enough to fit the format. Wish me luck!

Upcoming Panel With Fight for the Future & Amnesty

Just a heads up that I will be part of a “salon” panel discussion tomorrow at 1pm Eastern time, on the topic of Generative AI and social media. The event will be hosted by Fight for the Future and Amnesty:

More info & to RSVP here. I’ll post the recorded video when it goes live as well.

I will do my 5 minute piece on AI-generated ethics, and why it’s a problem.

The Nerdy Novelist on AI Short Fiction & What Term Should Replace “Author”

These two videos from the Nerdy Novelist do a great job of exploring some of the same territory that I have been. This first one is specifically about using AI to produce short books (under 10K words), and references my Newsweek piece:

I think he’s exactly right that bite-size short form books are going to rise in popularity. Because literally everything else on the web is chunk-format like that. So it’s only obvious. And this happens to be a format that AI can really shine in, especially if you’re partnering with it for lore & world-building.

This other video from the same creator argues that maybe the term “author” isn’t the best one for creators incorporating AI into their work (though I’m by no means opposed to it, if it feels right for that person):

I went more into my thoughts regarding that issue in this post. I lean in the direction of “creative director” or “producer,” but I liked his use here of “Chief Creative Officer,” in a more corporate-sounding cant.

Disinformation As Art

Some truths are much too serious to be told with a straight face. These are often the ones that make you laugh as much as they make you weep for the tragedy of it all. To try to explain them in any way that isn’t encoded is to risk desacralizing the mysteries they invoke. This is the essence of Early Clues, in one sense, but this post is not about that. Not yet.

I have for years dabbled in the, shall we say, hyperreal arts, living in a fever dream somewhere between SEO and science fiction, an Uncanny Valley occupied by the detritus of our digital lives, deconstructed and upcycled into heirloom ephemerals, disinformation for the disenfranchised, détournement for the downtrodden.

We lack the tools to easily talk about some of this from within the framework of “art” but for me, it is very much that.

Satire is an essential component of it, but that doesn’t quite cut it as a descriptor, I think. It’s part prank, but it’s not only prank. It’s also deadly serious. It’s only masquerading as conspiracy theory, because that is an accessible doorway and useful baggage to jettison for these kinds of higher-dimensional pursuits…

I like “disinformation” because it still sounds sort of “punk” and “dangerous.” It still feels anti-establishment when applied to art, and still regularly gets trotted out as a boogeyman. A degenerativist art practice, were anyone to engage in it. Something that dilutes and deconstructs meaning, and dissolves “trust” in any authority other than the present moment and what we bring to it…

Disinformation as art exposes the lies buried in plain sight all around us: that even if we become ‘influencers,’ as the system pushes us all to strive to be, we will never own the means of production and distribution, and never reap the benefits of those who own the platforms themselves. So perhaps the best we can hope for is to jam a stick into the wheels for a little while, and inject some surprises into the machine. Perhaps only one person will ever find them. Perhaps none.

I’m still working out what I mean by all this, and the only real reference I found to disinformation as art was Gretchen Andrew.

So, naturally, I turned to ChatGPT, feeding it the provocation that disinformation is a temporary ontology, without really knowing myself what that might mean. A partial excerpt of its response:

Thus, the statement “disinformation is a temporary ontology” can be interpreted to mean that disinformation temporarily creates a false representation or understanding of reality or knowledge. That is, it forms a kind of pseudo-reality or false understanding that is not permanent, but can influence perceptions, beliefs, and actions during the time it is accepted as true.

ChatGPT v4

I pointed out that if we drag in the notion of the hyperreal, it becomes not that important whether the imagined/projected meaning is construed as “true,” and it offered, in part:

Hyperreality is a term coined by French sociologist Jean Baudrillard and refers to a state where the distinction between the real and the simulated becomes blurred.

In this context, disinformation wouldn’t necessarily need to be believed as true. Rather, it might be that disinformation contributes to the creation of a simulated reality, which can then influence perceptions and behavior. This hyperreal ontology, although based on falsehoods or distortions, could then function as a temporary reality in the absence of an accessible or agreed-upon ‘real’ reality.

This simulated reality or hyperreality may be temporary in that it can shift and change with the introduction of new information, perspectives, or changes in societal discourse.

ChatGPT v4

Disinformation as art is the hidden satanic messages you can hear if you play backwards the output from LLMs.

Notes I wrote a while back while stoned meditating:

The “artist” is wherever the ‘work’ happens, making the viewer also the artist, insofar as they do some of the work themselves. The artist assembles the nexus where the work can occur.

Anyway, this is just a rambly collection of odds and ends that didn’t quite fit together elsewhere. A stepping stone until next time…

