Tim Boucher

Questionable content, possibly linked

Listening As A Creative Act

I’ve written about this before – I don’t know where and don’t feel like searching for it – about how, when working with generative AI, the role of the artist (I don’t like the word “creator” here, for a variety of reasons) becomes something like that of the First Viewer. Or First Reader, or First Listener, or First Whateverer. Discoverer.

If anyone can create something similar with gen AI (which… I’m not really sure is democratizing, so much as it is a flattening & homogenization – to be truly democratizing, I think it would have to honor human uniqueness a great deal more than it does – rather than forcing all outputs into a rather constrained, if sometimes pretty box), then the question becomes almost less about what was created and more about the who and the context of the Act of Discovery. What drove this person there? How did they seek it out? What did they do when they found it? How does sharing it with others change it?

There’s that line in Billy Joel’s ‘Summer, Highland Falls’ (from the excellent Turnstiles album) that goes:

And I believe there is a time for meditation
In cathedrals of our own

Sometimes sharing our own private world-building with others can be richly rewarding. Other times, it can be like opening up your private mental-emotional life and its secret signs and signifiers to a bunch of strangers with bad intentions and grabby hands. (I’m still wrapping my head around that phrase btw:)

In each case, the sign can be broken into two parts, the signifier and the signified.  The signifier is the thing, item, or code that we ‘read’ – so, a drawing, a word, a photo.  Each signifier has a signified, the idea or meaning being expressed by that signifier.  Only together do they form a sign.  There is often no intrinsic or direct relationship between a signifier and a signified – no signifier-signified system is ‘better’ than another.  Language is flexible, constructed, and changeable.  de Saussure uses the word ‘arbitrariness’ to describe this relationship.

Anyway, there’s a concept in Jungian psychology which has always interested me as an artist: active imagination. In that context it doesn’t mean that your imagination is working too hard. It means that you actively engage, through waking states and physical acts (like artwork or journaling), with what is perceived to be the contents of the subconscious mind as represented through dreams, visions, etc.

There are many different ways of doing that, but the whole thing cleaves very close to how I’ve always used AI. It’s been an exploration of the technologies themselves and their raw limits and capabilities, for sure, but also an expression of the parallel, deeper exploration of the realms of the self and other that I’ve been working on through artwork and storytelling for decades.

It’s why I don’t care at all about the criticism that “using AI in art makes it not art,” or makes it “not yours” because you didn’t “create” it, etc. First, I’ve been thinking about this in terms of Tolkien’s concept of subcreation, but I won’t get into it because this is already digressive enough and it has to stop somewhere. Second, none of those divisions, categories, and labels even exist in my mind when I get into that flow state and everything is working, and you’re getting results from the machine that match what you’re after well enough to proceed on to the next part, the next step, the next try. The exploration goes on and on.

To me it’s a deep and extremely creative Act of Listening. You listen for the small voice, you peer through the dark and find the little light, and you keep going. You don’t try to explain it to yourself, though everyone demands you do it for them – if they can even be bothered to care. And why should they? It’s your world, your subcreation – why even take the risk of letting them in? Why not keep it locked up tight and tidy and never let anyone else’s ships sail those inner seas and sully those waters with their unwelcome waste products?

But I think the answer is that we have to respect the active-creative listening of others as well. And to share deeply is to enrich not just one’s own listening, but that of others too. Not all are listening, even fewer are whatever whatever. Don’t reduce listening to merely a passive act, and the rest will take care of itself. Seek. Find. Invoke. Create. Repeat.

So for me, whether it has been for bookmaking or musicmaking or other kinds of internet merrymaking, using AI has always been a tool of this emerging brand of active-creative listening, a kind of listening that bears fruit, that invokes a new thing into existence, which has the potential to become a touchpoint not just for oneself but for who knows how many countless others who are also sitting at home listening, and waiting for their sign.

Idea: Exposing Journalistic Trails To Re-Build Trust

A website called Editor & Publisher has an article about how journalists are already using AI, so newsrooms need a policy to guide proper use. Skimming that gave me an idea…

What if part of journalism became a kind of record-keeping trail documenting the actual research that led to a given piece? What if that even included detailed notes from AI prompts and results which contributed to it? I recognize that much of what goes into developing a story might be confidential: talking to sources off the record, or following trails that lead to dead ends. Or having an editor axe certain parts of what you worked on – or sometimes an entire article. But what if that forensic trail could at least mark entries like [Source Protected] or [Redacted By Editorial], or, for AI prompts that veered off topic into personal affairs, have sections marked [Redacted for Personal Data Protection]?
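Just to make the idea concrete – and this is purely a hypothetical sketch, not any existing standard; all the names and fields here are invented for illustration – a research-trail entry with those redaction markers might look something like:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical redaction markers, matching the examples above
SOURCE_PROTECTED = "[Source Protected]"
REDACTED_EDITORIAL = "[Redacted By Editorial]"
REDACTED_PERSONAL = "[Redacted for Personal Data Protection]"

@dataclass
class TrailEntry:
    """One step in the research trail behind a published piece."""
    kind: str                        # e.g. "interview", "ai_prompt", "document"
    content: str                     # the note, prompt, or result
    redaction: Optional[str] = None  # marker shown when content can't be public

    def public_view(self) -> str:
        """What readers see: the redaction marker, or the content itself."""
        return self.redaction if self.redaction else self.content

# A toy trail for an imaginary story
trail = [
    TrailEntry("interview", "Notes from a confidential source",
               redaction=SOURCE_PROTECTED),
    TrailEntry("ai_prompt", "Summarize the council's 2023 budget filings"),
    TrailEntry("ai_prompt", "Off-topic personal query",
               redaction=REDACTED_PERSONAL),
]

for entry in trail:
    print(f"{entry.kind}: {entry.public_view()}")
```

The point being that readers would see the shape of the research – how many steps, of what kinds – even where the contents themselves had to stay hidden.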

Sure, there would be technical questions to resolve, but there already are if we look into cases of questionable AI-assisted reporting like, for example, Margaux Blanchard. Right now, we have little to no transparency into cases like that, and this does nothing but further erode trust in the institutions caught in those webs. I’ve also seen, in coverage of my own work, plenty of misreporting and completely false representations of “facts.” If we had more complete forensic tracking of the development of those stories (including, for example, archived versions of the pieces they were based on, and their metadata), it would be easy to identify where the errors were introduced.

Of course, I’ve rarely seen anyone address the underlying question and assumptions: does being a trusted brand in media actually improve your bottom line? Is “trust” essential to repairing what is essentially a system-wide failure of media business models? I’m actually not so sure; it might be a puzzle piece, but it’s not going to stem the tide of the way things are headed… but is something like this still worth a try?

EDIT:

Here’s what I realized after writing this: people don’t even read the article, so why would they dive into some kind of forensic trail from which an article was composed in the first place?

Who Gets To Say What “World Music” Is?

Have been swirling around this topic for quite some time in my current AI music project: who gets to decide what world music is? There’s a good David Byrne essay from 1999 here, but the long and short of what I’ve been exploring through all this is that we all live on the “world,” so why is music from my country simply called “music,” while music from your country is called “world music”? Do I not live in the world? What are my rights here? What are my obligations?

While I’ve seen a lot of talk about protecting artists’ rights (which I agree with, while also generally thinking training is Fair Use as long as it’s transformative), I’ve seen less about participating in and contributing to the common collective cultural inheritance of all humanity throughout history. The entirety of recorded human knowledge and experience and culture transformed through time and space, digitized, turned into an LLM/image/video generator, etc. (Which is more or less the plot of The Continuity Codex btw: a thumb-drive sized AI based on all human knowledge is hunted by authorities for… reasons) As a common cultural heritage of everyone who equally lives on “the world”… I guess I end up thinking about it a bit cosmically at a certain point, like the Akashic Record or something, but less New Age and more concrete, like: This is Happening…

If it’s wrong to make “AI world music” (is it?) is there someone somewhere who has more of a right to make it than I do? Do certain cultures own words that describe certain instruments? What’s going to stop anybody from using any kind of sound or word or style, whether or not they are a “valid” member of any given identity group who has authority to act within that tradition? How do we decide who has that authority? These are just a few of the dozens of linked questions that swirled in my head the last couple of months working on this project. Not because I have answers to them, but because I’ve become obsessed with the variables that fall out when you shake the possibilitrees…

One other tangent I took in my sketches and explorations went something like this: Is it possible to make “world music” using AI where you don’t name any specific culture, geography, or specific known type of music? So you’re not potentially butting up against something you probably shouldn’t be using unless you’re certified? (I say that in jest, but also serious at the same time, because life is like that now) And I found the answer was yes, it is possible. But that was not the whole answer that I found. The real one was that it’s interesting to try to include all compatible possibilities, and not just those which are or aren’t approved or necessarily appropriate. Transgression, after all… it may be bad and wrong at times, but maybe there’s something to it also. Maybe sometimes we can forgive ourselves for being wrong and bad, and just be whatever it is that we are. Maybe there’s a way through folly that really does end in wisdom. It’s certainly a fool’s errand to try and find out…

Anyway, that turned into a really long digression that probably buried any real point I was trying to make, and this was just meant to be a short link post out to this 2019 Guardian article about how World Music as a term is fraught with colonial cultural baggage.

EDIT:

I remembered part of my buried point, that … oh wait no. Lost it again.

But I did find this essay about cultural appropriation which I thought had a few interesting points that I’m still digesting about how some of these questions risk a sort of over-commodification of culture, turning things into discrete units of “property” etc.

Oh wait no! I remember it again. It has to do with the Honor System. If certain things are off limits culturally – whether that has to do with cultural appropriation, or with something like Sora 2 using people’s likenesses or brands, etc., without permission – but those things are at the same time not only technically feasible and highly believable, but also widespread among the general populace, how can we expect any kind of “Honor System” to hold up if the technology allows it?

Anyway, another half-baked take. Gonna put these back in the oven for a while and hopefully come up with something better…

EDIT 2:

The related concept of Recuperation from the Situationists (as opposed to detournement) is I think interesting enough to warrant inclusion in this conversation also:

In the sociological sense, recuperation is the process by which politically radical ideas and images are twisted, co-opted, absorbed, defused, incorporated, annexed or commodified within media culture and bourgeois society, and thus become interpreted through a neutralized, innocuous or more socially conventional perspective. More broadly, it may refer to the cultural appropriation of any subversive symbols or ideas by mainstream culture.

The concept of recuperation was formulated by members of the Situationist International, its first published instance in 1960. The term conveys a negative connotation because recuperation generally bears the intentional consequence (whether perceived or not) of fundamentally altering the meaning behind radical ideas due to their appropriation or being co-opted into the dominant discourse. It was originally conceived as the opposite of their concept of détournement, in which images and other cultural artifacts are appropriated from mainstream sources and repurposed with radical intentions.

On Spotify Partnering With Music Companies, Something Something “Responsible” AI

Been seeing this headline today about Spotify partnering with Sony, Universal, Warner, Merlin, etc. to do something something “responsible” AI. My question is: why don’t they just add the simple label that Deezer already applies when it detects AI-assisted content? Why don’t they already require their distributors, like Distrokid, to simply have a checkbox at time of upload that says, “I used AI to help create this music”? Wouldn’t that be the simplest ‘low hanging fruit’ product change they could make? The fact that they don’t do that most barebones minimum thing makes me suspicious of any of the rest of it, frankly…

Does AI Music “sound bad?”

Sometimes, yeah, for sure. But definitely not all of it, the more I have listened. And I would say that I have kind of had to learn to listen to it, because it sounded wrong and weird at first. Absolutely. But over time, I’ve come to actively like certain aspects and sound qualities inherent to it.

One good example I saw a lot in Suno v4.5+ was what I’ve come to call the “AI warble,” which was noticeably more pronounced on male than on female generated vocals. Something about that model does women’s voices better than men’s, to my ears.

But yeah there is def a crunchy sonic weirdness that sometimes veers into off-putting depending on the track. I always select against generations like that and hard delete anything I myself wouldn’t want to listen to. But I’ve also noticed that the sound quality of these songs varies tremendously between devices, formats, streaming platforms, and speaker quality. Flaws or embellishments you hear in one might be rendered very differently through listening in other contexts.

I also think that what those of us coming from a prior “real” music background might hear today as “bad” will, before long, sound nostalgic as we look back on the AI tools and how they’ve progressed in the intervening time.

With generating AI artifacts for production and artistic purposes, I think there’s always a forward-facing consideration too. A tune that has “bad” sonic qualities today might be remastered and re-rendered using better systems in 2 or 5 or 10 years. So is what is perfectly good or bad right now really the best and only consideration? It’s a strong “it depends” on all this, I guess…

“AI-Generated” Label vs. Truth Values

There’s a user-generated content platform I’ve experimented with which has rules about labeling AI-generated content. Presumably they scan submitted texts for signs of it, and reject the submitted post until you fix it and/or apply the label.

When articles tagged that way in the publishing UI are made public, a little “AI-Generated” tag appears at the top of the page.

I’ve got complicated opinions about this, but it will take some picking apart, so I’ll just drill down on one side of it. Telling me if something is or isn’t AI-generated does nothing to tell me about its supposed truth value, origin, agenda, author, etc. It’s a supposedly meaningful context signal that ultimately isn’t – I think – all that meaningful as a determiner of action or belief of the recipient/viewer/audience. It could be “AI-generated” and true, after a rough text gets cleaned up and formalized. And that’s still I think a good and even authentic use of AI…

So what do we really want from these labels?

(To be continued)

Sketches & Case Studies

I recorded an episode for the Audiobook Cafe podcast a few weeks back that I really enjoyed. The host, Jacob Shymanski, made a point that really hit home, regarding especially my AI lore book series. Namely, that they are like sketches or studies. They are not formally “finished” or polished or remotely perfect works. They are, in fact, deeply flawed and strange. And executed quickly in order to take advantage of and document the state of the tools used and my own thinking and creative process using them in that moment.

What I’ve done with AI music, and with media coverage around AI-related “mysteries,” I also see more or less the same way: as something like case studies, as answers to a series of what ifs that I pose to myself, and to others. Namely, what if someone else (or many someone elses) comes along and does what I did, but better – or what if they did it extremely maliciously? (Instead of merely mischievously like I have done, to capitalize on the absurdity of it all.)

Within that, I don’t offer definitive answers, explanations, or assertions. I’m not selling anything you have to believe in. I prefer if you don’t. I prefer if you make your own studies and sketches, but for better or worse, here are mine…

Fake, The New Normal

Psychology Today:

The sad reality is that believability has replaced truth as the new currency of cognition. We prize, even affirm what seems plausible, not what is proven. The fake isn’t only tolerated. It is functional and smooths the edges of uncertainty, offering just enough reality to let us keep scrolling. Just enough. […]

We once said “seeing is believing,” but that perspective has flipped. Now, believing comes first. Algorithms and filters shape our perception long before our eyes do. A fake image that aligns with our worldview feels more real than a genuine one that contradicts it.

In that sense, maybe fakery is less an act of deception than of collaboration. We participate in it, polishing the world until it reflects back a version we can live with. The fake doesn’t impose itself on us, we invite it in. Perhaps we have even become (willing or unwilling) co-authors of our illusions.

The Future Is Endless On-Device, On-Demand AI Premium Custom Content

Pretty much the title: the more AI stuff I see and work on deeply across media, it seems clear that the future is not about creating one-off AI artifacts and uploading them to various distribution services. (Though I think there’s a place for that for now as like “keyframes” or something in latent space) Instead, it is about endless, on-device, on-demand custom AI-generated completely post-reality content of any flavor anywhere anytime, with endless versions, variations, alternates, and remixes.

How to not get lost in such a sea of endless? Build a boat. There’s no other way.

Brand Nation on Post-Reality Era, Margaux Blanchard

This piece is an interesting read on the fake AI journalist known by the alias Margaux Blanchard. The whole article is worth reading, but here’s the ending:

Margaux Blanchard and the Velvet Sundown are emblematic of the ‘post-reality’ environment we now inhabit – one whereby AI agents, synthetic personas and AI-generated content intertwine with human work, often undetected. When they are exposed, they become stories in their own right and their reach grows.

The implications point to a future where trust, authorship and authenticity must be constantly interrogated.

Welcome to the post-reality era.
