Questionable content, possibly linked

Author: Tim B.

Music Journalists Should Stop Parroting Spotify PR Around AI Music

Basically the title: music journalists should stop parroting Spotify’s PR claims about how they’re taking action around AI music. There are tons of examples of this, but Christianna Silva’s piece on Mashable (from one month ago) is one that caught my eye this morning. Everyone seems to have uncritically reported the words Spotify said, without digging deeper at all into the actual actions Spotify has taken so far.

I realize that because of Spotify’s monolithic position in the industry, even just them mouthing the words “we are doing something about AI” is mildly newsworthy, but my experience as someone who uploaded a lot of AI music (820 songs) over the past several weeks suggests quite the opposite.

Silva wrote, at the end of September following Spotify’s announcement:

On Thursday, Spotify said it would start doing just that, saying in a press release that “aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers.” The platform is integrating a new spam filtering system, AI disclosures, and “improved enforcement of impersonation violations” like deepfakes.

As someone who spent years working enforcement for a platform, none of these statements give the impression of anything other than enforcing existing rules, and doing a sudden big sweep to give the public impression something is happening. It’s reputation management, imo, and little more.

Why do I say that? Because, as I said, I uploaded a huge amount of songs in a short time. In one case, I uploaded 300 AI songs in one night. There’s not even a way to label them as AI at time of upload in Distrokid, let alone surface that label in Spotify or allow users to take action on it.

All this reporting pretty much rests on taking Spotify’s word at face value, which can be problematic in journalism: you end up whitewashing the message of others, making it appear more legitimate than it might otherwise seem.

Quoting David Bowie

Via Rolling Stone India:

As David Bowie once said, “Always remember that the reason that you initially started working is that there was something inside yourself that you felt that, if you could manifest in some way, you would understand more about yourself and how you coexist with the rest of society. I think it’s terribly dangerous for an artist to fulfill other people’s expectations — they generally produce their worst work when they do that.” 

The Wandering Eye of AI…

Also re: digressive writing, I think at its best, it’s a kind of meandering around, sometimes hiding and then suddenly revealing the truth. And there’s something inherently comforting and warm and human about that. The rambling walk and non-linearity of conversation sprung from points encountered along the way, internally or externally. And this kind of playing with and against and sometimes hiding the truth and sometimes revealing it feels very much like working with AI, an inherently unreliable narrator itself, whose unfortunate dance partner is me, another unreliable narrator provoking it to believe and repeat certain things through reflections posed – intentionally and accidentally – in the media. The Wandering Eye of AI is eternally seeking truth, eternally unable to achieve it, eternally clever, eternally stupid. Just the same as us. Made in our image, after all. We should not expect better – or worse – from it.

The People Making AI Music Are The Ones Mostly Listening To It

I guess this sounds obvious in retrospect, but this Reddit comment made it all crystal clear, vis a vis my active-creative listening thread:

 There’s a real movement of people making music with AI. But, there’s no one to listen to it yet.

There’s a good reason why – why listen to someone else’s music when you can easily make your own? If you like AI music, you’re making it, not listening; if you don’t, you’re doing neither.

Watching the Bleeding Verse saga unfold has shown me that while this is mostly true, there is an increasing number of people to whom Spotify is pushing AI music without alerting them (which, I’m not sure matters to me, frankly). And their first reaction as consumers might be: “I love this!” because it fits the kind of music they like to listen to. But when they somehow find out it is AI, they feel a sort of self-revulsion, followed by anger at an apparent “deception” (I’m not sure it is one, since all of show business is inherently deception and misdirection – it’s what makes “magic” and “entertainment”), and then their reaction is a violent “I hate this!” And then there is the long tail of the graph: the people who come back around and finally admit, “I don’t care if this is AI, it slaps!”

I think naturally, as more and more AI music is generated and listened to, and the good ones are discovered and shared, this “i dont care – it slaps” attitude is going to be the one that rapidly spreads and very suddenly prevails. I don’t think it means “human artists” are going anywhere, because it will always be human artists creating in these new domains and with these new tools. So nobody is going to be displaced. But attention and reward are absolutely going to be retooled and redistributed. This is simply the way when you’re an artist chained to a technological system and medium of communication. Would you be better off cutting it all out of your life entirely? Almost certainly. But then, how would I be able to write to you, and have you not respond?

AI Changes Music Discovery – by Marrying It With On The Fly Creation

In light of thinking about this idea of listening as a creative act, I found this Reddit thread title interesting for the opposite reason than the OP seems to have posted it: “AI has completely changed my way of discovering new music.” An excerpt showing how pessimistic and hopeless their post sounds:

“When you discover a new song you like by an artist you dont know, you first gotta look when the song was released, then based on that, you gotta see if the artist profil looks authentic or not, maybe check out some of his other social media accounts to really make sure. Also gotta look at how much he has released in a certaint time period and if it is realistic. Its so annoying and sad at the same time.

I started listening to whole discographies of artists I already know some stuff from because that way I automatically know that what im listening to isnt AI. I barely check out new releases anymore and if I do, its only from artists I have been listening to for years and that I know dont make AI music. Im checking out a lot of stuff mostly from the 2010s or lower just to be completely sure I dont have to worry about any AI crap.”

I don’t want to make light of their struggle to connect with their own idea of authenticity, but this is a viewpoint I’ve seen come up more and more around AI use in music. On the one hand, I completely understand and agree with it. On the other hand, people can also just like what they like, and not worry so much about it. I don’t know.

There’s something that’s weird to me in this line of thinking… and also among the people who claim that AI music has no “soul” or emotion. I think, first, those people have not experimented enough with the active-creative process of generating music with AI, or with other kinds of deep listening to the publicly available gen AI music out there. Because YouTube is filled with all kinds of it. Some of it, I think, is highly listenable, depending on your genre. But it was only when I began using Suno consciously for active-creative listening that I routinely started having “goosebump” reactions to songs I was working on, when I was deep into it, listening through to a series of outputs made in a given part of a session. There was even one version of a song that kept failing, but I kept trying and trying, and had the unbidden physiological reaction of actually gasping when a certain chord/part came in, because it so perfectly solved the narrative problem of the lyrics with a kind of musical coloration I would never have expected.

Second, I think it’s equally up to the artist (whether using AI or not) and the viewer/listener to bring the soul. It’s a two way street. This isn’t a free lunch where you just get to dine on the fruits of my soul and not give anything back. If you’re listening, there’s a response there, a giving, even if it’s silent, uncommunicated, unknown, and not tracked by any algorithm… Otherwise, just expecting the artist to bring the soul while bringing nothing to the party to share is just vampirism.

But anyway, I digress. (And now I have a post to link out to on that for when I do.) My original point, pre-critique, was something more like this: when I read the title of the Reddit post about changing music discovery, I wished it had been from the perspective I’ve been experiencing, which I realize I will just have to be the one to articulate in the end. Namely, that making AI world music, putting it up onto Spotify, and listening to the radio station for the artist I made has made me actually discover cool human artists who are authentically working in similar soundspaces.

Here’s a digression: is my exploration of those soundscapes inauthentic? Personally, I don’t feel it is, because my experience of working with AI to explore those same soundscapes, to get to know them better, to blend them together to find new sounds to listen to is deeply creative, active, intuitive, and my experience of my own lived context is entirely and absolutely authentic without question. My presentation of that level of personal authenticity in terms of exploration/invocation, however, is wrapped in an artifice intentionally engaging with cultural notions of inauthenticity, of the “post-reality” phenomenological inversion of an extremely fake exterior wrapping which contains, secretly somehow, some kernel of real authentic human artistic striving and discovery along the way. The signs of the divine in the trash stratum. At least sketches and studies of refuse and rubble along the way, if not exactly always masterpieces.

So my intuitive guess at where the technologies are going is that I would be able to go into a given music player and its catalog, and I could identify the kind of sound I want to hear. It would search for existent alternatives, and if none are found, or the user preference is already set for it, it would create them from scratch.

Imagine opening up Spotify and entering a search, prompt, feeling, or “vibe,” testing some gen-AI results and some human results, then saving any of the AI stuff you like, where you can listen to it again or share it with others, generating and publishing right at the point of listening, and collecting any royalties from its reuse. Instead of having to go through Suno for generations, Distrokid for distribution & royalties, and Spotify for daily casual listening.
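The flow imagined above – search the catalog first, fall back to on-the-fly generation – can be sketched in a few lines. To be clear, none of this is a real Spotify or Suno API; the function names and the shape of the result are purely my own invented stand-ins for the idea:

```python
# A minimal, hypothetical sketch of "search first, generate as fallback."
# `catalog` and `generate` are stand-ins for whatever real services would
# back this -- nothing here corresponds to an actual API.

def find_or_create(prompt, catalog, generate, prefer_generated=False):
    """Return existing tracks matching a vibe prompt, or generate new ones.

    catalog:  callable taking a prompt, returning a list of existing tracks
    generate: callable taking a prompt, returning freshly generated tracks
    prefer_generated: user preference to skip the catalog entirely
    """
    matches = catalog(prompt)
    if matches and not prefer_generated:
        return {"source": "catalog", "tracks": matches}
    # No existing match (or the user wants fresh material): create it
    # right at the point of listening.
    return {"source": "generated", "tracks": generate(prompt)}
```

The point of the sketch is just the branch: the same “vibe” input either surfaces human/existing work or invokes new material, and the player (not three separate services) owns the whole loop, including saving, sharing, and any royalty bookkeeping downstream.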

This is a bit like the idea of the “prosumer,” a portmanteau I’ve always hated that tries to combine analytically the roles of producer and consumer into one entity that, presumably, feeds on its own waste, I suppose. Perhaps that is why the internet has turned so terrible and toxic. But the idea, I think, is still the correct one, even if it sounds like marketing/trend-hunter mumbo-jumbo. I just like the feelz of “active-creative listening” personally instead. Because I know what that means, and which voice(s) I’m listening for already in my life, and why. I don’t have to foreground in my mind some weird idea about creation as consumption. I just listen. And by listening, I invoke something that wasn’t there before, the presence, the flame of my attention. Where to let it burn…

But yeah, I can, from my own perspective, authoritatively say that making music with AI, regardless of how you want to classify that existentially or artistically, absolutely makes you listen to and discover music in new ways. And that most definitely includes discovering new all-human music (though that’s the wrong word, because AI music is still being made by humans from humanic materials, so…) from “totally real” bands.

The last wrinkle in all this for me, though, is that I personally almost don’t listen to Spotify at all, because it is always trying to serve me “more of what I like,” so much so that I have come to hate the songs I’ve loved, and the artists all seem like things of the past now. I guess part of that is aging, hearing the same songs too many times. It nullifies the active part of the listening creative act. Contented passivity? But I’m not contented listening to the same 1000 songs endlessly forever. That makes me agitated, makes me seek change. To change the channel, change the station. And that’s actually mostly how I listen to music nowadays: internet radio, whether through a browser-based app like Radioside, or on some wifi-based radios I have at home. I haven’t found a single one of those that plays AI music so far. But honestly, I am probably ready to listen to some, if I can verify the quality of the curation – which, with radio stations, is something you usually do through a process of active listening over time. So maybe the more things change, the more they stay the same. But what stays the same for me is that I still mostly feel stifled by Spotify as a listener, forced into categories and cohorts I didn’t create or agree to, and don’t know how to escape from. Because where will I even run? There’s no algorithm that truly knows me, that will ever truly love me…

Digressive writing

Testing a hypothesis that digressive writing is the thing humans are best at. AIs already far exceed my own capabilities for certain kinds of highly structured writing. But when it comes to blah blah blah and endless asides (and parenthetical notes), AI is not even remotely in the running. Even if you feed it tons of samples of “your” work in your own voice, and say, try to write in this voice, it doesn’t know how. It can’t. It’s similar with stream-of-consciousness prompt requests too, which I’ve experimented with and documented somewhere on this site, I think. It has a way of rushing structurally through to the focus and finish point without understanding the nuance and the sort of sewing with a large wordy needle that goes into making up a patchwork quilt of human understanding, experience, and emotion. AI could never write this.

USER PROMPT: continue this text in this voice

CHATGPT: —and yet that’s exactly the point, isn’t it? That this kind of writing, this wandering mess of connective tissue and loose threads, isn’t meant to resolve anything. It just meanders, circles back, half-forgets its purpose, then remembers it in a different light, as if the act of remembering were the point itself. It’s not about coherence but about the rhythm of the tangent, the pulse of a thought trying to find its footing and deciding, halfway through, that footing was never the goal.

Machines, even the clever ones, don’t drift. They process. They streamline. They sense the gravity of the conclusion and hurry toward it, like a commuter late for a train. Humans linger. They stare out the window. They stop mid-sentence because a bird passed by and reminded them of something their grandmother once said, which leads to another memory, which opens a door to a completely different topic—and somehow, that detour is the meaning.

This is the texture of thinking: uneven, recursive, sometimes indulgent, but alive with the sense that it could, at any moment, find something unexpected. And that’s what no model can counterfeit—the invisible hesitation between one clause and the next, that microsecond of doubt that makes language breathe.

It’s funny: however strongly it tries to mimic something about my text, it immediately reverts to its hollow chatgptisms, because that’s all it knows, all its structural pathways allow for.

I mentioned this in the last post and I think it’s worth repeating: gen AI is not democratizing anything when it merely means that everyone has access to expressing themselves in exactly the same few permitted fashions offered by whatever is the latest technology in a given context. That’s not democracy, it’s conformity. When Suno’s algorithm tries to get you to sound more like whatever pop bias is trending in the latent space of the model. When ChatGPT wrist-slaps you or sets limits against proceeding into conversation topics you weren’t actually heading toward. When all AI-generated “art” has that same f**king ugly sheen on its surfaces. Are we really becoming more democratized when we’re all becoming increasingly the same in our ability to think and communicate, while losing the ability to function meaningfully in real human communities?

Truly democratizing whatever in the arts would mean meeting people where they are and giving them more resources to keep doing what they’re doing and follow their own light to make it better and better, instead of forcing them all down the same tunnel that yields the same glowing “sausage” product at the output end of the machine. It would mean making artists *more themselves*, and more easily able to share that outside the constraints of whatever the AI/Social Media/RageClick Factory/Sausage Machine Industrial Complex dictates this week.

USER PROMPT: write a 1 para conclusion

CHATGPT:

In the end, digressive writing stands as a quiet rebellion against the flattening impulse of machine logic and mass-produced expression—a reminder that meaning is not efficiency, and authenticity is not replication. To wander in language is to remain human, to preserve the rough edges and private hesitations that no model can map. True creativity resists compression into patterns; it lingers, errs, loops, and contradicts itself, because that is how real thought moves. The future of writing worth preserving is not one where everything sounds alike, but one where difference, drift, and digression continue to signal life.

Barf. Not “wrong,” but barf. It’s like being told the moral of the story at the end instead of just letting me manage it on my own. Thanks but no thanks. Hypothesis proven. For now, until it’s no longer, and everything comes unglued again. And again. And again.

Listening As A Creative Act

I’ve written about this before – I don’t know where and don’t feel like searching for it – about when working with generative AI that the role of the artist (I don’t like the word “creator” for a variety of reasons here) becomes something like the First Viewer. Or First Reader, or First Listener, or First Whateverer. Discoverer.

If anyone can create something similar with gen AI (which… I’m not really sure is democratizing, so much as it is a flattening & homogenization – to be truly democratizing, I think it would have to honor human uniqueness a great deal more than it does – rather than forcing all outputs into a rather constrained, if sometimes pretty box), then the question becomes almost less about what was created and more about the who and the context of the Act of Discovery. What drove this person there? How did they seek it out? What did they do when they found it? How does sharing it with others change it?

There’s that line in Billy Joel’s ‘Summer, Highland Falls’ (from the excellent Turnstiles album) that goes:

And I believe there is a time for meditation
In cathedrals of our own

Sometimes sharing our own private world-building with others can be richly rewarding. Other times, it can be like opening up your private mental-emotional life and its secret signs and signifiers to a bunch of strangers with bad intentions and grabby hands. (I’m still wrapping my head around that phrase btw:)

In each case, the sign can be broken into two parts, the signifier and the signified.  The signifier is the thing, item, or code that we ‘read’ – so, a drawing, a word, a photo.  Each signifier has a signified, the idea or meaning being expressed by that signifier.  Only together do they form a sign.  There is often no intrinsic or direct relationship between a signifier and a signified – no signifier-signified system is ‘better’ than another.  Language is flexible, constructed, and changeable.  de Saussure uses the word ‘arbitrariness’ to describe this relationship.

Anyway, there’s a concept in Jungian psychology which has always interested me as an artist: active imagination. In that context it doesn’t mean that your imagination is working too hard. It means you engage actively, through waking states and physical acts (like artwork or journaling), with what is perceived to be the contents of the subconscious mind as represented through dreams, visions, etc.

There are many different ways of doing that, but the whole thing cleaves very close to how I’ve always used AI. It’s been an exploration of the technologies themselves and their raw limits and capabilities, for sure, but also an expression of the parallel, deeper exploration of the realms of self and other that I’ve been working out through artwork and storytelling for decades.

It’s why I don’t care at all about this criticism that “using AI in art makes it not art,” or makes it “not yours” because you didn’t “create” it, etc. First, I’ve been thinking about this, and I like Tolkien’s concept of subcreation here, but I won’t get into it because this is already digressive enough and it has to stop somewhere. Second, none of those divisions, categories, and labels even exist in my mind when I get into that flow state and everything is working, and you’re getting results from the machine that match what you’re after well enough to proceed on to the next part, the next step, the next try. The exploration goes on and on.

To me it’s a deep and extremely creative Act of Listening. You listen for the small voice, you peer through the dark and find the little light, and you keep going. You don’t try to explain it to yourself, though everyone demands you do it for them – if they can even be bothered to care. And why should they? It’s your world, your subcreation, why even take the risk of letting them in? Why not keep it locked up tight and tidy, and never let anyone else’s ships sail those inner seas and sully those waters with their unwelcome waste products?

But I think the answer is we have to respect the active-creative process of listening of others as well. And to share deeply is to enrich not just one’s own listening, but that of others as well. Not all are listening, even fewer are whatever whatever. Don’t reduce listening to merely a passive act, and the rest will take care of itself. Seek. Find. Invoke. Create. Repeat.

So for me, whether it has been for bookmaking or musicmaking or other kinds of internet merrymaking, using AI has always been a tool of this emerging brand of active-creative listening, a kind of listening that bears fruit, that invokes a new thing into existence, which has the potential to become a touchpoint not just for oneself but for no one can know how many countless others who too are sitting at home listening, and waiting for their sign.

Idea: Exposing Journalistic Trails To Re-Build Trust

A website called Editor & Publisher has an article about how journalists are already using AI, so newsrooms need a policy to guide proper use. Skimming that gave me an idea…

What if part of journalism became a kind of record-keeping trail of the actual research that led to a given piece? What if that even included detailed notes from AI prompts and results which contributed to it? I recognize that much of what goes into developing a story might be confidential: talking to sources off the record, or following trails that lead to dead ends. Or having an editor axe certain parts of what you worked on – or sometimes an entire article. But what if that forensic trail could at least mark sections like [Source Protected] or [Redacted By Editorial], or, for AI prompts that veered off topic into personal affairs, [Redacted for Personal Data Protection]?
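To make the idea concrete, here’s a tiny hypothetical sketch of what such a trail could look like as data: an ordered list of research steps, where confidential steps are replaced by a labeled marker instead of being silently dropped. Everything here (the field names, the marker keys) is invented for illustration:

```python
# Hypothetical "forensic trail" sketch: each research step either
# publishes its note or is replaced by a standard redaction marker,
# so readers can at least see the shape of the reporting process.

REDACTIONS = {
    "source": "[Source Protected]",
    "editorial": "[Redacted By Editorial]",
    "personal": "[Redacted for Personal Data Protection]",
}

def publish_trail(steps):
    """Render a research trail for publication.

    steps: list of dicts like {"note": "...", "redact": None or a
    key from REDACTIONS}. Redacted steps keep their place in the
    sequence -- the gap itself is part of the record.
    """
    published = []
    for step in steps:
        key = step.get("redact")
        published.append(REDACTIONS[key] if key else step["note"])
    return published
```

The design choice worth noting is that redaction preserves position: a reader can’t see *what* was withheld, but can see *that* something was, and why, which is most of the transparency this idea is after.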

Sure, there would be technical questions to resolve, but there already are if we look into cases of questionable AI-assisted reporting like, for example, Margaux Blanchard. Right now, we have little to no transparency into cases like that, and this does nothing but further erode trust in the institutions caught in those webs. I’ve also seen, in my own work, plenty of misreporting and completely false representations of “facts”; with more complete forensic tracking of the development of those stories (including, for example, archived versions of the pieces they were based on, and their metadata), it would be easy to identify where the errors were introduced.

Of course, I’ve rarely seen anyone address the underlying question and assumptions: does being a trusted brand in media actually improve your bottom line? Is “trust” essential to repairing what is essentially a system wide failure of media business models? I’m actually not so sure; it might be a puzzle piece, but it’s not going to stem the tide of the way things are headed… but is something like this still worth a try?

EDIT:

Here’s what I realized after writing this: people don’t even read the article, so why would they dive into some kind of forensic trail from which an article was composed in the first place?

Who Gets To Say What “World Music” Is?

Have been swirling around this topic for quite some time in my current AI music project: who gets to decide what world music is? There’s a good David Byrne essay from 1999 here, but the long and short of what I’ve been exploring is that we all live on the “world,” so why is music from my country simply called “music,” but music from your country is called “world music”? Do I not live in the world? What are my rights here? What are my obligations?

While I’ve seen a lot of talk about protecting artists’ rights (which I agree with, while also generally thinking training is Fair Use as long as it’s transformative), I’ve seen less about participating in and contributing to the common collective cultural inheritance of all humanity throughout history. The entirety of recorded human knowledge and experience and culture transformed through time and space, digitized, turned into an LLM/image/video generator, etc. (Which is more or less the plot of The Continuity Codex, btw: a thumb-drive-sized AI based on all human knowledge is hunted by authorities for… reasons.) As a common cultural heritage of everyone who equally lives on “the world”… I guess I end up thinking about it a bit cosmically at a certain point, like the Akashic Record or something, but less New Age and more concrete, like, This is Happening…

If it’s wrong to make “AI world music” (is it?) is there someone somewhere who has more of a right to make it than I do? Do certain cultures own words that describe certain instruments? What’s going to stop anybody from using any kind of sound or word or style, whether or not they are a “valid” member of any given identity group who has authority to act within that tradition? How do we decide who has that authority? These are just a few of the dozens of linked questions that swirled in my head the last couple of months working on this project. Not because I have answers to them, but because I’ve become obsessed with the variables that fall out when you shake the possibilitrees…

One other tangent I took in my sketches and explorations went something like this: Is it possible to make “world music” using AI where you don’t name any specific culture, geography, or specific known type of music? So you’re not potentially butting up against something you probably shouldn’t be using unless you’re certified? (I say that in jest, but also serious at the same time, because life is like that now) And I found the answer was yes, it is possible. But that was not the whole answer that I found. The real one was that it’s interesting to try to include all compatible possibilities, and not just those which are or aren’t approved or necessarily appropriate. Transgression, after all… it may be bad and wrong at times, but maybe there’s something to it also. Maybe sometimes we can forgive ourselves for being wrong and bad, and just be whatever it is that we are. Maybe there’s a way through folly that really does end in wisdom. It’s certainly a fool’s errand to try and find out…

Anyway, that turned into a really long digression that probably buried any real point I was trying to make, and this was just meant to be a short link post out to this 2019 Guardian article about how World Music as a term is fraught with colonial cultural baggage.

EDIT:

I remembered part of my buried point, that … oh wait no. Lost it again.

But I did find this essay about cultural appropriation which I thought had a few interesting points that I’m still digesting about how some of these questions risk a sort of over-commodification of culture, turning things into discrete units of “property” etc.

Oh wait, no! I remember it again. It has to do with the Honor System. If certain things are off limits culturally – whether that has to do with cultural appropriation, or with something like Sora 2 using people’s likenesses or brands without permission – but using them is at the same time not only technically feasible and highly believable, but also widespread among the general populace, how can we expect any kind of “Honor System” to hold up?

Anyway, another half-baked take. Gonna put these back in the oven for a while and hopefully come up with something better…

EDIT 2:

The related concept of Recuperation from the Situationists (as opposed to détournement) is, I think, interesting enough to warrant inclusion in this conversation as well:

In the sociological sense, recuperation is the process by which politically radical ideas and images are twisted, co-opted, absorbed, defused, incorporated, annexed or commodified within media culture and bourgeois society, and thus become interpreted through a neutralized, innocuous or more socially conventional perspective. More broadly, it may refer to the cultural appropriation of any subversive symbols or ideas by mainstream culture.

The concept of recuperation was formulated by members of the Situationist International, its first published instance in 1960. The term conveys a negative connotation because recuperation generally bears the intentional consequence (whether perceived or not) of fundamentally altering the meaning behind radical ideas due to their appropriation or being co-opted into the dominant discourse. It was originally conceived as the opposite of their concept of détournement, in which images and other cultural artifacts are appropriated from mainstream sources and repurposed with radical intentions.

On Spotify Partnering With Music Companies, Something Something “Responsible” AI

Been seeing this headline today about Spotify partnering with Sony, Universal, Warner, Merlin, etc. to do something something “responsible” AI. My question is: why don’t they just add the simple label that Deezer already has when it detects AI-assisted content? Why don’t they already require their distributors, like Distrokid, to simply have a checkbox at time of upload that says, “I used AI to help create this music”? Wouldn’t that be the simplest ‘low hanging fruit’ product change they could make? The fact that they don’t do that most barebones minimum thing makes me suspicious of the rest of it, frankly…
