Questionable content, possibly linked

Category: Other Page 32 of 177

SYNONYM: The Greatest 80s/90s Canadian Hair Band You’ve Never Heard of

I haven’t had time to develop the backstory for this properly, so I’ll do a half-assed release here instead, even if it lacks the verisimilitude that a more elaborate staging might give it.

So yeah, I had the idea to make up a synthetic “hyperreality” hair band from the 80s/90s out of fully generative AI content. After discussing the matter a great deal with ChatGPT, we ended up landing on the band name SYNONYM, loosely inspired by the real Canadian hair band you’ve never heard of, Alias.

Alias, a “real” band IRL (ex-Sheriff & Heart members – btw, this song by Sheriff kinda rules), had a confusingly similarly titled song to Extreme’s classic “More Than Words”; Alias’ hit was called “More Than Words Can Say.” Which is, yeah, hella similar. Here is Alias below:

The song is “fine.” I didn’t grow up in Canada, so I only heard it for the first time recently. I don’t have any particular nostalgic attachment to it like I do some of the others mentioned here that I grew up with. But it got me wondering about markets like Canada that are adjacent to, but largely parallel to, US pop culture and modern music history. There are a few intersections here and there from the Canadian side crossing the border, but mostly things just chug along apart, it seems… Anyway, the whole thing got me thinking: what if there were entire huge untapped worlds of good popular music that had existed for decades, but we just never heard them?

Hence, SYNONYM was born, which I rather like as a name for an AI band here. I imagine that all of their songs were extremely similar to other popular hair bands and power ballads. Because that’s pretty much what AI excels at.

I ended up doing a decent set of images in Midjourney for their “very well-known” hit single music video, called “Louder Than Love.” (Archived here as backup.) These images are inspired in part by Extreme, in part by LA Guns, in part by GnR’s “Patience”, among I’m sure many others in this category that I’m forgetting.

Sadly, the music generation side of AI tech is not yet as good as the visual side. I experimented quite a bit with Suno.ai to see if I could get any passable prototype results to use as audio samples of SYNONYM, but they were so far off the mark that they’re not worth sharing here.

In any event, here is the full final set of SYNONYM pics, and below are some special highlights. Enjoy! Maybe I’ll find a way to incorporate the full untold story of SYNONYM into the AI Lore books in the not too distant (alternate) future (or past).

For whatever it’s worth, I believe these were some of my first tries with Midjourney v6 Alpha. Pretty impressive overall.

Quoting Gemini Protocol FAQ

Speaking of DOT:

Reading that starts as soon as the page loads, without you first having to carefully click past a pop-up window which actively tries to mislead you into “consenting” to something nobody actually wants or needs, and which continues right to the end of the page without being interrupted by another pop-up begging you to subscribe to a newsletter. Gemini pages are downloaded once and rendered once, and then they stay that way for as long as you care to look at them. Nothing changes in response to you scrolling around or time passing.

Source: Gemini FAQ

Agree the modern web is an endless sea of pushy trash.

Power Reader Diamond (Visual Concept)

When I started smoking a couple of decades ago now, I had this experience for a while where text on screen would sometimes appear almost textured – where, like, shapes within the columns of text would sort of jump out at me visually. I would, I guess, ascribe it to the substance, but the impression has stuck with me so long now that I need to exteriorize it in order to let it grow into something else.

This was always the root of the concept:

That there would be some kind of app, or tool, or way of reading, or… something… whose essential form would be a diamond, through which the words of a text would rapidly scroll for reading. Somehow this technique would be much faster than regular conventional reading, and it would enable the user to scan through far greater volumes of textual material (and other compatible multi-modal inputs & sources). I’m imagining something you could just as easily use to read a book as to scan through a large dataset in a meaningful way.
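For what it’s worth, the core mechanic can be sketched in a few lines. This is purely my own toy illustration (no such app exists that I know of): chunk a text into lines whose word-counts rise and then fall, so each pass through the pattern forms a diamond silhouette, which could then be flashed rapidly like an RSVP speed reader.

```python
def diamond_lines(words, max_width=7):
    """Group words into lines whose word-counts rise to max_width and
    fall back to 1, so each pass through the pattern reads as a diamond."""
    widths = list(range(1, max_width + 1)) + list(range(max_width - 1, 0, -1))
    lines, i, step = [], 0, 0
    while i < len(words):
        n = widths[step % len(widths)]
        lines.append(" ".join(words[i:i + n]))
        i += n
        step += 1
    return lines

sample = "the quick brown fox jumps over the lazy dog again and again".split()
for line in diamond_lines(sample, max_width=4):
    print(line.center(40))  # center each line to make the diamond visible
```

A real version would presumably animate these lines one at a time at a fixed tempo, rather than printing them all at once.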

Thinking through questions around intertextuality also seems to suggest applications for whatever this kernel of an idea is – a feeling, a fleeting stoned impression…

Whatever replaces our current old stodgy way of browsing the web will have to be able to visualize for us all the wexes, all the intertextual references and borrowings, connections to and from other sources, other voices, other texts, other interpreters and commentators. Enter the Power Reader.

In the interest of getting these ideas out of my head, so they stop swirling endlessly, I worked with Dalle3 to draft some visual concepts for this, but it got stuck and constrained in its own limited beliefs about the usefulness and desirability of “apps” and “cellphones,” both of which I think I eschew (No Apps No Masters – a topic for another blog post). Still, the visual explorations have some additional nuggets of truth in them, I think. There’s definitely something here worth expanding further.

Here’s the full set, an archive, and highlights below, hotlinked out of Imgur (and therefore likely broken by the time you see them) for your viewing pleasure:

Refusal of the Image

As part of my quest towards DOT technology, I’ve been trying to find an easy-to-install text-based web browser. But it is a challenge to find one on Mac that isn’t terminal-based and doesn’t require a lot of tinkering and 3 hours spent installing Python dependencies or whatever.

While I continue searching for one that “just works when you install it” (like any normal standalone Mac app), I have been enjoying, for the time being, a Firefox extension called Image Video Block.

I’ve been wondering if this is a form of iconoclasm, the breaking of the icons or images (with long historical precedent). The refusal to be governed and held sway by images being fired at high velocities and high brightnesses into your brain constantly, and the subtle continuous violence that tends to do to your thinking, without you being always constantly aware of it.

In that extension I disabled images, video, “media” (whatever that means), Flash, SVGs, and canvas (I think that’s a reference to HTML5). It’s a little weird to get used to at first, but interestingly… it’s NOT that weird to get used to. Some little buttons might be missing on certain sites, but browsing the web without all the junky image garbage seems liberating, refreshing. And, of course, you can allow-list certain domains, like I did for Wikipedia (where it now is slightly shocking to see images show up, after browsing around mostly without any elsewhere).

There’s a rant I want to go on about computers, cell phones, and screens greedily gobbling up one’s “visual cone” (leading to obsessive and repetitive behaviors), but I’m still working out my thoughts on that. But breaking the images, dulling down all the shiny surfaces, turning off the colors, radically reducing functionality, and intentionally adopting limits with technology seems to me right now like a Narrow Road to Happiness™. It probably won’t work for everyone – not everyone is willing to let go of the things technology promises as “good” (but which are actually bad – like always-on notifications) – but it works for me, right here, right now.

Quoting Phillip Toledano on AI Art

By way of Washington Post, reporting by Yan Wu:

As these examples show, creative professionals might still have an advantage in the world of AI art. Aesthetic taste, culture and skills honed over years can substantially influence the quality of AI-generated images. “If AI is not for you, that’s fine. But shouting about it is like shouting at the sea,” Toledano said. “It’s here. Be curious.”

Reply to the Verge: Fair Use is not copyright violation

Wanted to post a brief reply to this piece on the Verge by journalist Emilia David about a new organization called Fairly Trained, which aims to be sort of a “Fair Trade for AI” if I understand it correctly, offering certifications for AI models trained entirely on licensed data.

The Verge’s headline is, I think, technically inaccurate. It states: “AI models that don’t violate copyright are getting a new certification label.”

They also say this about Fairly Trained, the would-be certifying body:

Fairly Trained claims it will not issue the certification to developers that rely on the fair use argument to train models.

I think this journalist maybe took too much at face value Fairly Trained’s claims about what Fair Use actually is. Their blog post goes a bit further than what’s stated in the Verge. Quoting from that:

[…] this certification will not be awarded to models that rely on a ‘fair use’ copyright exception or similar, which is an indicator that rights-holders haven’t given consent for their work to be used in training.

[Quoting an exec at Universal Music Group] ‘We welcome the launch of the Fairly Trained certification to help companies and creators identify responsible generative AI tools that were trained on lawfully and ethically obtained materials.’

If you have a generative AI model that doesn’t rely on scraping and fair use arguments for its training data, we’d love to hear from you…

My contention with all of this is as simple as it is currently unpopular: anything that qualifies as Fair Use does not constitute a violation of copyright.

Stanford has a decent page on Fair Use here. Excerpt:

Such uses can be done without permission from the copyright owner. In other words, fair use is a defense against a claim of copyright infringement. If your use qualifies as a fair use, then it would not be considered an infringement.

So I think we can separate this announcement by Fairly Trained and the Verge’s coverage of it out into two things:

  1. The claim that Fair Use is a violation of copyright – my understanding is that it is not, and this claim probably doesn’t hold water under scrutiny.
  2. The recognition that creators have a legitimate desire to have greater control than they do under current Fair Use laws, which seem to plainly permit these kinds of uses in AI training.

While taking issue with the first one, I support the second one fully. I agree that we need radical new ways for artists (I hate the word “creators” because it reeks of ‘creating content’ – can’t we all just be artists, creating more than just endless ‘content’?) to be able to contribute high-quality material to fully licensed data sets, where everybody knows what they are getting into, and where clear mechanisms make sure that artists themselves get directly paid – not intermediaries like the collecting societies in France seeking to change the law in their favor at the likely expense of contributing artists.

I do think there is a place for these kinds of certifications and other allied efforts, but I don’t find it very useful for their purveyors to push seemingly inaccurate legal conceptions. I don’t see who that benefits. We can say we want to change what the law is, or how it ought to be interpreted, but we should also recognize what it actually says today and how it has actually been interpreted in the past. From there, we can point ourselves towards more informed aspirations, and build the realities we want to see one Jira issue at a time…

SD Card Readers as DOT Tech

I won’t go on and on about it here, but I’m not in favor of letting kids play with cell phones. I don’t have one myself, for good reason – I think they are just too addictive, and the only way to avoid that is to just not have one. I understand that in the real world compromises must be made, and one I made recently was finding and purchasing this interesting example of the does-one-thing (DOT) technologies that I wrote about recently.

It’s a lowly SD card reader for a hunting trail cam, made by Wildgame Innovations. It’s called the Trail Pad Swipe, and it looks like there’s a larger version called the Trail Pad Tablet that is maybe along the same lines. Its intended use seems to be letting you check images in the field from the SD card in a trail camera (I do have one of those, btw, for fun & wildlife observation, so that’s always a possible fallback use, even if it’s not an activity I’ve engaged in much lately – I gotta get one set up again this winter!).

The things I want out of a device like this for my intended use case are actually the reduced functionality relative to the Almighty Cell Phone™, and the, like… cold sleek fetishistic neo-brutalist beauty of the object itself. I want a device, in other words, that is shittier, and less “sticky” behaviorally, than a cell phone.

And that is not. Fucking. Riddled. With. Ads. And. Trash. Holy. Shit.

So what is the “DOT” that an SD card reader does? It reads SD cards. It does not surf the internet, connect to wifi, let you check your email, send you notifications, draw you down endless corridors of social media, apps, and other bullshit and train you from a young age to obey the hegemony that says that all of those things are necessary and normal components of life at any age (hint: they are not), let alone in childhood.

Anyway, people can do what they want; what I am doing is finding an alternative that does one thing, while taking advantage of the benefits that supervised use of digital devices might bring educationally.

Based on my experience, it seems that you can put JPG images and AVI files onto an SD card from your laptop pretty easily – though I had a lot of trouble getting AVI file outputs from other video codecs using a Mac. That’s unresolved, but I was able to download some AVIs and tested them on the device. They played no problem, which is great news. It also has a headphone jack for listening. I’d prefer a speaker too, but will accept it given the challenge of finding anything at all that fits in this product space.
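For anyone stuck on the same codec problem: the usual tool here is ffmpeg, and a Motion-JPEG AVI with PCM audio is my guess at what a cheap viewer like this would accept – an assumption on my part, since the device doesn’t document its supported codecs. A minimal sketch, wrapped in Python so it only runs if ffmpeg is actually installed and the input clip exists:

```python
import os
import shutil
import subprocess

def trailcam_avi_cmd(src, dst):
    """Build an ffmpeg command converting `src` into a Motion-JPEG AVI.
    MJPEG video + uncompressed PCM audio is a conservative guess at what
    a simple SD-card viewer will decode -- check your device's manual."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "mjpeg",      # Motion JPEG video stream
        "-q:v", "5",          # quality scale: 2 (best) to 31 (worst)
        "-c:a", "pcm_s16le",  # 16-bit uncompressed PCM audio
        dst,
    ]

cmd = trailcam_avi_cmd("clip.mp4", "clip.avi")
# Only invoke ffmpeg if it's installed and the source clip is present.
if shutil.which("ffmpeg") and os.path.exists("clip.mp4"):
    subprocess.run(cmd, check=True)
```

You could of course just run the equivalent one-liner in Terminal; the wrapper is only there to make the command easy to tweak per clip.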

A basically rougher replacement for a cell phone or small tablet that does “nothing” except read two file formats off SD cards is not exactly on everybody’s shopping list. But one thing I’ve thought a lot about is: why, as consumers, are we all forced to use the same shopping list? Why can’t we easily, for example, order up the components we want in the configuration we want, and like… some AI factory in outer space assembles them and sends the result to me via parachute at a very reasonable price?

That’s a future I would actually be okay with, because it would mean that I would still have some measure of control over devices and their functioning, and I could pick from the available options to find one that suits me. Even if that meant choosing to live in such a way that none of them suited me. Choosing to live without them. That’s another obvious solution down this road, but I’m not entirely there yet, so it’s better to shape it in a direction I can live with.

I can live with SD cards, because I can decide what goes onto them. In my research for do one thing technologies here, I came across a device whose name I now forget (Yoto), but you can buy (proprietary) credit-card sized branded audio recordings of popular books (that I already own in print). But you can also buy blanks and make your own. I guess that’s cool, but I could also now with this SD card reader buy any random cheap SD cards, and now load the entire Wikiart image data set on it (or a similar large image set of the natural sciences), and we’re off to the races.

The only “flaw” I’ve seen in this device is that while it does allow the swipe action (and a shitty slow zoom) for going picture to picture on the photo galleries, it doesn’t allow for flipping the device on its side to view portrait-orientation pictures correctly. Instead, it squishes them to fit in the landscape-orientation view screen. Also, I don’t think there’s a way in the video player to pick back up where you left off, or fast forward or rewind videos, but I also need to look at it again more closely.

It’s funny though: it’s “bad” in one way that it can’t do all of those things, but in another way, what I want to find and share is the limits of technology, and figuring out experientially what are the good and bad parts of it (along with its outcomes), and which ones, in which configurations do we actually want to have in our lives. That, to me, is ultimately the point of all of this, and one of the biggest things we need to figure out individually and collectively in our lives.

Styles as Supersets of Characteristics in AI art

There was a part in the recent Chamber of Progress webinar where I tried to express a point about how ML/AI training, in its current form, is made up of what I think of as a ‘measurement of dimension.’ That is, the training process is composed of – in my limited understanding – sets of measurements of attributes or features within items in the datasets. This seems to be called dimensionality (the number of attributes), though these terms seem to be used in different ways by different sources.

In the video, I refer to it as a measurement of dimension, because that makes as much sense to me linguistically as any of this made-up gobbledygook AI jargon does.

But we could also, I think, reasonably say in plain language: characteristics, and still be on the right track. ML/AI training looks at characteristics in data: images, text, etc. That means splitting things apart at a kind of micro level. If you look at 10,000 images of dogs, or bananas, or anything, you can start to break out lists of attributes: size, color, shape, texture, and so on.

The beauty part, then, is that the characteristics originally measured in specific artifacts in the training corpora become abstracted – put in a blender, so to speak – divorced from their original meaning and context, enabling them to be mixed up and re-assembled into completely new, never-before-seen creations: assemblages of statistical sampling.
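A toy model of that idea (my own illustration, nothing like how a real model works internally): treat each training item as a bag of measured characteristics, a “style” as the superset of characteristics across a group of items, and generation as sampling a fresh combination out of that superset.

```python
import random

# Hypothetical items, each reduced to a set of measured characteristics
# (what the post calls "dimensions").
items = {
    "sunset_photo": {"warm palette", "soft gradient", "low contrast"},
    "dali_print":   {"warm palette", "melting forms", "long shadows"},
    "bagel_ad":     {"soft gradient", "retro type", "long shadows"},
}

def style_of(names):
    """A 'style' here is just the superset (union) of the
    characteristics found across a group of items."""
    out = set()
    for n in names:
        out |= items[n]
    return out

def remix(style, k, seed=0):
    """Sample k characteristics from a style to assemble a new,
    never-before-seen combination."""
    rng = random.Random(seed)
    return set(rng.sample(sorted(style), k))
```

In this framing, “in the style of day old bagels” is no weirder than “in the style of salvador dali”: both just pick out some superset of characteristics to sample from.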

Elsewhere earlier in the video, we talked a bit about styles. There seems to be general consensus online that style is not copyrightable. As expressed by Creative Commons here, copyright protects specific expressions in fixed forms of a style, not the style itself. (But there are, ahem, blurred lines here, to be fair.)

Anyway, my point here isn’t to argue on copyright grounds in this post, that is just for background. What I wanted to say I guess is that ultimately my intuition as an artist who has made extensive use of generative AI is that style ultimately breaks down to supersets of characteristics/attributes/features(/dimensions).

To an AI image generator, for example, you could just as easily prompt “in the style of salvador dali” as basically any random string of characters: “in the style of day old bagels at a long island supermarket in the 1980s” or “in the style of z8gn3s8hjkl*12%”.

All of those are going to give you results, some more inscrutable than others as to clear cultural meaning or obvious connections to known things. You can also do it without including “in the style of” at all, just including random phrases or strings in your prompts, as Nettrice talks about in the video.

This fluidity and flexibility around the sheer fact that literally anything (or nothing) can become a “style” in AI for me blows apart our current understanding of what an artist’s style conventionally even means. (Let alone what this means for ‘consensus reality’ – and this is what I was referring to I think in the video, re: holistic frameworks for analysis of meta-data.)

Anyway, I have other things waiting on me, but wanted to jot this down as a beachhead, so I can come back and grasp this idea more fully down the road. Also see the notion of the hypercanvas here. Somewhere, somehow, I trust that this will all make sense in the end…

Quoting Chomsky on ChatGPT & True Intelligence

Tons of good stuff in this article, but this jumped out at me:

True intelligence is demonstrated in the ability to think and express improbable but insightful things.

Talking Dystopia on the 1984 Today! podcast

An interview I did on AI dystopia in the middle of last year was finally released this morning in the UK, on the 1984 Today! podcast. Click through the link for a chance to download two free AI lore books (supplies are limited). You can also listen at the YouTube link below:
