Questionable content, possibly linked

Category: Other Page 16 of 177

Notes on Smash That Like Button

Smash That Like Button is officially the 121st volume of the AI Lore books series, or unofficially number 124 of the bigger series which includes the non-AI books.

I’ve wanted to do a book by this title for ages and finally did it. I used Mistral 7B for most of it (running completions on my own seed texts via the TextSynth playground), with an ending via Llama to mix things up, and an intro by ChatGPT. This is more or less the same tech combo I’ve used on the text side for the past few books. It’s good enough, but I think I’m reaching the end of what I can get from Mistral using it that way. It gets repetitively psychotic after a certain point (or maybe that’s just me?).

Something I dove into more here, which I’d maybe only touched on obliquely in other books, is me as the writer voice talking into the completions text box on TextSynth, trying to verbally direct its output at points, even though it’s not a chatbot and cannot really understand or speak back to you when used in this formulation (I don’t think, anyway). I feel like the way it’s included in this already surreal piece adds even more grit to the texture (like mixing sand into paint, as some of the early Cubists did). I haven’t seen anyone else experiment with this facet of AI writing before, and I think there’s a lot of interesting stylistic potential there. I have other ideas of where to take that mode of writing in the future.
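For the curious, the interleaved "writer voice" workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the `complete()` stub stands in for a real completion call (e.g., whatever the TextSynth playground does behind the scenes), and its behavior here is invented for demonstration, not a real API.

```python
def complete(prompt: str) -> str:
    """Stand-in for a text-completion model call.

    A real implementation would send the prompt to a completions
    endpoint and return the generated continuation; this stub just
    fabricates one so the sketch is runnable.
    """
    return " [model continuation of: " + prompt[-20:] + "]"


def interleaved_draft(seed: str, directives: list[str]) -> str:
    """Grow a draft by alternating model completions with authorial
    asides typed straight into the prompt, even though the model is
    not a chatbot and cannot actually respond to them."""
    text = seed
    for aside in directives:
        text += complete(text)           # let the model run
        text += "\n" + aside + "\n"      # speak "into" the text box
    text += complete(text)               # final stretch of completion
    return text


draft = interleaved_draft(
    "The like button glowed in the dark.",
    ["(Okay, now take this somewhere stranger.)"],
)
```

The point of the sketch is just the shape of the loop: the authorial asides become part of the growing prompt, so they steer subsequent completions stylistically even though nothing ever "answers" them.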

Here’s the image preview:

Much but not all of the image content is by Ideogram, with a few by Playground, a few by Leonardo, and a few by DALL-E. Some of the art references other projects I’ve been discussing lately on this blog.

I don’t have a ton more to say on it, except to highlight how much I absolutely despise people asking me to like, subscribe, and follow. I think that kind of algorithm-chasing behavior leads only to madness at worst and unhappiness for certain. It’s time to tear all that shit down and start anew.

Two New Willow Baskets, One With Miscanthus Leaf

Some photos of new IRL non-AI willow baskets, one with miscanthus leaf woven in, uploaded here and linked below:

Still a little janky as far as technique goes, with a bit of cracking, but I’m getting better and more fluid with the patterns. I have enough materials soaking to complete, I think, maybe 3 or so more baskets in this style this season.

I’m doubling the size of my production in the Spring, as I really like growing willow and all the side benefits it brings, like basket-making, in addition to biodiversity and general awesomeness.

I know most people use landscape fabric to suppress competition from grass, but I have a personal prohibition against using anything plastic-based in the garden because of fragmentation, and the already ridiculous levels of micro- and macro-plastic contamination in natural environments. I’ve carefully followed this prohibition for at least ten years now. So instead I’ve experimented with successively thicker types of brown kraft paper and papers made for construction applications. This latest round, I’m trying something called Ram Board, which is extremely thick. I’m laying down three overlapping layers of that, followed by a second course of each if I have enough. Then I’m mulching over that.

What I’ve seen previously is that thin papers (with mulch) don’t do much to suppress grass long-term, though they might last a few weeks and possibly give cuttings a bit of a headstart. Thicker ones tend to do better, but still break down and let a lot of grass through in the first season. I weeded out the new plots I planted with stock of various ages, sources, and types, and only ever weeded those beds three times this season. There’s grass growing, but I don’t see it having negatively impacted the growth of the cuttings too much.

I will leave most of those newly planted plots to overwinter, and harvest only in Spring when the ground is unfreezing, taking both my basketry material and planting stock at that time. I know other people harvest and store over Winter, but since I don’t have somewhere to store cuttings at the right temp over winter, I figure they may as well just stay on the stools and I’ll cut them then. I did that this year with some others and they grew back huge, so I’m not too worried.

I have some dogwood I’ll also harvest, though that plant material is also really useful live as fascines for erosion control, which I’m exploring. I want to try the same method I used on the river edge in my regular garden: take fresh dogwood cuttings, make them into a bundle, dig a shallow trench, and cover them back up with soil. Voila. I’m curious whether willow will also grow that way? It could be a means to get multiple shoots coming out of the same buried stock linearly, which could be easily integrated into a hedgerow armature… More experiments await in this area.

Content Moderation Art Show in San Francisco

Here’s a piece on a new hyperreality art show I worked on, virtually, in San Francisco. It’s part of my Nevermades series, and here’s a link to a longer recent artist statement. More images from the show at the link:

For now, this exhibit only exists in latent space, but it seems like something that ought to be actualized, as a content-moderation-based art exhibit could be really interesting! There’s a deep vein of cultural experience among human moderators – and the “AI trainers” who may end up doing similar work – that has been largely overlooked by broader society and is worth exploring.

Low Tech Drawing Robot

This one is basically wheels screwed to a platform, with markers duct-taped onto the side pointing down. Here’s a video of it in operation.

You put it on the floor and move it around somehow. Your foot, your hand, a stick, attaching to your dog’s leash? You decide.

Other people make high tech robots that can draw for them with a bunch of software and electronics.

I am experimenting with doing analog, low-tech, human-powered robots instead. You can see some of my other experiments here. This is pretty new, so I’m trying to cover a sort of “base layer” of low-hanging fruit here, as I get up to speed on some of the more mechanical knowledge needed for more complex machines.

I am an artist using AI…

Thought I would put together some relevant recent statements I’ve worked on as an artist using AI.

Artist As Propagandist: Exploring Parallel Realities With AI

Misinformation and art intersect to explore and navigate the confusion between reality and fiction that typifies our times in the work of net artist Tim Boucher.

In works that run the gamut from books and hand-printed samizdat zines to the use of generative AI for video, text, and image-making, Boucher’s work uses hyperreality to delve into the murky shadows of the Uncanny Valley, evoking a weird, sometimes disorienting feeling of surfing the very edge of the collapse of meaning. Weaving together real and invented, human and AI elements to seamlessly blur the lines between them, Boucher exploits this chaos to create new semiotic spaces for radical meaning-making. Structurally, the work appropriates, satirizes, and detourns the forms and tropes of conspiracy theory, re-imagining them as a new form of art, and igniting them with the fuel of runaway AI.

While the contents of conspiracy theories often tend toward the ghoulish, harmful, or just plain wrong, they are inherently postmodern, acting as a vehicle for questioning established truths and power relationships—an activity which serves an important social function, even if in many cases misguided in its ultimate application. Conspiracy theorists reject grand “official” narratives and instead create their own ad hoc, temporary webs of meaning, challenging the legitimacy of the structures we rely on and deep beliefs previously taken for granted. The work asks a big question: could there be a way for art to reclaim this function of social critique that conspiracy theories currently embody in the popular consciousness, redirecting it towards more fruitful and creative ends?

The artist’s professional background in content moderation and censorship informs the work, at times borrowing from disinformation techniques by state actors observed in the field (repurposed as storytelling tools in open-ended creative networked narratives) and from SEO manipulations, to show how easily depictions of “reality” can be twisted and propagated. Misinformation is used here by the artist openly—not to deceive, but to reveal how fragile our systems for defining truth really are. The works expose the artist’s role as propagandist, deploying “weaponized” artifacts in an attempt to subversively actualize or undermine real or potential, current or future states.

As a satirist working with the mode of the conspiracy theorist, the artist knowingly inhabits and exaggerates the conspiratorial narrative forms they aim to critique, imploding them from within. As the Onion’s amicus brief on parody put it, “Parodists intentionally inhabit the rhetorical form of their target in order to exaggerate or implode it”—a technique central to this practice.

Inspired by Dada absurdity, the artist’s ‘Nevermades’—collections of AI-generated artworks appearing to involve famous museums and galleries—extend Duchamp’s readymades concept into the post-truth, remote-first digital age, challenging the idea that authenticity requires physical presence – or even actual existence in the first place. These imagined or “aspirational” artworks (like flooding the Guggenheim Museum, and filling it with willow trees and beavers) comment on the art world’s status symbols—galleries, exhibitions, facades and physical artifacts—that can now be artificially fabricated at scale, significantly challenging their value in an online world dominated by images.

The use of AI serves to heighten the inherent tensions in the work. AI is used consciously as both a force that flattens expression into sameness and conformity and as a tool to rebel against the algorithmic culture of likes, shares, and validation – by exploiting and exposing the outliers, anomalies, errors, and vulnerabilities of these technologies. By transparently incorporating AI, the work proudly wears the use of these technologies as a kind of “scarlet letter,” confronting head-on the stigma against its use in creative sectors, and reimagining it as a vehicle and medium all its own for artistic exploration. At the same time, it shines a light on the absurdities and limitations of these technologies, and holds a mirror up to our own evolving reactions to them.

Ultimately, this metamodernist body of work oscillates between the deadly serious and the dangerously stupid and absurd, revealing the fragile and easily manipulated nature of our information systems and the social and political systems which rely on them. The work encourages the audience to consider conspiracy theory as an unrecognized folk art form—provocative and dangerous, to be sure, but one in many ways much like any art or cultural movement that questions authority. It disrupts the established order, challenges accepted facts, and compels us to face the instability of the narratives we hold onto, and, in its best form, opens up the space to change how things are today into how they could be, how we would like them to be.

AI, misinformation, conspiracy, and hyperreality converge here to ask a simple but potent question: what is real, and who gets to decide?

Curatorial Statement: “Organic Data Weaving”

Tim Boucher’s “Organic Data Weaving” seamlessly merges the organic vitality of nature with the abstract logic of digital hyperreality. Woven willow sculptures, embodying the natural profusion of growth, stand alongside AI-generated projections that evolve across the gallery walls. The dynamic interplay between the physicality of willow forms and the insubstantiality of digital projections invites viewers to contemplate the convergence of artificial and organic intelligence.

The woven willow structures reflect the interconnectedness of data networks, echoing the visual representations of data relationships in the projected images. The sculptures’ interlocking patterns and dynamic curves mirror the fluid and shifting nature of data itself, presenting a dialogue between natural growth and the abstract forms of digital information. By juxtaposing these tangible and intangible elements, “Organic Data Weaving” reveals the complex, evolving narrative of our relationship with technology, nature, and the blurred boundaries of hyperreality.


That’s a curatorial statement I had ChatGPT help me write for a recent project of mine, an exploration of what woven willow sculptural forms juxtaposed with AI projected lights and imagery might look like. Photos from the “exhibit” are here.

I’ll pull out a few of my favorites to highlight below.

Without any more context or knowledge about the origins of these images, I would personally be hard-pressed to not take them at face value and believe they were actually cool sculptures which exist somewhere, or did at one time.

But in actual fact, they are nevermades which exist in a hyperreality adjacent to ours. They are aspirational image explorations on a theme, some using Dalle, some Ideogram AI. They are part of a larger experiment in misinformation as art.

But these raise a million other important questions for me as an artist. Namely, if I could essentially simulate a lifetime’s worth of artistic achievements in an evening, and get basically high-quality gallery photos of them as though they were real physical things, where does that leave us existentially relative to actual real physical things? Where does that leave us relative to a lifetime’s worth of artistic achievements?

In a world increasingly centered on the cult of the Almighty Image, where the Almighty Image is continuously exposed as a liar on its own altar at every turn, how are we to proceed?

I saw “real” photos from an art gallery setting in London earlier, and thought to myself that some of them look less high-quality than what I was able to generate with AI. The AI images literally look better than the real thing.

I think that’s what hyperreality is: getting sucked down that wormhole. And it’s exactly where we’re stuck now, collectively and individually.

Charlie Warzel’s piece in The Atlantic on hurricane disinfo goes down a parallel path in a somewhat different direction, but it intersects interestingly with this one:

What is clear is that a new framework is needed to describe this fracturing. Misinformation is too technical, too freighted, and, after almost a decade of Trump, too political.

Hyperreality stands out to me as a relevant and still potentially useful analytical framework that is wider and not so fraught, and which can encompass this idea of the “artist as propagandist” who creates unreal things in order to change or influence real things.

Also from Warzel’s piece:

But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.”

Interestingly, in other contexts outside of conspiracy fear-mongering, we often refer to people who can cling to an alternative vision of reality in the face of overwhelming opposition as “visionaries,” and we culturally usually cheer them on as they succeed in implementing that vision in actual reality. Unfortunately, an exceedingly great number of such “visionaries” in our day and age have been subsumed by vanity and wealth, and where they might have been, or might believe themselves to be, luminaries, they emit only a kind of sticky darkness…

To me these willow-works, both my IRL ones and my ORL (outside real life?) hyperreal ones, play somewhere in a space that lies orthogonally in opposition to all that. Willow to me is profusion, proof of abundant life, of generous, ridiculously abundant and productive life, of reified embodied living sunlight. The reality of that when you feel it in your hands shatters all false darknesses, and returns us somehow deeply, instinctually, ancestrally, immediately back in tune with the Overwave, the wave from which all other waves are born…

Originality, Skill & Judgement in Copyright Filing by CIPPIC Against Suryast

So without rehashing all the details, there is a non-profit in Canada called CIPPIC, whose work I respect, and who has provided me with assistance in the past on an unrelated matter. CIPPIC does good public interest work at the intersection of law and technology.

They recently submitted a filing in federal court in Canada (a few months ago now) asking for the correction of a copyright registration which was automatically granted by the Canadian Intellectual Property Office (CIPO) to a lawyer from India who used a style-transfer AI program to apply a Van Gogh Starry-Night-esque effect to an original photo they themselves took. As explained on the Baker Botts site, and I believe quoting how the board of the US Copyright Office decided after the same person attempted to secure copyright in the US prior to the Canadian filing:

The Board explained:

“As Mr. Sahni admits, he provided three inputs to RAGHAV: a base image, a style image, and a ‘variable value determining the amount of style transfer.’ Sahni AI Description at 11. Because Mr. Sahni only provided these three inputs to RAHGAV, the RAGHAV app, not Mr. Sahni, was responsible for determining how to interpolate the base and style images in accordance with the style transfer value … Mr. Sahni did not control where [the Works] elements would be placed, whether they would appear in the output, and what colors would be applied to them—RAGHAV did.” (Office Letter p. 7)

Accordingly, the Board determined that the derivative work authorship was not the result of a human. Therefore, the Work was not registerable.

The Baker Botts site also shows the base image, the style image, and the output image. Sadly, the output image is not, in my opinion as a visual artist, actually any “good.” But that’s beside the point here.

This article on Norton Rose Fulbright gives a bit more context on Suryast and CIPPIC’s opposition to the copyright registration for the work, as filed with CIPO. Some snipped quotes for length:

The copyright registration lists RAGHAV Artificial Intelligence Painting App (RAGHAV) and Mr. Ankit Sahni as co-authors.

CIPPIC’s application challenges the copyright registration for Suryast and seeks expungement of the copyright, or in the alternative, removal of RAGHAV as a co-author. CIPPIC makes two main arguments: 1) Suryast does not meet the originality requirement for copyright; and 2) an AI system cannot be an “author” under the Copyright Act.

[…] However, CIPPIC submits that merely providing the inputs was a purely mechanical process and no human skill or judgment was used to produce Suryast. CIPPIC further contends that “author” in the Copyright Act only refers to a natural person (i.e., “human being”), and an AI system cannot exercise the common intent required for joint authorship.

My understanding, based on this piece from York University’s Osgoode Hall Law School, is that the actual Copyright Act of Canada does not explicitly define original/originality as concepts.

The question of what constitutes “original expression” in a work, though, required an answer from the Supreme Court, given that the Copyright Act does not itself define the terms (nor the term “original” by itself). As every student of copyright in this country also learns, in CCH, Chief Justice McLachlin wrote that:

“What is required to attract copyright protection in the expression of an idea is an exercise of skill and judgment.”

CIPPIC’s filing against the Suryast registration can be read in its entirety here. This is the most relevant part, though it is short and easy reading overall, so I recommend checking it out if interested:

  1. CIPPIC raises two alternative grounds for rectification:
    a. the image lacks originality and so does not enjoy copyright at all; and
    b. alternatively, a non-human cannot be an author under the Act.

  i. The image is unoriginal

  2. The Suryast Registration should be expunged in its entirety pursuant to subsection 57(4)(b) of the Act because the image ought not to have been accepted for registration at all: the Respondent has obtained a copyright registration in connection with an image in which copyright cannot subsist because it lacks originality.
  3. The Respondent did not contribute sufficient skill and judgment in generating the image Suryast to warrant subsistence of copyright. The Respondent generated the image through a purely mechanical exercise of data entry and algorithmic luck; its production is the result of no exercise of human skill or judgment.

That wording seems to refer back to the CCH case mentioned above.

Without having read or carefully studied all of the Copyright Act, I would have to agree that the computer program used to produce the image should not have been listed as a co-author, and the registration should be amended for that reason.

For transparency, I registered my AI Lore Books series with CIPO, but I registered the whole thing in my name alone, despite having used AI tools to produce elements of the contents.

But I think I disagree with CIPPIC’s assertion that no human skill or judgement was involved in producing this image. It’s probably useful here to go back to the longer quote from the full text of the CCH decision:

By skill, I mean the use of one’s knowledge, developed aptitude or practised ability in producing the work. By judgment, I mean the use of one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work. This exercise of skill and judgment will necessarily involve intellectual effort. The exercise of skill and judgment required to produce the work must not be so trivial that it could be characterized as a purely mechanical exercise. For example, any skill and judgment that might be involved in simply changing the font of a work to produce “another” work would be too trivial to merit copyright protection as an “original” work.

So:

Skill = developed aptitude, practiced ability.

Judgement = capacity for discernment, ability to form an opinion, evaluation by comparing different possible options.

I’ve seen mentioned in multiple places now that Sahni submitted at the USCO’s request a 17-page document detailing how he created the image and the technology involved. But I’m not able to currently locate it myself, though I’d like to see what it includes.

I’m going to go out on a limb here, though, and say that despite my not really “liking” the results of the style transfer, CIPPIC hasn’t made much of any real case in that document to explain why Suryast fails to demonstrate skill and judgement. They merely state that it is so by calling what Sahni did a “mechanical process.” But I would argue that what Sahni did rises well above the example cited in the CCH decision of simply changing a font.

Simply knowing about AI/ML and style transfer is, on the part of the image’s (human) creator, the first demonstration that skill, aptitude, and ability may have significantly come into play in the image’s generation.

Without knowing the exact details of Sahni’s 17-page document detailing the creation of the image, it’s difficult to identify just how much skill and judgement was involved, but it seems to have absolutely been more than zero. According to this article, Sahni was the funder of the RAGHAV app which was built by an engineer named Raghav Gupta:

RAGHAV stands for robust artificially intelligent graphics and art visualizer, and is named after Raghav Gupta, a machine learning engineer who developed the app in 2019 in a funded project for Sahni.

An article on Holland & Knight adds a bit more from the USCO Suryast paper:

Footnotes 5 and 6 in the SURYAST decision discuss the lack of detailed evidence in the record as to how RAGHAV was designed and by whom (although RAGHAV was named for the engineer who developed the app for Mr. Sahni, Raghav Gupta). The Copyright Office, however, only had the vague description that RAGHAV was trained on a dataset of 14 million base images, called ImageNet, and then on another dataset of “content and style” images.

[…] If Mr. Sahni had designed RAGHAV and carefully selected its training materials, would that (in combination with taking the original photograph and selecting the style applied) constitute enough “creative control” for Mr. Sahni to assert authorship in the modification?

If the USCO is claiming they don’t have all the details about the RAGHAV software, then it makes me wonder what the contents of this mysteriously missing 17-page document actually were.

Regardless, even without that, it’s clear that judgement was involved: first in the selection of the base and style images, and then in the decision of how much style to transfer. All three of those components would have required the ability to form an opinion (“this is a good image to use”), and to evaluate, through comparison, possible input images, style transfer settings, and outputs received.

Never mind, of course, if Sahni did indeed hire someone to build this system for him based on criteria which he in part defined: all of that would have involved extraordinary skill and judgement as well.

So in conclusion, I would have to reiterate that I agree AI tools should not be listed on copyright registrations in Canada as co-authors, because they do not constitute persons. But CIPPIC hasn’t proved – nor has anyone else so far, to my satisfaction – that this work isn’t “original” under Canadian copyright law in that it lacks all skill and judgement and is merely a mechanical process. As someone who uses AI tools constantly for creative work, it’s easy to say that that framing doesn’t match the reality of using them: it’s a constant minefield, a battle, a struggle of skill and judgement, selection and direction, and so much more.

If for some weird reason this is interesting to people, then my longer submission to the USCO & CIPO public consultations on generative AI and copyright may also be. And I still think the current UK system for copyright of computer-generated works makes a lot more sense:

“The “author” of a “computer-generated work” (CGW) is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.”

My recent experiments with creating machines for the purposes of drawing have also made me fundamentally question the validity of the underlying assumptions behind dismissing something because it is the result of “merely mechanical” processes. The fact is that artists and creators can and do routinely manipulate mechanical processes in order to express a creative vision. And I’ll say it again: simply choosing one tool over another in order to express that creative vision shouldn’t magically invalidate it, when it would have been perfectly legitimate and accepted in another tool or medium.

The Impressionists Were Hated At First

I’ve been poking around for similar instances to the Salon des Refusés (1863) in the history of art since then to today, where a breakaway group of avant garde artists does their own thing (in opposition to a conventional/traditional academy, established school, or similar), and then eventually works by the members of that group become recognized as masterpieces in their own right.

It turns out the rise of the Impressionists some nine years later, in another independent exhibit and a series of subsequent exhibits, meets these same basic criteria. Unsurprisingly, these exhibitions were also met with their fair share of ridicule, as quoted from the Worldhistory.org article linked above:

People had laughed and made fun of many of the paintings. Some visitors were visibly angry at this art they could not or would not understand. There was even a danger of some canvases being physically attacked, as one artist noted in a letter to Dr. Paul Gachet, who had bought several of the works on display: “I’m standing guard over your Cézanne, but I can’t guarantee anything; I’m afraid it will be returned to you in shreds” (Bouruet Aubertot, 189). […]

Most critics found fault with the obvious brushwork of the artists, taking this as a sign of hurried and unfinished work. They disliked the lack of draughtsmanship and vague forms. The use of certain colours was highly unconventional, and the choice of subjects seemed bizarre. Some of the more extreme reactions from critics in the press included: “wallpaper in its early stages is much more finished than that” (Roe, 129) and “…these are paint scrapings from a palette spread evenly over a dirty canvas. There’s neither head nor tail, top nor bottom, back nor front” (Bouruet Aubertot, 189).

Most significant of the critics was Louis Leroy, since it was he who, after being left decidedly unimpressed by a Monet painting titled Impression, Sunrise – a view of Le Havre’s industrial harbour with a fierce orange sun reflected in purple waters – had labelled all of this perplexing exhibition’s art as ‘impressionism’. For Leroy, this was a derogatory term. Another significant critic was Albert Wolff of Le Figaro, who singled out Pissarro for particular criticism, stating that “in no country on earth will you find the things he paints” (Bouruet Aubertot, 216). […]

Wolff was there again to leave his acid comments: “a ruthless spectacle is offered…five or six lunatics…among them a woman…a group of unfortunate creatures stricken with the mania of ambition…” (Howard, 84). […]

Still, some harsh words were written about the exhibition such as “children entertaining themselves with paper and paints do better” (Roe, 179). […]

Those quotes cover the emergent group’s first three exhibitions at least, but I intend to keep dredging for other similar quotes showing those moments in art history when the critics and the naysayers were undeniably proven to have gotten it completely, utterly wrong.

Quoting Jason Allen on AI-Generated Art

More on Allen’s case against the US Copyright Office here.

