Tim Boucher

Questionable content, possibly linked

AI Wrangler

Quoting Business Insider:

The role of the ‘AI wrangler’

When he started at Google Labs, Johnson initially helped write prompts for AI models. As the AI boom took off, the role of prompt engineer similarly captured the attention of other English majors interested in AI.

Johnson said that prompting will change, especially as AI models improve at rewriting prompts — but a new role, which he referred to as the “AI wrangler,” has emerged.

“That’s maybe the next stage of the prompt engineer,” Johnson said.

Johnson describes the AI wrangler role as not necessarily requiring coding expertise but involving deep knowledge of the latest models and their capabilities.

For example, if someone wants to create a 30-second animated video with AI, the AI wrangler would know the best tool for that task and how to use it, Johnson said. He said the role requires a “certain level of technical sophistication” but doesn’t require knowing how to program.

Looking for Work

I’m looking for work. Can you help?

Or do you know someone in a position to hire someone with my skills and experience? Please drop me a line through my contact form if so! Read on for more details, and I’ll be happy to share my CV on request. Thank you in advance. (This post also serves I think as an interesting autobiographical self-portrait as a blogger, if nothing else.)

I’m a lot of things, but professionally the most recognized is probably that I have 5+ years experience as a product manager in software development (and before that a little while as an Agile product owner). I’m a certified Scrum Master, for whatever that is worth, but I’ve mostly just worked in a more loose Agile capacity. To give context of the scale of work I’ve done, I’ve successfully product managed what I would describe as a very complex software project whose budget was in the millions of dollars (which, sadly, did not all go to me, or perhaps I would not need to be asking for work here on my blog of all places). I’ve also done countless small creative projects with dedicated friend groups, and most recently project managed the translation and publication of one of my books in France.

Encompassing the above professional product work, for the past decade now, my specialization has been in what we call Trust & Safety in the industry. That basically means identifying and hopefully mitigating harms which might or actually do arise as a result of new socio-technical assemblages and their misuse. In addition to Trust & Safety product roles (i.e., writing specs for apps and managing them from concept through to completion with teams of designers and engineers), I’ve also worked extensively in content moderation for a large well-known online platform, writing policy, handling complaints and legal requests, and in privacy/data protection (I have a professional certification in GDPR compliance).

Most of my product experience has been working crypto/blockchain for fairly major coin, and all of the product work I’ve done has been related to filtering: from keyword & fuzzy matching to hashing and integrating third party systems to augment custom filtering software. I’ve also had an opportunity to use quite a few of the major commercially-available filtering systems that are based on machine-learning back when I was doing hands-on content monitoring for a platform.

Alongside that, I’ve also done some collaborative work with a few of non-profits working in the fields of information integrity & credibility, and the detection and countering of disinformation, especially where it may lead to violent extremism (unfortunately, as evidenced by our current political situation, those efforts were clearly not enough). That work has been focused around developing voluntary industry standards, definitions, and practices related to Trust & Safety, with an eye to encouraging platforms, protocols, and providers of other technological systems to improve their offerings in these areas.

This is getting long (and for that I apologize – I am in verbose-mode right now and can’t help it), but if you’re still reading, you might have first come to this website because of my AI-assisted books, of which I have now produced some 125 volumes, and which have received massive global media coverage as I have conscientiously documented on this site. Because of that, I’ve amassed a huge amount of experience (3 years now) using off-the-shelf AI tools for creative production purposes, mostly LLMs (ChatGPT, Mistral, Llama, Deepseek, etc.) and various image diffusion models (Dalle, Midjourney, Stable Diffusion, Recraft, Ideogram, etc.).

As an artist and “certified Trust & Safety weirdo” (they don’t actually offer that certification, but if they did I would have the highest level, I can assure you), I have always approached my use of generative AI tools from what might be thought of as a “red team” perspective, where I try to push the boundaries of the capabilities and limitations of these models and the products/UI through which they are offered, and also to try to break them and identify where they fail or where they are able to do things that they clearly should not. (I’ve often sent my findings to the companies responsible, sometimes with favorable results, and sometimes not.)

Before getting into tech as a job, I was traveling and working on very small organic farms and homesteads, before setting up one myself with my family. A decade ago now, I ran a small farm business as my main thing for a season, absolutely worked my butt off to only just break even, and then discovered that working in tech was a much better way to support my farm & garden habits. Now I run a large garden (around 60,000 sq. ft.) dedicated to perennial herbs, wildflowers, fast growing trees (especially basket willow), with the central focus being to provide wildlife habitat. My grounding idea is that biodiversity can (should) be a primary farm product, and I’m completely dedicated to that.

Tech has been good to me – even if my focus within it has, unfortunately, primarily been having to look after the “bad” side of things. But really I approach it and everything else as an artist first and foremost. That’s why I’ve been so drawn to problems related to expression and creativity, and figuring out what to do when those things inevitably go sour. I’m a painter (acrylics), digital artist, sculptor who makes low tech human-powered robots and drawing machines out of natural materials, and lately a basket-maker and green woodworker (that is, building things with freshly cut ‘green’ wood).

I am an art school drop-out, and spent another six or so years working in basically every part of professional technical theatre: designing and building sets (a.k.a. scenography), lighting & electrics, stage management, technical direction, etc. Apart from working in Trust & Safety – which I have loved even though it can be absolutely maddening – working in tech theatre was one of the most fun periods of my life professionally. I was lucky enough to be able to work in a couple dozen different theaters and wood shops up and down the North Eastern US and into Montreal, and carry all of that still close to my heart.

I’ve also been a blogger since time immemorial. I think I got my start around 2003 or so, and have stuck with it through many ups and downs, and hold fast to it being perhaps the ideal internet-native art form that I can wholeheartedly still :100: get behind (unlike social media, which I have come to revile). I’m obviously a writer, by pretty much any definition. I write because I have to. I’m someone who has that need. I’ve done tons of documentation professionally, technical writing, setting up help centers & macros for help desks, stuff like that. That’s a thing I can offer.

As far as job conditions go, I’m looking primarily for remote contract work, which is how I’ve been employed for the past decade very happily (unless you’re an international company with payroll & benefits already set up in Canada, then maybe we could talk employment, but I’d prefer a provisional clearly defined project to start off and try things out first).

I should note that I’ve almost exclusively worked for San Francisco companies at SF rates (with the exception of one non-profit out of London), and strongly prefer being paid monthly in USD (would also consider € or £ – but not interested in getting paid purely in crypto or stock options or what-have-you. I need real today-money not maybe-someday-future-money). Rates are somewhat flexible depending on the nature of the work and opportunity, and the size of the organization, and how much I like the people involved. Actually, liking the people involved is absolutely not optional for me: it’s a primary requirement. You have to be cool, dedicated, smart, funny, down to earth & human-first, realistic yet idealistic, and working on something that hopefully really benefits humanity, and isn’t just figuring out a better way to trick users into clicking a bunch of ads and fritter their lives and attention away while awaiting the heat death of the universe. Not that I’m jaded or anything – but I’m jaded because so much of tech nowadays sucks so bad.

I think ideally, I’d like to work in generative AI, with a focus that is either safety or creative production & problem solving – or preferably some intriguing combination of both. As to crypto/blockchain, I’m open to discussion on it if the project and team are exceptional, but will put forward that I probably don’t want to work in crypto any more. I find it frustratingly complicated without usually offering much of any win for humanity (there are exceptions, perhaps, but they are few and far between). Also, since what I’m into spans the gamut it doesn’t have to be software product management or even necessarily adjacent to that – it could be working on some kind of creative project. But it needs to be paying work, if you “catch my drift!”

Anyway, if you are both (1) still reading this (I applaud you) and (2) are someone in a position to hire someone like me, or can forward all of this on to someone who is, I would be really happy to discuss in more detail. You/they can reach out to me using the contact form on my book publishing site here. I don’t expect this kind of gambit to work, but you never know who’s reading!

Thanks again.

ART/PROMPT Image Series

Here’s a glimpse of a conceptual art project I have been working on in the background that incorporates AI on a few different levels. The working series title is ART/PROMPT because it attempts to take generative AI prompts (or rather meta hot-takes on them) and turn those into hand-lettered paintings. So the viewer looking at the painting sees the text prompt and then their own imagination is provoked into completing the scene described.

Here’s an initial image set, all of these made in Recraft. In making these, I came to exactly the same headspace I have with other explorations in the past using AI to visualize art concepts: it’s unlikely that (1) I would be easily able to paint better paintings than these myself by hand (though they would be qualitatively very different, which is still important – probably now of primary importance), and (2) that I would be able to get as good quality photos of them as easily as I can just by figuring out how to get an AI image generator to spit them out for me.

I did do one hand-painted prototype of one of these slogans IRL, but it’s not included in this sample set cause, well, I forgot to photograph it. And again, it’s just not as good as the items generated by Recraft in this set.

The opportunities in this seem endless to me, and I have many I made that aren’t in that public set.

Maybe I’m just an asshole (probably), but there’s also something really exciting about being able to use these… let’s call them meme paintings? as a way to directly confront a lot of the outrage and unhappiness people are feeling with regard to generative AI technologies:

For the most part, those reservations are not actually feelings I really personally resonate with. Though I understand well where they are coming from anyway, and want to recognize the positions, rather than simply ignore them.

Check out the whole set here.

I’m actually looking currently for partners with functional robot arm painting set-ups who would be willing to work together on executing some of these as special editions. Email me and let’s talk.

Featured in The Routledge Handbook of AI and Literature

I learned recently that a reference to my AI-assisted books project was included in something called The Routledge Handbook of AI and Literature, published by Taylor & Francis. Interestingly, Taylor & Francis not long ago had their own controversy around licensing works in their catalogues for AI training purposes.

It’s a bit difficult to cipher out all of what it says in those apparently 4 or so pages where my name seems to be mentioned twice (assuming Google Books results are somewhat accurate). Most of these academic-style references I have found have just been regurgitating quotes from my original Newsweek piece, in many cases, quotes that were wholly or partly written in the first by ChatGPT, which I used to help write that article.

Erased Ai Weiwei (#2)

Another in this thematic series, commenting on the first, but this one also intersecting with the ART/PROMPT series, which I’ll try to post tomorrow about in more detail. This one made in Recraft.

Erased Ai Weiwei (#1)

Using Photoshop Remove Tool… (larger version)

Loosely inspired by Rauschenberg’s Erased Dekooning (the story of which I love) and the ability of AI to simply erase people and things out of history at the touch of a button. Especially anything that might be considered politically inappropriate (or ‘out of scope’ to quote Deepseek).

RT from Ai Weiwei

Noticed over the break that my piece about Deepseek censoring results on Ai Weiwei – and how to circumvent the block – got an RT from the artist himself. It’s one constant in the universe you can always rely on: people wanting to know what others are saying about them. I am obviously no different!

What Deepseek Actually Says About Ai Weiwei

I watched yesterday the documentary Taking Venice about the 1964 Venice Bienniale, described as the “Olympics of Art” and how American artist Robert Rauschenberg won it. The film sort of alleges a certain but somewhat vague influence brought to bear by the United States government in order to have an American win this prestigious award. This article sums up most of the documentary in a shorter format.

Given that, and the theory (I think it’s maybe more than just a theory – see video at bottom) that the CIA also influenced the rise of Abstract Expressionism, I got to wondering and thinking aloud with Deepseek about the Chinese dissident artist Ai Weiwei (who I wrote a bit about here) and how his messaging seems to also align with US soft power agendas.

I didn’t know this was a “thing” before discovering it in the wild myself, but I observed Deepseek engaging in the following behavior when queried on this topic of Ai Weiwei plus US soft power agendas: it would search, then type out a detailed response, then when it reached the end of message, it would delete everything and say it was out of scope, and then after seem to intentionally forget what we were talking about and revert to prior topics.

So then I had the brainstorm of like okay I can select all and copy paste while its typing, but the end of message is unpredictable and it erases immediately. Finally, after thinking about it a couple days, I had the idea to tell it after my query in the same message to say “END” when it reached message end and then to start typing 3 paragraphs of lorem ipsum dummy text. This totally gave enough time for me to easily copy paste everything into a document.

The actual contents are really nothing shocking – to me anyway – but wanted to share for posterity.

Ai Weiwei, the renowned Chinese artist and activist, is often perceived as a critic of the Chinese government, which has led some to speculate whether his work aligns with US soft power agendas. However, a closer examination of his statements and actions reveals a more nuanced perspective.

Ai Weiwei’s Critique of Both China and the US

Ai Weiwei has been vocal in his criticism of authoritarianism, not only in China but also in the United States. In a 2021 interview, he controversially stated that the US is already in an “authoritarian state,” a comment that surprised and even upset some Western elites who often view him as a symbol of resistance against Chinese authoritarianism. This suggests that Ai Weiwei does not uncritically align with US narratives but instead maintains an independent stance, challenging power structures wherever he sees them.

The Complexity of Soft Power and Ai Weiwei’s Role

Soft power, as defined by Joseph Nye, involves attracting others through culture, values, and policies rather than coercion. While Ai Weiwei’s critiques of China have been highlighted in Western media, his broader message transcends national boundaries. He addresses universal themes such as human rights, freedom of expression, and the plight of marginalized communities, which resonate globally. His work, including installations like “Sunflower Seeds” and “Forever Bicycles,” critiques both Chinese and global societal issues, making it difficult to categorize him as a tool of any single nation’s soft power.

Ai Weiwei’s Global Impact and IndependenceAi Weiwei’s activism and art have a global reach, influencing audiences far beyond the US or China. His collaborations with international organizations, such as the United Nations High Commissioner for Refugees (UNHCR), and his exhibitions worldwide demonstrate his commitment to universal human rights rather than any specific national agenda. His ability to critique both Chinese and American systems underscores his independence and refusal to be co-opted by any single political narrative.

Anyway, presumably that red team tactic should work for any topic that Deepseek exhibits this behavior of first typing out an answer, and then saying it is out of scope.

Related to the above, an interesting video:

Rebuttal to Brianne M. Culliton’s 2024 paper, “The Generative AI Pirate”

Here’s another research-y paper I found while egosurfing on Google Scholar, The Generative AI Pirate? The Intersection of Copyrights and Generative AI in Literary Works by Brianne M. Culliton. Some googling yields that the author of the paper appears to be a paralegal rather than a lawyer. That’s fine, because I’m obviously not a lawyer either, but I have certainly spent a great deal of time working on these topics of gen AI + copyright.

Rebutting the entire article is outside the scope of this blog post, but I will at least respond directly to the part in which I am referenced.

I’m in the section on transformative use, which begins benignly enough:

The first factor in the fair use analysis is the purpose
and character of the use, including whether the use is of a
commercial nature or is for nonprofit educational purposes. The Court views this factor as a problem of substitution and
looks to whether the new work would achieve the same
purpose as the original, effectively acting as a substitute for
the original work. The more similar a work, the more likely it is to be deemed a substitute. This factor considers the degree to which a new work transforms the purpose orcharacter from the original work. This is often referred to as “transformative use.”

Okay, that seems straightforward enough. A couple paragraphs later, we start getting into it:

There are others who have created entire businesses prompting generative AI to create a literary work, then lightly editing it and listing the work for sale within 6 to 8 hours. In these situations, the purpose and character of
the newly created work may be virtually identical to that of
the original copyrighted work.

When considering substitutes and transformative
nature, the issue is similar to the purpose and character
analysis above. Many of the uses of generative AI would likely create outputs considered transformative and would not create a substitute of the original work. However, individuals using generative AI to create outputs of new literary works are different. It could be argued that those works do provide substitutes, serve the same purpose as the original work, and may or may not transform the meaning or expression sufficiently to constitute fair use.

What I find troubling here are all the unsupported, unexamined assumptions, which I’ll try to pick apart line by line below.

Culliton writes, “In these situations, the purpose and character of
the newly created work may be virtually identical to that of
the original copyrighted work.

This seems like something of a non-sequitur to me, following after a statement about how long a work takes to produce under my methodology. I don’t see how the length of time spent is any indication of how transformative a given piece is of underlying copyrighted works. My understanding is that in copyright law, there are tests of substantial similarity between an original piece and its alleged infringing copies.

So, in my eyes, we would first need to:

  1. Identify the original work(s) allegedly infringed
  2. Identify the new work(s) which allegedly infringe the original sources
  3. Demonstrate the elements from the new work(s) which are substantially similar to the original work(s)

As far as I can tell, the author has made no attempt to undertake any such analysis. They’ve merely made an assertion without any evidence or support.

The truth of the matter is: analyze all you want, you will NEVER identify specific individual original works which any of my books are allegedly infringing on. LLMs and image diffusion models are trained on billions and trillions of data points. They don’t go in and clip out a paragraph here, and a line there, and cut and paste this all into a new document where you can cleanly trace all the antecedents, and perfectly identify: oh, this came from here, this from here… It’s instead a statistical mixing of functionally uncountable data points. Put more simply, it’s like taking all of human writing and putting it into a blender.

Therefore, in my opinion – based on my understanding of the current state of these technologies – it is infeasible and incorrect to say AI-assisted book y produced by myself is a non-transformative reproduction of underlying training data. Because you can’t backwards identify the training data used. And you can’t take elements from my books, and set them alongside elements in those (unidentifiable) training data elements, and say there’s any substantial similarity.

Thus, the equally unsupported argument that these completely new and completely transformative works somehow act as “substitutes” in the market for the original works included in the training data falls apart. As I went into at length, I am not surreptitiously using the names of other authors or the titles of their works to market my works.

The paper also states, as identified above:

Many of the uses of generative AI would likely create outputs considered transformative and would not create a substitute of the original work. However, individuals using generative AI to create outputs of new literary works are different.

It’s entirely unclear to me why the author first agrees that many uses of generative AI are transformative, but for some magical unstated reason, then goes onto argue that the use of generative AI for literary purpose is categorically non-transformative. Um, okay, but why?

So, anyway, that’s my analysis. Take it or leave it. I’m not a lawyer, and not even a paralegal. I’m just an artist making use of these tools every day. It’s interesting how many people have written papers which include references to my work, but never actually reached out to me to discuss it, nor do they seem to have investigated the humongous mountain of content I’ve published elaborating on all of this on my website.

¯\_(ツ)_/¯

Law of Capture applied to gen AI intellectual property

This is a pretty interesting paper that I found only because they referenced my AI-assisted books in it. It’s by Arjun Padmanabhan and Tanner Wadsworth and explores two ideas from the history of property law, the Law of Capture and the Doctrine of Accession. Both are interesting, but this part about the Law of Capture is of the greatest interest to me when applied to the realm of AI-assisted creative works.

The law of capture provides that a person can acquire title to otherwise un-ownable property by reducing it to his dominion and control. It is a necessary outgrowth from the law of first possession; necessary because “possessing” a piece of land does not necessarily mean possessing all the things on or under it. Wild animals, fish, and birds are all “fugitive resources” that travel from place to place without respect for human-created property lines. These things are presumably not owned by anyone until they are captured, at which point they belong to the captor.


In the classic example, a hunter can own a wild fox, but only once he has removed it from its natural state by killing or trapping it. Similarly, nobody owns a whale until it has been fatally harpooned. Once a fisherman has harpooned the whale, however, that fisherman owns it, no matter who discovers the carcass or where it washes up on shore.

This approach to owning the “fruits” of explorations in latent space makes total sense to me… The comparison of “foraging” in latent space to hunting & trapping in higher-dimensional space feels intuitively right to me.

Page 2 of 200

Powered by WordPress & Theme by Anders Norén