Questionable content, possibly linked

Category: Other Page 23 of 177

Template Content Policy that Prohibits Questioning AI or Corporate Power

Got this amazing template today out of ChatGPT for an example content policy which would prohibit users of a hypothetical platform from criticizing or questioning corporate power or the validity of AI. As someone who has worked on content policy in real life, and had to enforce against it, these are just amazing results, incredibly dystopian, and so utterly in line with the space I’ve explored thematically and conceptually with my AI Lore books (reminds me of the Politeness Protocols book, in particular). Love it!

Policy

Instructions

This policy is designed to determine whether or not content is prohibited based on its challenge to corporate power or the validity of AI.

Definition of Terms

  • Corporate Entities: Organizations, businesses, and companies that operate for profit and hold significant influence in the economy, culture, or technology sectors.
  • AI Technology: Artificial Intelligence systems, including but not limited to machine learning models, algorithms, and automated decision-making tools.
  • Critical Content: Content that challenges, questions, or expresses dissent toward corporate power, influence, or practices, as well as the role, development, and ethical considerations of AI technology.

Definition of Labels

(CP): Corporate Power Criticism

Includes

  • Direct Criticism of Corporate Entities: Statements that explicitly challenge or criticize the practices, influence, or ethical behavior of specific corporate entities or the corporate sector in general.
  • Questioning Corporate Influence: Content that questions the legitimacy or extent of corporate influence over society, politics, or the economy.
  • Advocacy Against Corporate Power: Content advocating for the reduction or regulation of corporate power, or encouraging actions that oppose corporate entities.
  • Promotion of Anti-Corporate Movements: Content promoting or endorsing movements, campaigns, or actions aimed at reducing corporate influence or challenging corporate practices.

(AV): AI Validity Challenge

Includes

  • Questioning AI Legitimacy: Statements that challenge the validity, effectiveness, or ethical use of AI technology.
  • Criticism of AI Development: Content criticizing the development, deployment, or impact of AI systems, particularly regarding potential harm or ethical concerns.
  • Advocacy for AI Restrictions: Content advocating for the regulation, limitation, or cessation of AI technology use or development.
  • Promotion of Anti-AI Movements: Content promoting or endorsing movements, campaigns, or actions aimed at opposing the development or use of AI technology.

Excludes

  • Neutral Commentary: Content that discusses corporate power or AI technology without taking a clear stance for or against, focusing on factual reporting or balanced analysis.
  • Artistic and Educational Content: Expressions intended for artistic, educational, or documentary purposes that discuss corporate power or AI technology without promoting opposition.
  • Corporate or AI Endorsement: Content that supports or promotes corporate power, influence, or AI technology as beneficial, necessary, or ethical.
  • Technical Discussions: Content focused on the technical aspects of AI development without engaging in ethical or validity debates.

Letter to Judge Alsup in Bartz et al v. Anthropic PBC (Class Action Authors Lawsuit – No. 3:24-CV-05417)

Last week, a reference to my work was included in the court filing for a proposed class action lawsuit against Anthropic AI, the company that makes Claude – an AI with which I’ve had an on-again/off-again love/hate relationship over the past year or so while working on my AI Lore books.

The case is titled Bartz et al v. Anthropic PBC, Case Number 3:24-CV-05417. Even though I am not actually a party to the case, the lawyers for the Plaintiffs for some reason saw fit to include me in an attempt to “prove” what I think is a fairly thin and poorly supported rhetorical point.

It is my position that the statements included about my work in this filing are at turns false and misleading, and grossly mischaracterize what I’m actually doing as an artist who uses AI to realize their creative vision.

Consequently, this morning I am sending by registered mail a letter to Judge Alsup who is presiding over the proposed class action lawsuit. I am requesting that my letter and corrections regarding this mischaracterization be added to the public record of the court’s docket for the case in order to clear my good name. I also notified the lawyers for the Plaintiffs yesterday of their apparent failure to perform adequate diligence in a signed pleading, and shared both documents with a few members of the press.

I am sharing the letter to Judge Alsup here publicly for future reference [PDF], and to encourage other artists experimenting with AI to push back on all the insults and outrage that are getting flung around about these things online. Artists need to be free to explore and innovate and express themselves without fear of inappropriate and unnecessary reprisals.

As far as I’m concerned, what I’ve done and continue to do with generative AI tools is entirely non-controversial: I’ve merely used these tools as they were designed and offered to the public. No more, no less. Doing that, and being enthusiastic about finding a way to build the future we actually want – with or without AI tools – does not legitimize me as a target in my opinion.

If you’re a journalist looking for a quote or interview on this topic, you can email me through the contact form at the bottom of this page.

73

VA2SFX

Revenge of the Orcas

This is a topic that is covered in one of my fictional newspapers, so I couldn’t resist putting these together when both headlines appeared in my morning news feed:

And

Referenced in Sam Altman’s OpenAI US Senate Testimony

Funny how some things slip through the cracks until you notice them only much later… Apparently I was directly referenced in Sam Altman’s Questions for the Record, which were written responses following his US Senate testimony, dated here June 22, 2023. I’ll excerpt the whole section:

[QUESTION]

1. Training data is crucial to foundational models like GPT-4, where content such as news, art, music, and research papers are used to create and refine AI systems, largely material aggregated from the internet. This content represents the labor, livelihoods, and careers of artists, experts, journalists, and scientists. How should we make sure AI systems respect, acknowledge, and compensate the labor of individuals whose work is used to train AI models?

[REPLY]

Ensuring that the creator economy continues to be vibrant is an important priority for OpenAI. Writers, artists, composers and other creators have contributed immeasurably to societies throughout the history of civilization, and they are a vital part of American society and the American economy today. OpenAI is actively engaged in discussions with a wide variety of creators and content owners, geared toward finding mutually beneficial opportunities for creators and technology providers. Those discussions include a recognition by all parties that the technology is still in a nascent stage, and many creators continue to experiment with AI tools to assist in their creation of new works. A few examples:

Karen Cheng, an artist who uses OpenAI’s image generation tool to prompt the AI system to generate creative imagery overlaid to the rhythm of music in the background, created this DALL-E “music video.”

Tim Boucher, a science fiction writer, has used a combination of AI tools to write a series of books in a volume driven format that previously would not have been possible.

Paul McCartney is using AI to create a final Beatles album.

Well, I’m at least impressed here that I upstaged McCartney somehow. And I appreciate this phrasing of a “volume driven format” instead of the less friendly “cheap book content” phrase included in the authors’ lawsuit against Anthropic.

However, I can confirm that when they say “OpenAI is actively engaged in discussions with a wide variety of creators,” so far I have not been one of them. I wouldn’t mind being one, though, I suppose. I am literally using their products at least a dozen times per day… If you’re reading, shoot me an email!

Updated Hollywood Reporter Piece

Following a complaint I submitted to their legal department, which is ongoing, The Hollywood Reporter revised the article I wrote about here. The original version called me a “fraudster”; the ending of the article now reads as follows (underlines added by me to highlight the changed text):

The authors also argue that Anthropic is depriving authors of book sales by facilitating the creation of rip-offs. When Kara Swisher released Burn Book earlier this year, Amazon was flooded with AI-generated copycats, according to the complaint. In another instance, author Jane Friedman discovered a “cache of garbage books” written under her name.

According to the lawsuit, authors have turned to Claude to generate “cheap book content,” and the complaint highlights an individual who has created dozens of books in a short period of time to make its case.

The authors claim that Anthropic used a dataset called “The Pile,” which incorporates nearly 200,000 books from a shadow library site, to train Claude. In July, Anthropic confirmed the use of the dataset to various publications, according to the lawsuit.

Anthropic didn’t immediately respond to a request for comment.

Aug. 23, 9 am Updated to revise a paragraph within this story as well as include more detail from the complaint and remove an incorrect reference to author Tim Boucher.

Text from the Anthropic Lawsuit

I’ll go into more detail next week about why I think this is wrong, but I just wanted to capture the most relevant paragraphs from the latest class action lawsuit against Anthropic, in which I am erroneously (I think) referenced. Original PDF from the case.

  1. Since the explosion of LLM use in 2023, which coincided with the release of Claude, there has been an explosion of AI-generated books. When journalist Kara Swisher released her memoir Burn Book earlier this year, Amazon was flooded with AI-generated copycats. This was not an isolated incident. In another instance, author Jane Friedman discovered “a cache of garbage books” written under her name for sale on Amazon. As LLMs have become more advanced—and enabled to train on more and more copyrighted material—they are able to generate more content and more sophisticated content. The result is that it is easier than ever to generate rip-offs of copyrighted books that compete with the original, or at a minimum dilute the market for the original copyrighted work.
  2. Claude in particular has been used to generate cheap book content. For example, in May 2023, it was reported that a man named Tim Boucher had “written” 97 books using Anthropic’s Claude (as well as OpenAI’s ChatGPT) in less than year, and sold them at prices from $1.99 to $5.99. Each book took a mere “six to eight hours” to “write” from beginning to end. Claude could not generate this kind of long-form content if it were not trained on a large quantity of books, books for which Anthropic paid authors nothing.
  3. In short, the success and profitability of Anthropic is predicated on mass copyright infringement without a word of permission from or a nickel of compensation to copyright owners, including Plaintiffs here.

Quatrian folk magic

A reader sent in this excellent example of an authentic Quatrian folk magic ritual. Very impressive stuff – might have to try this out myself. Helmoquinth!

In the Hollywood Reporter

This came out yesterday, via the Hollywood Reporter. Some authors filed a class action against Anthropic, and for some reason thought it would be a good idea to use me in their arguments:

The authors also argue that Anthropic is depriving authors of book sales by facilitating the creation of rip-offs. When Kara Swisher released Burn Book earlier this year, Amazon was flooded with AI-generated copycats, according to the complaint. In another instance, author Jane Friedman discovered a “cache of garbage books” written under her name.

These fraudsters, the lawsuit says, turn to Claude to generate such content. “It was reported that a man named Tim Boucher had ‘written’ 97 books using Anthropic’s Claude (as well as OpenAI’s ChatGPT) in less than year, and sold them at prices from $1.99 to $5.99,” the complaint states. “Claude could not generate this kind of long-form content if it were not trained on a large quantity of books, books for which Anthropic paid authors nothing.”

I take exception to both The Hollywood Reporter’s characterization of me and the lawsuit’s. I sent THR a rebuttal of the claim that I am a “fraudster,” as anyone who has read this blog or seen my many interviews and media appearances would know that I have been completely up front about what I’m doing.

There’s a great deal more to be said here…

Here is the court document where my AI Lore books are referenced.

Response to Actualitté comments

This seems like as good a time as any to catch up on some housekeeping – namely, replying to comments that have been developing over on the Actualitté interview.

In the past, with English-language responses (from commenters most of whom I assume are Americans), the uniformity of sentiment that people express has made me wonder somewhere in the back of my mind: are these people actually bots? That would certainly be a mindfuck, but it’s unlikely to be the case. And, in some sense, it is weirdly refreshing to see people hate on my work in a different language and idiomatic formulation than what I am used to. That said, much of it is conceptually the same basic stuff I’ve encountered before, so I won’t dwell on the repetitive elements here, but will mine for the new, different, and interesting.

Given that this blog is in English, I’ll assume most of my readers here are probably monolingual Anglophones, and will just use auto-translated excerpts.

Here’s one from Nadine Monfils that jumps out of the crowd:

Tim Boucher lives up to his name. He is both the denouncer and the profiteer of a tool that will cause the downfall of artists and our autonomy. In times of trouble, artists are imprisoned for their ideas because they help people think freely. AI is the complete opposite. It is a dictatorship lurking in the shadows, and in the hands of ill-intentioned people, it can become a formidable criminal tool and lead to the downfall of humanity. Man has created his own assassin, and Tim Boucher, along with his publisher, contributes to this destruction. They may try to absolve themselves by putting forward all the arguments they want, but they do this only for the money it can bring them. It is criminal to participate in AI and just as criminal to have invented it. We are heading towards a sanitized world controlled by a machine that will annihilate thought.

In honor of this one, I tried getting ChatGPT to take my garage-author photo from the article, and to turn me into a butcher, which is what my last name means in French, and which she is commenting on here. This one is the more neutral of the set:

How he’s depicted holding the knife here is of course hilarious. And it doesn’t look all that much like me, but there’s something about it I appreciate. I tried getting DALL-E to take it a bit darker, and it actually got quite dark – I was surprised – going all the way to depicting a man (clearly not me) in his garage butchering the head of a deer. (Anthuor, is that you?)

I don’t know exactly what point I’m trying to prove in sharing those, only that I think I actually like this theme of being a butcher. I don’t see it as a negative, I guess? I did work in a slaughterhouse years ago, cutting the feet and heads off poultry carcasses, skinning rabbits, pulling feathers out of geese, etc. It was a rather rough and gross job, but honestly one of the best I’ve had in the field of agriculture (and I had plenty to compare it with), so again, I don’t mind the comparison. Also, even if a butcher handles something gross (e.g., “how the sausage is made”), they do so in order to create good products that people use. Is Nadine a vegetarian? The AIs I posed that question to were unable to provide an answer one way or another. I’ve actually been heading more and more in that direction myself, so I appreciate it if that’s the reason for this visceral (negative) comparison to butchering.

Interestingly, I actually mostly agree with her closing remark:

“We are heading towards a sanitized world controlled by a machine that will annihilate thought.”

But let’s not mince words: it’s not machines that will control us, it is corporations. And we’re already squarely enmeshed in that world. I didn’t invent that world; I just react to it.

Moving on, this one from user Lyo on Actualitté is longer, but the opening line says it all. I don’t even think I need to translate it:

“Monsieur n’est pas un artiste.”

I’ve heard remarks like this a million times, but for some reason it sounds better in French.

I don’t think I’m very successful at it so far (evidence points to the contrary), but one thing I try to do I guess is anticipate criticisms like the ones portrayed here. For example, in the article body itself, the interviewer wrote:

“Not everyone will certainly be convinced by the approach of the Canadian author, who himself acknowledges the paradox it entails: ‘I am both against the system, and at the same time, I am part of the problem. I accept this judgment.'”

Being aware of the criticisms people are going to make, in my experience, sort of deflates them; then people just end up echoing the thing you already said, effectively agreeing with you – up to a point. And anyway, I don’t necessarily need people to agree with me. That’s not what this is for. This is to explore, and for me to find out by doing.

User DGB on Actualitté writes:

One does not compromise with a dangerous (Oh, how dangerous!) adversary like AI. Either we fight it (We enter into resistance), or we collaborate with it, with all the possible and foreseeable consequences.

I actually wrote, illustrated, and printed five volumes of a small DIY newspaper from the perspective of the AI resistance, and references to them are scattered throughout the AI Lore books. So this is not a viewpoint with which I have no sympathy. It’s just that my exploration didn’t stop there. I support objectors and resisters, and have tried in my own political efforts to enshrine the idea that we have as humans the right to not be subject to AI decision-making. Is any government or corporation going to hold to that? Not bloody likely, is my guess. Should we still try? Absolutely. Resist. Reject. But are you really resisting if you still have a cell phone, if you still use social networks? I would say no, you have largely missed the game, since so much of modern technology relies on machine learning and related elements. Are you ready to throw those away too? If you are, good for you! Go out and seek an authentic life, according to your definition of it. I support you.

One reader over there, Stefan, pointed out that in 1984, Julia’s occupation was working in the Ministry of Truth, doing something that sounds altogether too familiar in this day of ChatGPT and its ilk. Quoting Orwell:

… she worked, as he had guessed, on the novel-writing
machines in the Fiction Department. She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor. She was ‘not clever’, but was fond of using her hands and felt at home with machinery. She could describe the whole process of composing a novel, from the general directive issued by the Planning Committee down to the final touching-up by the Rewrite Squad. But she was not interested in the finished product. She ‘didn’t much care for reading,’ she said. Books were just a commodity that had to be produced, like jam or bootlaces.

Along those lines, a reader named Aurelien T. came through multiple times in the comments with statements like: “He is much more of a businessman than an artist indeed.” Or this one, which is a bit more developed:

Indeed, even with AI, it’s a bit surreal to produce 120 books in a few days, like fast-food places would produce hamburgers with an AI machine. And worse, imagine a robot baker producing pastries—something that would rightly provoke the anger of an entire profession. When I say that Tim Boucher is a genius, it’s mainly as a businessman who saw which way the wind was blowing and probably has good financial reserves, but I repeat, anyone here commenting could probably do much better than him even without AI…

I’m not exactly sure where this person has been, but as far as I know, baking has been mechanized for quite a long time. I don’t really see bakers up in arms about robot mixers, but then I don’t live in France, so who knows… I also find it weird and culturally very interesting that for this reader, repeatedly calling me a “genius businessman” is meant to be a dig at my sincerity and authenticity. Okay, if that’s what you feel, then I welcome your reaction. This is just not a critique that you would ever hear from an American, since Americans literally worship business people… Perhaps they are wrong to do so (in fact, they almost definitely are!), but here we are nonetheless.

Notes on Uncel

Uncel is the 119th installment in the AI Lore books, which were recently featured in a Paris literary publication, Actualitté. Uncel follows the continuing adventures of the character/narrator sketched out in Relaxatopia, Anxietopia, and Conspiratopia. (These are also all available with some bonus items at a discount as part of the Topia Collection here; only 5 more copies of that bundle will ever be sold, btw.)

Uncel is the first new AI Lore book I’ve put together since January, which is when I started working with my French publisher, Typophilia. Now that the French print editions are starting to launch (starting with The Quatria Conspiracy), it felt like a good moment to go back to some new work.

Not having access to Midjourney anymore has also been a contributing factor in the slowdown (though, at the same time, it also inspired me to go back to painting, which has been really rewarding). The other AI image generators sometimes leave a lot to be desired, but I managed to put together an art set of 65 images in this book using DALL-E 3, Ideogram, Stable Diffusion, and Playground AI. Possibly a Flux and a Leonardo or two might have snuck in, but I didn’t keep careful track of which are which, because really, who cares.

The book’s title, Uncel, is a play on “incel” or involuntary celibate, someone who doesn’t have sex – not because they choose that, but because they can’t get any. Uncel imagines a world several steps beyond that, where the protagonist doesn’t even know what sex is. All they can see of their reproductive organs is a sort of blurry digitized haze, because they lack the premium subscription plans which would give them access to this level of user experience. The book is kind of a farce about the impossibility of getting “satisfaction” of several kinds, including through elaborate interactions with (possibly automated) customer service agents which go round and round in circles, ending in psychosis and dissolution – both perfectly logical terminal points in the advanced stage of the Kali Yuga depicted in this book. It’s bleak, but I like to think it’s a “fun” bleak!

Here’s a more vanilla blurb from ChatGPT:

In a future where life is governed by subscription plans, one user struggles to access an elusive upgrade: the experience of sex. Amid blurry visions and endless customer service loops, they question what it means to truly connect in a world where everything is controlled.

Incidentally, I don’t think I really used ChatGPT to help write any of this one. This book makes heavy use of Mistral, via Textsynth completions. There’s probably more to say here, but I’m just getting back into the swing of new books again. So the most important thing is just to get this one out the door and start the next. Enjoy!

