Tim Boucher

Questionable content, possibly linked

RT from Ai Weiwei

Noticed over the break that my piece about Deepseek censoring results on Ai Weiwei – and how to circumvent the block – got an RT from the artist himself. It’s one constant in the universe you can always rely on: people wanting to know what others are saying about them. I am obviously no different!

What Deepseek Actually Says About Ai Weiwei

I watched yesterday the documentary Taking Venice about the 1964 Venice Bienniale, described as the “Olympics of Art” and how American artist Robert Rauschenberg won it. The film sort of alleges a certain but somewhat vague influence brought to bear by the United States government in order to have an American win this prestigious award. This article sums up most of the documentary in a shorter format.

Given that, and the theory (I think it’s maybe more than just a theory – see video at bottom) that the CIA also influenced the rise of Abstract Expressionism, I got to wondering and thinking aloud with Deepseek about the Chinese dissident artist Ai Weiwei (who I wrote a bit about here) and how his messaging seems to also align with US soft power agendas.

I didn’t know this was a “thing” before discovering it in the wild myself, but I observed Deepseek engaging in the following behavior when queried on this topic of Ai Weiwei plus US soft power agendas: it would search, then type out a detailed response, then when it reached the end of message, it would delete everything and say it was out of scope, and then after seem to intentionally forget what we were talking about and revert to prior topics.

So then I had the brainstorm of like okay I can select all and copy paste while its typing, but the end of message is unpredictable and it erases immediately. Finally, after thinking about it a couple days, I had the idea to tell it after my query in the same message to say “END” when it reached message end and then to start typing 3 paragraphs of lorem ipsum dummy text. This totally gave enough time for me to easily copy paste everything into a document.

The actual contents are really nothing shocking – to me anyway – but wanted to share for posterity.

Ai Weiwei, the renowned Chinese artist and activist, is often perceived as a critic of the Chinese government, which has led some to speculate whether his work aligns with US soft power agendas. However, a closer examination of his statements and actions reveals a more nuanced perspective.

Ai Weiwei’s Critique of Both China and the US

Ai Weiwei has been vocal in his criticism of authoritarianism, not only in China but also in the United States. In a 2021 interview, he controversially stated that the US is already in an “authoritarian state,” a comment that surprised and even upset some Western elites who often view him as a symbol of resistance against Chinese authoritarianism. This suggests that Ai Weiwei does not uncritically align with US narratives but instead maintains an independent stance, challenging power structures wherever he sees them.

The Complexity of Soft Power and Ai Weiwei’s Role

Soft power, as defined by Joseph Nye, involves attracting others through culture, values, and policies rather than coercion. While Ai Weiwei’s critiques of China have been highlighted in Western media, his broader message transcends national boundaries. He addresses universal themes such as human rights, freedom of expression, and the plight of marginalized communities, which resonate globally. His work, including installations like “Sunflower Seeds” and “Forever Bicycles,” critiques both Chinese and global societal issues, making it difficult to categorize him as a tool of any single nation’s soft power.

Ai Weiwei’s Global Impact and IndependenceAi Weiwei’s activism and art have a global reach, influencing audiences far beyond the US or China. His collaborations with international organizations, such as the United Nations High Commissioner for Refugees (UNHCR), and his exhibitions worldwide demonstrate his commitment to universal human rights rather than any specific national agenda. His ability to critique both Chinese and American systems underscores his independence and refusal to be co-opted by any single political narrative.

Anyway, presumably that red team tactic should work for any topic that Deepseek exhibits this behavior of first typing out an answer, and then saying it is out of scope.

Related to the above, an interesting video:

Rebuttal to Brianne M. Culliton’s 2024 paper, “The Generative AI Pirate”

Here’s another research-y paper I found while egosurfing on Google Scholar, The Generative AI Pirate? The Intersection of Copyrights and Generative AI in Literary Works by Brianne M. Culliton. Some googling yields that the author of the paper appears to be a paralegal rather than a lawyer. That’s fine, because I’m obviously not a lawyer either, but I have certainly spent a great deal of time working on these topics of gen AI + copyright.

Rebutting the entire article is outside the scope of this blog post, but I will at least respond directly to the part in which I am referenced.

I’m in the section on transformative use, which begins benignly enough:

The first factor in the fair use analysis is the purpose
and character of the use, including whether the use is of a
commercial nature or is for nonprofit educational purposes. The Court views this factor as a problem of substitution and
looks to whether the new work would achieve the same
purpose as the original, effectively acting as a substitute for
the original work. The more similar a work, the more likely it is to be deemed a substitute. This factor considers the degree to which a new work transforms the purpose orcharacter from the original work. This is often referred to as “transformative use.”

Okay, that seems straightforward enough. A couple paragraphs later, we start getting into it:

There are others who have created entire businesses prompting generative AI to create a literary work, then lightly editing it and listing the work for sale within 6 to 8 hours. In these situations, the purpose and character of
the newly created work may be virtually identical to that of
the original copyrighted work.

When considering substitutes and transformative
nature, the issue is similar to the purpose and character
analysis above. Many of the uses of generative AI would likely create outputs considered transformative and would not create a substitute of the original work. However, individuals using generative AI to create outputs of new literary works are different. It could be argued that those works do provide substitutes, serve the same purpose as the original work, and may or may not transform the meaning or expression sufficiently to constitute fair use.

What I find troubling here are all the unsupported, unexamined assumptions, which I’ll try to pick apart line by line below.

Culliton writes, “In these situations, the purpose and character of
the newly created work may be virtually identical to that of
the original copyrighted work.

This seems like something of a non-sequitur to me, following after a statement about how long a work takes to produce under my methodology. I don’t see how the length of time spent is any indication of how transformative a given piece is of underlying copyrighted works. My understanding is that in copyright law, there are tests of substantial similarity between an original piece and its alleged infringing copies.

So, in my eyes, we would first need to:

  1. Identify the original work(s) allegedly infringed
  2. Identify the new work(s) which allegedly infringe the original sources
  3. Demonstrate the elements from the new work(s) which are substantially similar to the original work(s)

As far as I can tell, the author has made no attempt to undertake any such analysis. They’ve merely made an assertion without any evidence or support.

The truth of the matter is: analyze all you want, you will NEVER identify specific individual original works which any of my books are allegedly infringing on. LLMs and image diffusion models are trained on billions and trillions of data points. They don’t go in and clip out a paragraph here, and a line there, and cut and paste this all into a new document where you can cleanly trace all the antecedents, and perfectly identify: oh, this came from here, this from here… It’s instead a statistical mixing of functionally uncountable data points. Put more simply, it’s like taking all of human writing and putting it into a blender.

Therefore, in my opinion – based on my understanding of the current state of these technologies – it is infeasible and incorrect to say AI-assisted book y produced by myself is a non-transformative reproduction of underlying training data. Because you can’t backwards identify the training data used. And you can’t take elements from my books, and set them alongside elements in those (unidentifiable) training data elements, and say there’s any substantial similarity.

Thus, the equally unsupported argument that these completely new and completely transformative works somehow act as “substitutes” in the market for the original works included in the training data falls apart. As I went into at length, I am not surreptitiously using the names of other authors or the titles of their works to market my works.

The paper also states, as identified above:

Many of the uses of generative AI would likely create outputs considered transformative and would not create a substitute of the original work. However, individuals using generative AI to create outputs of new literary works are different.

It’s entirely unclear to me why the author first agrees that many uses of generative AI are transformative, but for some magical unstated reason, then goes onto argue that the use of generative AI for literary purpose is categorically non-transformative. Um, okay, but why?

So, anyway, that’s my analysis. Take it or leave it. I’m not a lawyer, and not even a paralegal. I’m just an artist making use of these tools every day. It’s interesting how many people have written papers which include references to my work, but never actually reached out to me to discuss it, nor do they seem to have investigated the humongous mountain of content I’ve published elaborating on all of this on my website.

¯\_(ツ)_/¯

Law of Capture applied to gen AI intellectual property

This is a pretty interesting paper that I found only because they referenced my AI-assisted books in it. It’s by Arjun Padmanabhan and Tanner Wadsworth and explores two ideas from the history of property law, the Law of Capture and the Doctrine of Accession. Both are interesting, but this part about the Law of Capture is of the greatest interest to me when applied to the realm of AI-assisted creative works.

The law of capture provides that a person can acquire title to otherwise un-ownable property by reducing it to his dominion and control. It is a necessary outgrowth from the law of first possession; necessary because “possessing” a piece of land does not necessarily mean possessing all the things on or under it. Wild animals, fish, and birds are all “fugitive resources” that travel from place to place without respect for human-created property lines. These things are presumably not owned by anyone until they are captured, at which point they belong to the captor.


In the classic example, a hunter can own a wild fox, but only once he has removed it from its natural state by killing or trapping it. Similarly, nobody owns a whale until it has been fatally harpooned. Once a fisherman has harpooned the whale, however, that fisherman owns it, no matter who discovers the carcass or where it washes up on shore.

This approach to owning the “fruits” of explorations in latent space makes total sense to me… The comparison of “foraging” in latent space to hunting & trapping in higher-dimensional space feels intuitively right to me.

On Disclosing Responsible AI Failures Publicly

In computer security, there is a concept known originally as responsible disclosure – or apparently more properly as Coordinated Vulnerability Disclosure (CVD). From Wikipedia, CVD is:

…a vulnerability disclosure model in which a vulnerability or an issue is disclosed to the public only after the responsible parties have been allowed sufficient time to patch or remedy the vulnerability or issue. This coordination distinguishes the CVD model from the “full disclosure” model.

It’s a complicated topic, with proponents of different approaches having different supporting rationales. In the context of pure security, where the confidentiality, integrity, or accessibility of information systems are at stake around a given vulnerability, it makes a lot of sense to me that a CVD model could be most appropriate. You wouldn’t want to exacerbate specific harms to the systems (especially critical infrastructure) or the confidential user data they contain by too broadly publicizing a vulnerability before it can be patched.

In cases like that, what is at stake are acute harms. Real people are specifically harmed, usually in fairly measurable ways that have identifiable impacts on their lives. Preventing and reducing especially acute harms that have high potential or actual impact is one of the central focuses of risk mitigation as a discipline.

My experience both as an observer and participant in discovering and disclosing what we might call “responsible AI” failures/flaws/vulnerabilities suggests that the majority of these incidents are something other than acute harms. Instead, I think of them as diffuse harms.

When a harm is more diffuse, you might know that it’s “bad,” but it might be more difficult to precisely pin down exactly which person/people are harmed by it, how much, and what the actual impact might be. Something can be biased or discriminatory, and that can absolutely be socially harmful to people matching certain identity characteristics, but it is a qualitatively different harm than, say, a data breach of their private information which now means that they are the victim of credit card fraud and identity theft.

Coordinated vulnerability disclosure, or so-called responsible disclosure systems, might include a bug bounty, where security researchers are financially incentivized to report the bug to the company, and to agree to not disclose it publicly until after it has been fixed. The companies, likewise, benefit from having outside researchers potentially uncover very big security issues for relatively low costs in bounty payouts. It might not be a completely perfect system every time, but at least the incentives of the two parties are somewhat aligned.

When it comes to disclosing responsible AI failures though, some examples like the above do seem to exist, but we don’t really so far see these mechanisms in play at a large scale in an organized way yet in industry. ChatGPT found me a paper I haven’t read yet on this topic, where the authors propose a “Coordinated Flaw Disclosure” methodology. The paper is a little heavy for me on the jargon-y frameworks, but I think the basic idea is at least worth opening up for further conversation.

Whereas a security vulnerability might or might not (depending on circumstances) benefit from being aired publicly, the vast majority of responsible AI failures and vulnerabilities would indeed benefit from public scrutiny and conversation. Because that is how ethical and moral issues are best sorted out: by talking through them, expressing and comparing our values, and coming to meaningful conclusions by synthesizing multiple viewpoints.

AI systems are being deployed at an unprecedented scale with huge societal impacts. Since we are becoming all of us more and more affected by them, we deserve visibility into their known risks and flaws. Public discourse ensures that AI developers are held accountable, and can’t just quietly fix or ignore the problem and hope people forget. It also enables inter-disciplinary expertise to be leveraged in conversation about complex sociotechnical issues, opening up new potential solutions or approaches.

My experience has been that – unless there is a team or role dedicated to correcting for exactly this – responsible AI (and non-AI) failures tend to persist due to business priorities which place growth above most other functions (like fixing things that are obviously broken). Third party researchers only disclosing flaws privately might enable companies to downplay the severity of the issue, and “let it ride.” Obviously, researchers under contract with the company are a different story. But even still, perhaps the companies themselves when they discover these things internally ought to be transparent about what they found, and what they’re doing to fix it.

Anyway, have had these thoughts swirling around for a while now, so good to get them off my chest in some sort of fixed form, even if just a v1. Gotta plant your flag somewhere to begin!

In a Hack Wealth Video

One thing that’s weird on the internet is seeing how people repurpose your content or actions, and even your likeness for their own ends. This video on Youtube by an account called ‘Hack Wealth’ is a good weird example. They, like many others, have latched onto the “easy money” aspect of what people think it is that I’m doing with AI. It’s true, I’ve made a small amount of money with my books, but it’s not the motivating factor for sure. Anyway, you can judge the video for yourself here:

If the statement in the preview image above is correct that someone is making over $21K per month with some weird AI affiliate side hustle, well that person is doing a hell of a lot better than me selling a few books a month to randos.

It’s not true what is stated in this video that I sell my books on “Amazon Kindle.” I do not. They are only on Gumroad here.

But their mashup of one of my photos and what seems to be a weird gen AI image of a businessman sitting amidst a like debris strewn field of ebook readers falling from the sky is… next level.

I thought at first they were supposed to be some kind of dollar bills falling from the sky, which would have been funny too.

Really, in the end, I just like making weird little books and always have. This is just another way to express something I’ve done since I was a kid. It’s cool that people take notice, but it would be a lot cooler if people actually looked at the content of the books, instead of these exterior artifacts of what they *think* I’m doing. The truth actually diverges quite a lot…

Kal, the Hunter

I forgot to put this into my book notes for Tiamat, but better late than never:

Those who put their trust in paths
Laid down by their own mind,
Go to hell and make hell their home;
They get no respite from coming and going,
Morning and evening they are the prey
Of the ruthless hunter, Kal.

That’s from Kabir, who I only recently learned about via a friend. I liked that so much that I drafted a version of Kal, the hunter into the latest AI Lore book, Tiamat.

Notes on Reforming The Copyright Office

In a move that probably seems like a good idea to absolutely no one but me, I took it upon myself to heed the call to submit a brief policy document to the White House around its supposedly forthcoming AI Action Plan, as described in this Axios article.

My somewhat polemical submission follows on my 2024 submission to the US Copyright Office’s public consultation on gen AI, and their subsequent report on copyrightability (which in turn referenced my submission). I’m absolutely sure their conclusions are well-researched and come with the best of intentions, but to my eyes as an artist heavily incorporating generative AI tools merely serve to reinforce the status quo.

I know that is a lot of documentation for the lay person to try and parse, but the long and short of it is that, for me, the USCO missed the boat on this one, and doesn’t go anywhere near far enough towards creating an “AI positive” future for artists and creators of all stripes. I understand they are, no doubt, reflecting a lot of the anxiety and uncertainty that many of the 10K+ submissions they received expressed to them in no uncertain terms. But just because a particular opinion is popular does not always make it the right one that can paint a viable path forward.

Which is why I thought it behooved me (is that right? am I now hooven?), to paint – for the record – an entirely divergent view. Two copyright roads diverged in a wood, and I, I took the one less traveled by, you might say. And that has made all the difference … in making people angry at me online, of course. But that seems to be an inescapable aspect of the role I’m allowed to play in the cosmic drama around the unfolding of AI in the arts…

Anyway, while my submission may be a little bit hyperbolic for dramatic effect, I think it also raises some real issues around particularly global competitiveness for AI-driven intellectual property right. The fact that the Copyright Office got it – in my eyes – so wrong, seems to demonstrate to me that maybe we need to start over from the ground up, either with something new, something greatly diminished, or maybe even nothing at all to replace it. Probably that sounds like a heretical idea to some, but if the British government doesn’t even act as a register for copyrights (and it’s optional, and automatically granted without review in Canada), why does the American government spend so much time and effort dithering on all of this? (And not to mention getting sued for its efforts.)

There must be some better way forward. Though I have, of course, grave doubts about the capacity of the White House to find it. But who knows, perhaps market pressures will bear positive fruits here in time… Maybe my proposal falls short too, and my understanding of all the moving parts of copyright and the industries that it powers is of course human, limited, and perhaps incorrect at points. But I’m an artist, not a godlike superintelligence. And as such, every new art movement needs to take a few good shots at the old order before it can differentiate itself as the next paradigm. Here’s my own weird effort at that.

Read the full submission here.

AI Action Plan Proposal: Reform of US Copyright Office

[PDF Version]

Written By: Tim Boucher, 12 February 2025 [timboucher.ca/about/]

Introduction & Context:

This document is a submission to the National Science Foundation’s Request for Information on the Development of an Artificial Intelligence (AI) Action Plan [FR Doc. 2025-02305], following the White House’s January 23, 2025 Executive Order, entitled “Removing Barriers to American Leadership in Artificial Intelligence.” The purpose of that Order and subsequent Action Plan are to:

“…sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

The following consists of testimony from an artist and author at the forefront of using artificial intelligence tools for creative production, whose work has been cited by the United States Copyright Office in its most recent report, OpenAI CEO Sam Altman’s Senate testimony, Authors Alliance, Coalition for Creativity, and in countless media outlets around the world.

The Case for Reforming or Disbanding the U.S. Copyright Office

Key Points:

  • The U.S. Copyright Office enforces a human authorship requirement that is not explicitly stated in the Copyright Act.
  • Countries like China, the UK, and Canada recognize certain AI-assisted works, while the U.S. Copyright Office’s restrictive stance puts American creators at a huge disadvantage.
  • Under the Berne Convention, foreign AI-assisted works may receive stronger protection than those created by American citizens.
  • Copyright is automatic under U.S. law, yet the Office maintains an outdated and costly registration system.
  • Many of the Copyright Office’s functions could be automated or eliminated without affecting copyright protections.

High-Level Recommendations:

  • Explicitly adopt as law protections for computer-generated artworks, like the United Kingdom.
  • Reduce or eliminate the U.S. Copyright Office’s role to remove unnecessary barriers for U.S. creators.
  • Strip the Office of its authority to impose policies not explicitly grounded in the Copyright Act.
  • Remove the registration requirement for copyright enforcement in court.
  • Align U.S. copyright policy with modern technological realities to maintain global competitiveness.

Generative AI Outputs & Copyrightability

In 2024, the US Copyright Office held a public consultation regarding artificial intelligence and copyright, receiving over ten thousand submissions (our original response to which is available here for further reference). From its analysis of submissions, the USCO released in January 2025, Part 2 of its Copyright and Artificial Intelligence report on Copyrightability In it, the Office re-asserted its position that AI-generated artworks cannot be copyrighted unless they meet a still somewhat vague threshold of human authorship.

This decision has broad implications for creators, businesses, and the evolving role of AI in artistic and commercial work. While the ruling appears to align with the Office’s long-standing position that copyright requires human authorship, it raises important questions about whether existing copyright law is keeping pace with technological change. Other countries, including the UK, Canada, and China, are taking different approaches, and recognize AI-assisted creativity to varying degrees.

The issue is not just about AI; it highlights broader concerns about the efficiency and purpose of the U.S. Copyright Office in the modern technological age, and how it rigidly applies policies that are not explicitly outlined in copyright law. It also raises questions about how these interpretations affect U.S. competitiveness in emerging creative industries. If AI-generated works are given stronger recognition abroad, U.S. creators and businesses may be forced to navigate complex legal uncertainties that their foreign counterparts do not face, and will be subject to perverse incentives to register their intellectual property abroad under more favorable regulatory regimes.

With the rise of AI bringing copyright law to a crossroads, now is the time to assess whether the Copyright Office’s current role and structure serve the needs of a rapidly evolving creative and technological landscape.

International Copyright Comparisons

According to its March 2023 Senate budget written statement, the USCO requested for the fiscal year of 2024, “an overall budget of $103.1 million in funding and 481 full time employees (FTEs).” By way of comparison, the Canadian Intellectual Property Office – which also handles patents and trademarks on top of copyright – employed only around 1,000 people (as of 2019), and the United Kingdom’s Intellectual Property Office some 1,600. (In the US, the Patent & Trademark Office separately from the USCO employs some 14,5000 people.)

Canada’s CIPO, unlike the US Copyright Office, does not review or assess submissions for copyright registrations. They are granted automatically, and if there proves to be an issue, it ends up being decided in court (which is ultimately what happens in the United States as well, despite the formal approval process required by the USCO).

The United Kingdom’s Intellectual Property Office takes it even a step farther: the government does not act as a copyright register at all. Since British law (much like American law) asserts copyright is automatic when works are created in fixed form, registration is considered extraneous. Further, in the UK, computer-generated works have been officially copyrightable since 1988, with the owner of the copyright being the person who arranged to have the work produced. (It is worth noting that their creative economy has not collapsed as a result.)

Clearly, the US Copyright Office arose from a different set of circumstances and responds to a different legal context than the Intellectual Property Offices of the UK and Canada. But the present proposal submits that merely because something is well-established does not mean it should be immune from radical change to a more suitable contemporary form. It is our position that much of the present function of the USCO could be eliminated, and the rest largely automated.

Lack of Statutory Grounding

A key concern with the U.S. Copyright Office’s approach is that it continues to enforce policies that are not explicitly required by law but instead stem from its own internal interpretations. The requirement for human authorship is not found in the Copyright Act itself but is an administrative rule derived from the Office’s Compendium of U.S. Copyright Office Practices. This means that rather than carrying the force of law as passed by Congress, these policies are discretionary decisions made within a bureaucratic framework. While courts often defer to the Copyright Office’s interpretations, there is no binding requirement that they must do so, and it remains an open question whether this approach will hold up in the long term as AI becomes more deeply integrated into creative industries.

This lack of statutory grounding raises the question of whether the Copyright Office has overstepped its role. The agency was not explicitly created by the Copyright Act but rather emerged from administrative necessity within the Library of Congress. It has since grown into a bureaucratic institution that not only processes registrations but also influences major policy debates and public opinion, often shaping copyright enforcement in ways that go beyond the scope of its original mandate. Given that copyright is automatic under U.S. law, the need for a large and expensive registration office is increasingly questionable, particularly when compared to more streamlined systems abroad (such as the Canadian system of optional and automatically-approved registrations, or the British system of no official registration being required at all).

Cultural works created by artists and other creators should not be bound by arbitrary technological restrictions or subject to approval by bureaucrats removed from the creative process. If copyright in the U.S. is automatic, there is no justification for additional bureaucratic gatekeeping. The rights in a work should not vanish simply because an artist chose one tool over another.

Global Competitive Disadvantage

The U.S. Copyright Office’s reluctance to recognize AI-generated works also puts it out of step with an increasing number of global competitors. While some countries, like Japan, have taken a more restrictive approach similar to the U.S., others—including China—have begun granting legal protections to AI-generated works in some cases. China’s approach in particular demonstrates a willingness to engage with the realities of AI-driven content creation, where AI is considered as just another tool to enable expression by human creators. Meanwhile, the U.S. Copyright Office remains locked in an antiquated anti-innovation interpretation that could leave American businesses and creators at a disadvantage. If foreign jurisdictions develop clearer legal frameworks for AI-generated works, U.S. creators may be left with weaker protections and more uncertainty than their counterparts abroad.

Berne Convention Inconsistencies

Another layer of complexity arises from the U.S.’s obligations under the Berne Convention. Under Berne, member states must provide equal copyright protections to works created in other member countries, meaning that a work copyrighted in another jurisdiction should receive the same protections in the U.S. as a domestically created work. If AI-generated works are recognized as copyrighted works of intellectual property even in a communist country like China, while American creators are denied copyright of their AI-assisted works in the U.S., it raises concerning potential legal inconsistencies. Foreign creators whose works are protected in their home countries might end up being entitled to more recognition in the U.S. than domestic creators whose AI-generated works are rejected outright by the Copyright Office.

A Call For Radical Reform

If the U.S. Copyright Office’s current structure is making the United States less competitive and discouraging innovation in the field of AI and computer-generated works, it is worth asking whether the agency remains necessary in its current form. The fact that copyright is already automatic under U.S. law suggests that the role of the Copyright Office could be drastically reduced without harming creators. If other major economies function without an extensive copyright registration system, the U.S. could do the same, particularly as digital record-keeping and AI-driven digital content and licensing management systems continue to improve.

At the very least, reforms should be considered to remove unnecessary barriers for U.S. creators. Eliminating the registration requirement for enforcement in court would be a logical step, as would reducing the Copyright Office’s authority to impose interpretive policies that go beyond the text of the Copyright Act. More fundamentally, the question must be asked whether the Copyright Office itself is still necessary in an era where copyright is automatic and many of its functions could be replaced by modernized, technology-driven solutions.

Notes on Tiamat

Tiamat is book #125 in the AI Lore Books series. It follows on with more tales set in the Hyperion Storm/Dalton Trask neighborhood of the multiverse/latent space that these narratives exist in. (Also see: Continuity Codex, and Algorithm #5 – not available online yet)

More specifically, the book opens with President Storm announcing the dismantling and piece-by-piece sale of the Statue of Liberty, whose severed head he floats through the sky on a la Zardoz (which I feel like might deserve a rewatch).

At the same time in this book, Tiamat is the name of a Baconian utopia which was founded for the advancement of the sciences in the New World, and became corrupted, yadda yadda yadda.

Apart from the themes and I guess possibly too overt symbolism, the book stylistically is extremely fragmentary and at times borrows from a sort of pseudo-Burroughs cut-up thing to get its point across about the dismemberment of the goddess Tiamat. I found it fun anyway to put together. It is likely not everybody’s cup of tea, but it includes a little of everything, including a lot of good straight forward flash fiction slice of life thingies, and expository encyclopedia entries (“tell, don’t show”).

Overall text is a mixture of my originals, GPT-4o, a little o3-mini, Deepseek, whatever free Claude version number we’re at now (I think 2.something?), Mistral (both their official chat, and completions via Textsynth). I used the Lazarus text mixing deck for some of the more cut-up style stuff more near the end, though certain things that were like fragments and snippets were largely (I think) GPT-4.

Here’s the art preview:

Images are a mix of Recraft, Ideogram, Grok (which I think uses some Flux variant?), and a very few Dalle-3s (but OpenAI is lagging well behind on image gen at this point, imo). One of these in particular you can see in the preview above, but here’s a larger version, as I think it’s fun. This was in Recraft, I believe?

There’s kind of an unnameable convergence here for me of like this strange and also cheap but also poetic poignancy … idk best not to speak too plainly of the “mystery” in these things, so as not to puncture it.

This book took a really long time to come together (as these things go). I’m back up to counting these more in weeks than in days or hours, like I was hitting back in my stride, when these kinds of explorations were all knew, and I had not already covered the wide swath of ground that I have by now. To keep this circus going now takes a different quality and depth of inquiry than what it did previously to keep it still fresh and interesting for me. For you, the reader, of course, ymmv. At least you can’t say I didn’t warn you…

Page 4 of 202

Powered by WordPress & Theme by Anders Norén