Questionable content, possibly linked


On Disclosing Responsible AI Failures Publicly

In computer security, there is a concept known originally as responsible disclosure – or apparently more properly as Coordinated Vulnerability Disclosure (CVD). From Wikipedia, CVD is:

…a vulnerability disclosure model in which a vulnerability or an issue is disclosed to the public only after the responsible parties have been allowed sufficient time to patch or remedy the vulnerability or issue. This coordination distinguishes the CVD model from the “full disclosure” model.

It’s a complicated topic, with proponents of different approaches having different supporting rationales. In the context of pure security – where the confidentiality, integrity, or availability of information systems is at stake around a given vulnerability – it makes a lot of sense to me that a CVD model could be most appropriate. You wouldn’t want to exacerbate specific harms to the systems (especially critical infrastructure) or the confidential user data they contain by too broadly publicizing a vulnerability before it can be patched.

In cases like that, what is at stake are acute harms. Real people are specifically harmed, usually in fairly measurable ways that have identifiable impacts on their lives. Preventing and reducing especially acute harms that have high potential or actual impact is one of the central focuses of risk mitigation as a discipline.

My experience both as an observer and participant in discovering and disclosing what we might call “responsible AI” failures/flaws/vulnerabilities suggests that the majority of these incidents are something other than acute harms. Instead, I think of them as diffuse harms.

When a harm is more diffuse, you might know that it’s “bad,” but it might be more difficult to precisely pin down exactly which person/people are harmed by it, how much, and what the actual impact might be. Something can be biased or discriminatory, and that can absolutely be socially harmful to people matching certain identity characteristics, but it is a qualitatively different harm than, say, a data breach of their private information which now means that they are the victim of credit card fraud and identity theft.

Coordinated vulnerability disclosure, or so-called responsible disclosure systems, might include a bug bounty, where security researchers are financially incentivized to report the bug to the company, and to agree to not disclose it publicly until after it has been fixed. The companies, likewise, benefit from having outside researchers potentially uncover very big security issues for relatively low costs in bounty payouts. It might not be a completely perfect system every time, but at least the incentives of the two parties are somewhat aligned.

When it comes to disclosing responsible AI failures, though, some examples like the above do seem to exist, but we don’t yet see these mechanisms in play at large scale, in an organized way, in industry. ChatGPT found me a paper I haven’t read yet on this topic, in which the authors propose a “Coordinated Flaw Disclosure” methodology. The paper is a little heavy for me on the jargon-y frameworks, but I think the basic idea is at least worth opening up for further conversation.

Whereas a security vulnerability might or might not (depending on circumstances) benefit from being aired publicly, the vast majority of responsible AI failures and vulnerabilities would indeed benefit from public scrutiny and conversation. Because that is how ethical and moral issues are best sorted out: by talking through them, expressing and comparing our values, and coming to meaningful conclusions by synthesizing multiple viewpoints.

AI systems are being deployed at an unprecedented scale with huge societal impacts. Since all of us are becoming more and more affected by them, we deserve visibility into their known risks and flaws. Public discourse ensures that AI developers are held accountable, and can’t just quietly fix or ignore a problem and hope people forget. It also enables interdisciplinary expertise to be brought into conversation about complex sociotechnical issues, opening up new potential solutions or approaches.

My experience has been that – unless there is a team or role dedicated to correcting for exactly this – responsible AI (and non-AI) failures tend to persist due to business priorities which place growth above most other functions (like fixing things that are obviously broken). Third party researchers only disclosing flaws privately might enable companies to downplay the severity of the issue, and “let it ride.” Obviously, researchers under contract with the company are a different story. But even still, perhaps the companies themselves when they discover these things internally ought to be transparent about what they found, and what they’re doing to fix it.

Anyway, I’ve had these thoughts swirling around for a while now, so it’s good to get them off my chest in some sort of fixed form, even if just a v1. Gotta plant your flag somewhere to begin!

In a Hack Wealth Video

One thing that’s weird on the internet is seeing how people repurpose your content or actions, and even your likeness, for their own ends. This video on YouTube by an account called ‘Hack Wealth’ is a good weird example. They, like many others, have latched onto the “easy money” aspect of what people think it is that I’m doing with AI. It’s true, I’ve made a small amount of money with my books, but it’s not the motivating factor for sure. Anyway, you can judge the video for yourself here:

If the statement in the preview image above is correct that someone is making over $21K per month with some weird AI affiliate side hustle, well that person is doing a hell of a lot better than me selling a few books a month to randos.

It’s not true what is stated in this video that I sell my books on “Amazon Kindle.” I do not. They are only on Gumroad here.

But their mashup of one of my photos and what seems to be a weird gen AI image of a businessman sitting amidst a, like, debris-strewn field of ebook readers falling from the sky is… next level.

I thought at first they were supposed to be some kind of dollar bills falling from the sky, which would have been funny too.

Really, in the end, I just like making weird little books and always have. This is just another way to express something I’ve done since I was a kid. It’s cool that people take notice, but it would be a lot cooler if people actually looked at the content of the books, instead of these exterior artifacts of what they *think* I’m doing. The truth actually diverges quite a lot…

Kal, the Hunter

I forgot to put this into my book notes for Tiamat, but better late than never:

Those who put their trust in paths
Laid down by their own mind,
Go to hell and make hell their home;
They get no respite from coming and going,
Morning and evening they are the prey
Of the ruthless hunter, Kal.

That’s from Kabir, who I only recently learned about via a friend. I liked that so much that I drafted a version of Kal, the hunter into the latest AI Lore book, Tiamat.

Notes on Reforming The Copyright Office

In a move that probably seems like a good idea to absolutely no one but me, I took it upon myself to heed the call to submit a brief policy document to the White House around its supposedly forthcoming AI Action Plan, as described in this Axios article.

My somewhat polemical submission follows on my 2024 submission to the US Copyright Office’s public consultation on gen AI, and their subsequent report on copyrightability (which in turn referenced my submission). I’m absolutely sure their conclusions are well-researched and come with the best of intentions, but to my eyes as an artist heavily incorporating generative AI tools, they merely serve to reinforce the status quo.

I know that is a lot of documentation for the layperson to try and parse, but the long and short of it is that, for me, the USCO missed the boat on this one, and doesn’t go anywhere near far enough toward creating an “AI positive” future for artists and creators of all stripes. I understand they are, no doubt, reflecting a lot of the anxiety and uncertainty that many of the 10K+ submissions they received expressed to them in no uncertain terms. But just because a particular opinion is popular does not always make it the right one, or the one that can paint a viable path forward.

Which is why I thought it behooved me (is that right? am I now hooven?), to paint – for the record – an entirely divergent view. Two copyright roads diverged in a wood, and I, I took the one less traveled by, you might say. And that has made all the difference … in making people angry at me online, of course. But that seems to be an inescapable aspect of the role I’m allowed to play in the cosmic drama around the unfolding of AI in the arts…

Anyway, while my submission may be a little bit hyperbolic for dramatic effect, I think it also raises some real issues, particularly around global competitiveness for AI-driven intellectual property rights. The fact that the Copyright Office got it – in my eyes – so wrong seems to demonstrate that maybe we need to start over from the ground up, either with something new, something greatly diminished, or maybe even nothing at all to replace it. Probably that sounds like a heretical idea to some, but if the British government doesn’t even act as a register for copyrights (and registration is optional, and automatically granted without review, in Canada), why does the American government spend so much time and effort dithering on all of this? (Not to mention getting sued for its efforts.)

There must be some better way forward. Though I have, of course, grave doubts about the capacity of the White House to find it. But who knows, perhaps market pressures will bear positive fruits here in time… Maybe my proposal falls short too, and my understanding of all the moving parts of copyright and the industries that it powers is of course human, limited, and perhaps incorrect at points. But I’m an artist, not a godlike superintelligence. And as such, every new art movement needs to take a few good shots at the old order before it can differentiate itself as the next paradigm. Here’s my own weird effort at that.

Read the full submission here.

AI Action Plan Proposal: Reform of US Copyright Office

[PDF Version]

Written By: Tim Boucher, 12 February 2025 [timboucher.ca/about/]

Introduction & Context:

This document is a submission to the National Science Foundation’s Request for Information on the Development of an Artificial Intelligence (AI) Action Plan [FR Doc. 2025-02305], following the White House’s January 23, 2025 Executive Order, entitled “Removing Barriers to American Leadership in Artificial Intelligence.” The purpose of that Order and subsequent Action Plan is to:

“…sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

The following consists of testimony from an artist and author at the forefront of using artificial intelligence tools for creative production, whose work has been cited by the United States Copyright Office in its most recent report, OpenAI CEO Sam Altman’s Senate testimony, Authors Alliance, Coalition for Creativity, and in countless media outlets around the world.

The Case for Reforming or Disbanding the U.S. Copyright Office

Key Points:

  • The U.S. Copyright Office enforces a human authorship requirement that is not explicitly stated in the Copyright Act.
  • Countries like China, the UK, and Canada recognize certain AI-assisted works, while the U.S. Copyright Office’s restrictive stance puts American creators at a huge disadvantage.
  • Under the Berne Convention, foreign AI-assisted works may receive stronger protection than those created by American citizens.
  • Copyright is automatic under U.S. law, yet the Office maintains an outdated and costly registration system.
  • Many of the Copyright Office’s functions could be automated or eliminated without affecting copyright protections.

High-Level Recommendations:

  • Explicitly adopt as law protections for computer-generated artworks, like the United Kingdom.
  • Reduce or eliminate the U.S. Copyright Office’s role to remove unnecessary barriers for U.S. creators.
  • Strip the Office of its authority to impose policies not explicitly grounded in the Copyright Act.
  • Remove the registration requirement for copyright enforcement in court.
  • Align U.S. copyright policy with modern technological realities to maintain global competitiveness.

Generative AI Outputs & Copyrightability

In 2024, the US Copyright Office held a public consultation regarding artificial intelligence and copyright, receiving over ten thousand submissions (our original response is available here for further reference). From its analysis of submissions, the USCO released, in January 2025, Part 2 of its Copyright and Artificial Intelligence report, on Copyrightability. In it, the Office re-asserted its position that AI-generated artworks cannot be copyrighted unless they meet a still somewhat vague threshold of human authorship.

This decision has broad implications for creators, businesses, and the evolving role of AI in artistic and commercial work. While the ruling appears to align with the Office’s long-standing position that copyright requires human authorship, it raises important questions about whether existing copyright law is keeping pace with technological change. Other countries, including the UK, Canada, and China, are taking different approaches, and recognize AI-assisted creativity to varying degrees.

The issue is not just about AI; it highlights broader concerns about the efficiency and purpose of the U.S. Copyright Office in the modern technological age, and how it rigidly applies policies that are not explicitly outlined in copyright law. It also raises questions about how these interpretations affect U.S. competitiveness in emerging creative industries. If AI-generated works are given stronger recognition abroad, U.S. creators and businesses may be forced to navigate complex legal uncertainties that their foreign counterparts do not face, and will be subject to perverse incentives to register their intellectual property abroad under more favorable regulatory regimes.

With the rise of AI bringing copyright law to a crossroads, now is the time to assess whether the Copyright Office’s current role and structure serve the needs of a rapidly evolving creative and technological landscape.

International Copyright Comparisons

According to its March 2023 Senate budget written statement, the USCO requested for the fiscal year 2024 “an overall budget of $103.1 million in funding and 481 full time employees (FTEs).” By way of comparison, the Canadian Intellectual Property Office – which also handles patents and trademarks on top of copyright – employed only around 1,000 people (as of 2019), and the United Kingdom’s Intellectual Property Office some 1,600. (In the US, the Patent & Trademark Office, separate from the USCO, employs some 14,500 people.)

Canada’s CIPO, unlike the US Copyright Office, does not review or assess submissions for copyright registrations. They are granted automatically, and if there proves to be an issue, it ends up being decided in court (which is ultimately what happens in the United States as well, despite the formal approval process required by the USCO).

The United Kingdom’s Intellectual Property Office takes it even a step farther: the government does not act as a copyright register at all. Since British law (much like American law) asserts copyright is automatic when works are created in fixed form, registration is considered extraneous. Further, in the UK, computer-generated works have been officially copyrightable since 1988, with the owner of the copyright being the person who arranged to have the work produced. (It is worth noting that their creative economy has not collapsed as a result.)

Clearly, the US Copyright Office arose from a different set of circumstances and responds to a different legal context than the Intellectual Property Offices of the UK and Canada. But the present proposal submits that merely because something is well-established does not mean it should be immune from radical change to a more suitable contemporary form. It is our position that much of the present function of the USCO could be eliminated, and the rest largely automated.

Lack of Statutory Grounding

A key concern with the U.S. Copyright Office’s approach is that it continues to enforce policies that are not explicitly required by law but instead stem from its own internal interpretations. The requirement for human authorship is not found in the Copyright Act itself but is an administrative rule derived from the Office’s Compendium of U.S. Copyright Office Practices. This means that rather than carrying the force of law as passed by Congress, these policies are discretionary decisions made within a bureaucratic framework. While courts often defer to the Copyright Office’s interpretations, there is no binding requirement that they must do so, and it remains an open question whether this approach will hold up in the long term as AI becomes more deeply integrated into creative industries.

This lack of statutory grounding raises the question of whether the Copyright Office has overstepped its role. The agency was not explicitly created by the Copyright Act but rather emerged from administrative necessity within the Library of Congress. It has since grown into a bureaucratic institution that not only processes registrations but also influences major policy debates and public opinion, often shaping copyright enforcement in ways that go beyond the scope of its original mandate. Given that copyright is automatic under U.S. law, the need for a large and expensive registration office is increasingly questionable, particularly when compared to more streamlined systems abroad (such as the Canadian system of optional and automatically-approved registrations, or the British system of no official registration being required at all).

Cultural works created by artists and other creators should not be bound by arbitrary technological restrictions or subject to approval by bureaucrats removed from the creative process. If copyright in the U.S. is automatic, there is no justification for additional bureaucratic gatekeeping. The rights in a work should not vanish simply because an artist chose one tool over another.

Global Competitive Disadvantage

The U.S. Copyright Office’s reluctance to recognize AI-generated works also puts it out of step with an increasing number of global competitors. While some countries, like Japan, have taken a more restrictive approach similar to the U.S., others—including China—have begun granting legal protections to AI-generated works in some cases. China’s approach in particular demonstrates a willingness to engage with the realities of AI-driven content creation, where AI is considered as just another tool to enable expression by human creators. Meanwhile, the U.S. Copyright Office remains locked in an antiquated anti-innovation interpretation that could leave American businesses and creators at a disadvantage. If foreign jurisdictions develop clearer legal frameworks for AI-generated works, U.S. creators may be left with weaker protections and more uncertainty than their counterparts abroad.

Berne Convention Inconsistencies

Another layer of complexity arises from the U.S.’s obligations under the Berne Convention. Under Berne, member states must provide equal copyright protections to works created in other member countries, meaning that a work copyrighted in another jurisdiction should receive the same protections in the U.S. as a domestically created work. If AI-generated works are recognized as copyrighted works of intellectual property even in a communist country like China, while American creators are denied copyright of their AI-assisted works in the U.S., it raises concerning potential legal inconsistencies. Foreign creators whose works are protected in their home countries might end up being entitled to more recognition in the U.S. than domestic creators whose AI-generated works are rejected outright by the Copyright Office.

A Call For Radical Reform

If the U.S. Copyright Office’s current structure is making the United States less competitive and discouraging innovation in the field of AI and computer-generated works, it is worth asking whether the agency remains necessary in its current form. The fact that copyright is already automatic under U.S. law suggests that the role of the Copyright Office could be drastically reduced without harming creators. If other major economies function without an extensive copyright registration system, the U.S. could do the same, particularly as digital record-keeping and AI-driven digital content and licensing management systems continue to improve.

At the very least, reforms should be considered to remove unnecessary barriers for U.S. creators. Eliminating the registration requirement for enforcement in court would be a logical step, as would reducing the Copyright Office’s authority to impose interpretive policies that go beyond the text of the Copyright Act. More fundamentally, the question must be asked whether the Copyright Office itself is still necessary in an era where copyright is automatic and many of its functions could be replaced by modernized, technology-driven solutions.

Notes on Tiamat

Tiamat is book #125 in the AI Lore Books series. It follows on with more tales set in the Hyperion Storm/Dalton Trask neighborhood of the multiverse/latent space that these narratives exist in. (Also see: Continuity Codex, and Algorithm #5 – not available online yet)

More specifically, the book opens with President Storm announcing the dismantling and piece-by-piece sale of the Statue of Liberty, whose severed head he floats through the sky on a la Zardoz (which I feel like might deserve a rewatch).

At the same time in this book, Tiamat is the name of a Baconian utopia which was founded for the advancement of the sciences in the New World, and became corrupted, yadda yadda yadda.

Apart from the themes and, I guess, possibly too-overt symbolism, the book stylistically is extremely fragmentary and at times borrows from a sort of pseudo-Burroughs cut-up thing to get its point across about the dismemberment of the goddess Tiamat. I found it fun, anyway, to put together. It is likely not everybody’s cup of tea, but it includes a little of everything, including a lot of good straightforward flash-fiction slice-of-life thingies, and expository encyclopedia entries (“tell, don’t show”).

Overall text is a mixture of my originals, GPT-4o, a little o3-mini, Deepseek, whatever free Claude version number we’re at now (I think 2.something?), Mistral (both their official chat, and completions via Textsynth). I used the Lazarus text mixing deck for some of the more cut-up style stuff more near the end, though certain things that were like fragments and snippets were largely (I think) GPT-4.

Here’s the art preview:

Images are a mix of Recraft, Ideogram, Grok (which I think uses some Flux variant?), and a very few Dalle-3s (but OpenAI is lagging well behind on image gen at this point, imo). One of these in particular you can see in the preview above, but here’s a larger version, as I think it’s fun. This was in Recraft, I believe?

There’s kind of an unnameable convergence here for me of like this strange and also cheap but also poetic poignancy … idk best not to speak too plainly of the “mystery” in these things, so as not to puncture it.

This book took a really long time to come together (as these things go). I’m back up to counting these more in weeks than in days or hours, like back when I was hitting my stride, when these kinds of explorations were all new and I had not already covered the wide swath of ground that I have by now. To keep this circus going, and to keep it still fresh and interesting for me, now takes a different quality and depth of inquiry than it did previously. For you, the reader, of course, ymmv. At least you can’t say I didn’t warn you…

Referenced on the Fiverr Official Blog

It looks like a few days ago, my work using AI to assist in the bookmaking process was referenced in some detail on the official blog of the gig platform, Fiverr.

Excellent Bauhaus BBC Documentary (2019)

Thought this video was a really great overview by the BBC of the Bauhaus. It put together a lot of puzzle pieces for me.

Parts of it actually reminded me of the Kibbo Kift, which was something of an offshoot of the Boy Scouts after WWI, founded by an artist in protest of that organization becoming increasingly militaristic. They used a lot of theatre and ritual and social experimentation. Despite those origins, and being ostensibly committed to world peace, the Kift later morphed into the paramilitary Greenshirts. A very weird story in its own right.

Having lately studied a lot of art and social movements from this interwar (and pre-war) period, I feel like I’m starting to get a much better grasp on what was happening back then, 100-ish years ago. And sadly, much of it is happening again. Except where are the amazing art movements nowadays to counter all of it?

Quoting Zvi Rosen on John Cage’s Aleatoric Music Copyrights

This is a fascinating story that came up as a side-quest in my copyright investigations around 1965, by Zvi Rosen:

The idea of chance in making music is nothing new, the dice game attributed to Mozart is only one of many examples through history. In the early 1950s the composer John Cage pushed this much farther into what would come to be called aleatory music, using randomness system of the I Ching to compose such works dictated by chance as Imaginary Landscape No. 4 for 12 radio receivers, and Music of Changes for piano. In 1952 this led to his best-known and most controversial creation: 4′33″. The “silent piece,” contains no musical notation beyond three movements and an instruction to [be] silent […]

Cage’s publisher Henmar Press applied to register many of them – including 4’33” – with the U.S. Copyright Office, and in looking at how the applications were handled … there’s some valuable lessons to learn about registration of works of indeterminate authorship. Most notably, although a decent number of Cage’s compositions passed muster and were registered as music, others including 4’33” did not, and after years of considering the issue, were registered instead as textual works by the U.S. Copyright Office.

This seems pretty similar to the Sol LeWitt pieces which consist merely of instructions to create a given work, which is then executed by third parties. More to explore down this particular rabbit hole!

What happened in 1965 in the world of computer-assisted copyright?

I’ve been trying to track down what the heck happened in 1965 in copyright & computers that the Ars Technica piece I referenced earlier places such importance on, vis-à-vis the USCO & computers in art. The Ars piece doesn’t actually say what it was that happened, just this:

But the Copyright Office insisted that the AI copyright debate was settled in 1965 after commercial computer technology started advancing quickly and “difficult questions of authorship” were first raised. That was the first time officials had to ponder how much involvement human creators had in works created using computers.

Skimming through the actual report by the USCO yields not a lot more detail:

Given its role in registering claims to copyright, the Copyright Office has considerable experience addressing technological developments related to the creation of works of authorship. As early as 1965, developments in computer technology began to raise “difficult questions of authorship,” including whether material created using technology is “‘written’ by computers” or authored by human creators. As then-Register of Copyrights Abraham Kaminstein observed, there is no one-size-fits-all answer:

“The crucial question appears to be whether the “work” is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”

This just sounds like more of the “it depends” style of rule-making, which seems still not adequate to me, whatever the year.

Thanks to a ChatGPT conversation though, I did manage to trace down this interesting 2016 paper by Annemarie Bridy, entitled “The Evolution of Authorship: Work Made by Code” (PDF linked at bottom of that page). Some cobbled together quotes from that:

In 1965, Register of Copyrights Abraham Kaminstein identified the question of computer authorship as one of three major problems confronting the Copyright Office. A number of people filed copyright registrations that year for works at least partly authored by computers, and the Office found itself at a loss for how to deal with the situation.

Later:

But does copyright law require human creativity? The Copyright Act doesn’t say anywhere that an author has to be human, and there’s really no case law directly on point. Nevertheless, there seems to be an assumption, maybe driven by practical and historical considerations, that authorship means human authorship. The 1965 Register of Copyrights Annual Report [here’s the document] frames the question precisely in terms of the human-computer divide. If a human creates a work, it’s copyrightable. If a machine creates it, then it’s not. The CONTU report does the same thing: unless there’s minimal human creative effort, there’s no protection.

Okay, so this is absolutely mind-blowing that in the actual Copyright Act, there is no requirement that authorship is human-only… The piece talks about a 1956 attempted copyright registration of computer-assisted song-writing:

The only problem that Klein and Bolitho encountered in their digital composing project was that the Copyright Office refused registration for “Push Button Bertha,” one of Datatron’s many compositions. The reason the Office gave at the time was that no one had ever before tried to register a piece of music written by a machine. The rejection, for which the Office didn’t offer—and couldn’t have offered—any statutory basis, revealed a deep-seated if unspoken assumption that authors are necessarily human.

Then in a footnote it says this assumption wasn’t codified until 1973:

By 1973, which brought publication of the first Compendium of U.S. Copyright Office Practices, that assumption had become explicit. See U.S. COPYRIGHT OFFICE, COMPENDIUM OF COPYRIGHT OFFICE PRACTICES (FIRST) § 2.8.3 (1st ed. 1973) (stating that works are not copyrightable if they do not “owe their origin to a human agent”).

As a layperson, it’s difficult to navigate what official standing the Compendium of Practices must have relative to the actual law, but it must be… non-zero?

According to ChatGPT – so this must be taken with heaping grains of salt, but it “sounds right” approximately:

The Compendium of U.S. Copyright Office Practices is an administrative manual that outlines the practices and procedures of the U.S. Copyright Office. While it serves as a comprehensive guide for the Office’s staff and the public, it does not have the force of law. However, courts may consider it as persuasive authority due to the Copyright Office’s specialized expertise.

So… that’s interesting?

But going back to the Bridy piece, it follows on with an, I think, devastating point (and one which I included in my submission to the USCO originally): that the law already clearly accommodates “non-human” authorship by way of work-for-hire arrangements, such that a corporation becomes considered, legally, the author of the work.

What the anthropocentric view of authorship elides, however, is that copyright law already accommodates a notion of non-human authors; they’re called corporations. Under the work-made-for-hire doctrine, which is a legal fiction, a corporate employer is considered the legal author of a work of which it is not the author-in-fact. The statute could have been written to create an assignment by operation of law from an employee-author to her corporate employer, thus maintaining in principle a human monopoly on authorship, but it wasn’t. It was written to allow a corporate employer to be treated ab initio as the author of a work created by its human employee.

Because we already have a copyright doctrine that accommodates non-human authors, maybe that’s a logical place to look for a solution to the problem of computer authors. Maybe we can treat computer-authored works as works made for hire.

So there we are. And that paper is from 2016. So these thoughts have been in the air for quite some time. But still we’re stuck on following a “practices” document, instead of explicitly following the actual law. I’d call that a pretty big discovery for me personally, as these things go…

