Questionable content, possibly linked

Category: Other Page 42 of 177

Maciej Ceglowski on the Myth of Superintelligence

Excellent illustrated essay (and video) from 2016 by Maciej Ceglowski. Here is the synopsis:

A skeptical view on the seductive, apocalyptic beliefs that prevent people in tech from really working to make a difference.

Apocalyptic ideas have traditionally been the province of religion, but nerds have found a way to import them into the world of computer programming. These ideas are a cognitive hazard that preferentially infects smart people, making them useless for more practical work. Like other forms of religious obsession, fantasies of superintelligence prevent us from tackling problems in this life by convincing us to focus on the life to come. This talk is an attempt to vaccinate the next generation of developers against the seductive ideas of existential risk, superintelligence, and the charismatic religious figures who will try to eat their brains.

One of the few takes on this subject that I wholeheartedly agree with…

(via Ran Prieur)

“Relax” at the chill af unlicensed Eco-Resort of Thai Ornament

Used Photoshop’s generative fill to make a new ad for an old Early Clues property, the Unincorporated Eco-Villages of Thai Ornament, of Boca Raton, Florida.

Given that Adobe’s Firefly model is trained on a lot of stock photos, it’s fun to be able to mimic that genre in a decidedly low-quality, intentionally “crappy” stock-photo collage aesthetic.

If you’ve never visited the luxurious installations at Thai Ornament, you’re in for a treat. Be sure to check out their FAQs and amazing photos of this wonderful restful environment. You’ll “thank me later!”

Quoting René Walter on public-run AI models

Via a Substack I follow:

In my opinion, at least the large, all-encompassing “statistically stochastic knowledge synthesizer libraries,” the so called “foundational models”, could be operated by the public sector to ensure safety, ethical production and prevention of abuse. Running foundational models by the public would also ensure data transparency and work against the “black boxing” of this tech. I’m not sure or convinced that this approach would be practical or feasible to do, but i think it would provide the most stability and transparency.

Anthropic’s Jack Clark has proposed something similar, and I’ve already integrated it into my AI TOS proposal.

While I think it’s a good idea, I don’t think we should automatically assume that a public-run option would “ensure safety, ethical production and prevention of abuse.” It’s equally possible that we end up with an unsafe, unethical, and abusive system that is simply “run by the public” – whatever exactly that means.

That said, I still think that future at least puts this all out into the open, and outside the exclusive control of closed for-profit enterprise. At least it would be an *attempt* at accountability and transparency, and distributed control. If we realize it’s not living up to our expectations, we would at least have the theoretical power to modify it… which is more than we have now (apart from open-source models, obvs).

Quoting the Canadian Bar Association on how AI copyright should work in Canada

An interesting-looking document put out by the Canadian Bar Association (I’m not a lawyer, just an aggressive Google searcher), entitled State of the Arts: How Should Canadian Copyright Law Treat Works Generated by Artificial Intelligence?

From the abstract:

Nothing in the Copyright Act seems to indicate that works generated by AI cannot be original, since the users of AI exercise skill and discretion in selecting appropriate data for the AI to use. Thus, I argue that AI has emerged as an important tool for authors and that the user is likely the best candidate for authorship in the work.

I’ll drop in the most interesting quotes I find, but they suggest legislative reform is needed. This direction seems interesting, as I’ve often used the comparison to a creative or artistic director:

I propose that the law should adopt an approach to AI resembling that of “makers” in cinematographic work…

In my view, copyright in works created by AI should subsist in the “maker” who is responsible for making the arrangements necessary to create the work…

So this author’s view more or less accords with the UK view.

Okay, so this is interesting, re: the Canadian Supreme Court not relying on the “creative spark” or incalculable “modicum of creativity” discussed in the last post:

On the other hand, they also rejected the “modicum of creativity” approach taken by the Supreme Court of the United States, and ruled that creativity should not be a prerequisite for originality.16 The Court held that Canadian copyright law should take a middle-of-the-road stance on originality, and require that a work be an “exercise of skill and judgement” by the author.17 As such, the Canadian conceptualization of originality encompasses aspects of both the product (in that it cannot be a mere copy), and the process (in that it must be an exercise of skill and judgement by the author).

And this:

Although the Court did state that originality cannot result from a purely “mechanical exercise”, it seems that the phrase does not specifically refer to automated processes.22 The Court employs the example of simply changing fonts in a text as a mechanical exercise that would not meet the skill and judgement test. In context, the phrase “mechanical exercise” appears to refer to a trifling or trivial exercise, rather than to the use of automation in the creative process.

I like this:

No doubt the AI’s programmer is the author of the AI’s source code. However, I would argue that copyright in the AI code should not necessarily extend to the works that flow from its use. Doing so would constitute an oversimplification of AI processes, and ignores the fact that an AI’s user (if separate from the creator) provides the data and stimuli required for the AI to perform its function. In essence, the AI code provides a canvas upon which the user-artist can apply their craft.

Technological neutrality concept:

Technological neutrality was recently affirmed by the Supreme Court of Canada as the “recognition that, absent parliamentary intent to the contrary, the Copyright Act should not be interpreted or applied to favour or discriminate against any particular form of technology” (emphasis added).42 Thus, it is necessary to apply the skill and judgement test to AI created works in a manner consistent with other modes of producing copyrightable works.

Okay, here we go, the stuff about cinematic “makers” that I like a lot:

The complexity and collaborative nature of creating a cinematographic work compares well with the challenges posed by AI created works. For cinematographic works, the Canadian Copyright Act states that copyright subsists in the work’s “maker” – which can even be a corporation.45 In relation to cinematographic works, the Act defines a maker as “the person by whom the arrangements necessary for the making of the work are undertaken”.46 Interestingly, the United Kingdom Copyright, Design, and Patent Act deems the “person by whom the arrangements necessary for the creation of the work are undertaken” to be the author of any computer-generated work.47 In the Canadian context, it might be more coherent with the remainder of Canada’s Copyright Act to employ a “maker” approach to copyright in works created by AI, rather than using a deemed authorship stance. This would avoid confusing the concept of authorship with ownership in copyright.48 …

Having copyright subsist in the maker of an AI created work would strike the appropriate balance. Although it would surely strengthen the economic incentives of using AI for creative applications, in doing so, it would provide a legal framework for the growth of an entirely new creative industry. If one of the objectives of copyright is truly the “encouragement and dissemination of works of the arts and intellect”, then it would behoove Canadian law makers to ensure that the Copyright Act appropriately reflects creativity in the 21st century.55

Interested to see if there are other Canadian sources on this topic!

“Creative Spark” in copyright is essentially magic

Sorry to keep beating on this, but there are things I need to work out here, so bear with me…

In the US Copyright Office letter regarding Zarya, they mention on 3 separate occasions (once in the lawyer’s letter, twice from the Copyright Office) the “creative spark” which is allegedly linked to the bare minimum requirements around creativity/originality/something/something.

They mention this term, creative spark, referencing a spine-tinglingly exciting work of copyright office lore called COMPENDIUM (THIRD), which seems to correspond to this PDF. Within that work, there’s a particular use of this phrase in section 310.3:

When the U.S. Copyright Office examines a work of authorship, it determines whether the work “possess[es] the minimal creative spark required by the Copyright Act and the Constitution.” Feist, 499 U.S. at 363.

However, upon looking up the Copyright Act of 1976, or US Code Title 17 (PDF), as it also seems to be called, it does not appear to include the word “spark” anywhere, let alone define it in plain language.

I believe this to be the text of the Supreme Court Feist case (see also the summary on Wikipedia). There are three references to “spark” in that document, the first one seeming the most relevant, used in relation to the term original or originality:

To be sure, the requisite level of creativity is extremely low; even a slight amount will suffice. The vast majority of works make the grade quite easily, as they possess some creative spark, “no matter how crude, humble or obvious” it might be. Id., § 1.08[C][1]. Originality does not signify novelty; a work may be original even though it closely resembles other works so long as the similarity is fortuitous, not the result of copying.

Another vague word that gets used to measure minimum required creativity levels is “modicum” and it rears its head here as well.

(a) Article I, § 8, cl. 8, of the Constitution mandates originality as a prerequisite for copyright protection. The constitutional requirement necessitates independent creation plus a modicum of creativity.

These terms might be commonly used in legal contexts, but I’m hard pressed to find anyone who can clearly define what constitutes a “creative spark” and a “modicum of creativity.”

Creative spark, for my money, sounds like a magical or mystical word more than a legal one. My impression as a reader and armchair analyst with the background that I have tends to link this concept to the idea of the “divine spark,” which is, depending on the tradition, something like the fingerprint or the shard of the Creator left in creation.

Perhaps there is some mysterious legal exegesis floating around out there which more properly links these two in the context of copyright and the Judeo-Christian tradition, but when I hear “creative spark” then, I sort of automatically assume we’re talking about a spiritual concept, which makes it much much easier to understand why nobody is actually able to explain clearly what the hell they are talking about here.

If they were just like “Oh, we mean it’s, uh, you know, magic…,” then I would be like, okay. Well, that’s stupid, but okay. At least you’re coming out and saying it clearly. But all the rest of this seems like a massive case of burying the lede, and then turning it into law.

If if if if my esoteric read of these interlinked concepts is true, I think what the Church of the Copyright Office is attempting to decree is that AI has no divine spark, and thus cannot something something. And artists who use AI are very bad, and you should all be ashamed of yourselves for eternity… 😉

Tangent: this rant made me remember Tolkien’s excellent essay on Subcreation, which because it is openly religious, actually ends up for me being a bit more coherent than the arguments promulgated by the US Copyright Office, which does not openly admit its work is serving a religious function of upholding the hegemonic colonialism of the imagination.

Quoting Dave Karpf on Failure Modes in Tech

Lots of good stuff in this essay on two modes of failure in tech by Dave Karpf. Briefly:

One way that a technology can fail is that it can work as intended, but at a much larger scale, with unexpected results

Assuming that an emerging technology works as-intended can be a huge stretch….

The second failure mode prompts an entirely different set of questions. What if the bugs in the emerging technology are not resolved? What if the market for it grows, and it gets incorporated into critical social systems, but it continues to fail in ways that are increasingly hard to see?

We ought to pay more attention to the second failure mode when imagining the trajectory of AI. I’m not worried about an imminent future of artificial general intelligence. I’m worried about a future where generative AI tools get baked into social systems and wreak havoc because the tech doesn’t work nearly as well as intended.

This second failure mode with regard to AI is also the one that I am concerned about, because it is more the norm in tech, as far as I’ve seen. Things get bigger, but not necessarily better. In fact, they often get worse, and no one gives enough of a damn to fix it after release, cause fixing bugs isn’t sexy. And even acknowledging that you built around a bad paradigm resting on a lot of faulty assumptions is less sexy still for companies and developers. And, as usual, it’s users and all the people downstream who are negatively affected by the tech who end up getting stuck with the bill.

Loose ends on the AI Copyright debate

Now, to tie together a few loose ends on the copyright of AI output front, from my previous posts about Photoshop’s new generative fill & copyright as colonialism.

Did I already post this? I’ve been in a haze on these subjects the last few days, but the US Copyright Office put out this follow-up guidance on AI-assisted art, firmly planting its flag in the “it all depends” territory – a sign of uncertainty as policy, if ever I saw one. These quotes are telling:

The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.

If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘‘traditional elements of authorship’’ are determined and executed by the technology—not the human user.

I realize that “traditional elements of authorship” here primarily has a legal meaning, but I think it’s important we – at some point – explode those notions, to show we’re no longer living in that world of traditions. And that the goal of authors and artists ought not to be to conform to the legal definitions of things, but to seek the deeper truths – and change their manifestations in our world.

But before that, here’s the other USCO quote:

In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that ‘‘the resulting work as a whole constitutes an original work of authorship.’’ Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are ‘‘independent of’’ and do ‘‘not affect’’ the copyright status of the AI-generated material itself…

In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘‘actually formed’’ the traditional elements of authorship.

I thought about this notion a lot while working on the generative fill Photoshop experiments last night – presumably I had a high level of involvement in determining the actual arrangement of elements within the images. It’s a much higher degree of specificity than working in Midjourney – though again, I frankly don’t give a shit what any bureaucrat thinks about my artwork or process. Their supposed hegemony does not impinge on my autonomy to make creative choices how I see fit.

I wanted to stick in this bit from Lawrence Lessig, who wrote on the matter of AI image copyrightability; I largely agree with him in general, if not in all the particulars:

In exchange for AI copyright protection, Congress could require that the AI technologies register the work in digital registries, tied to data that established provenance and ownership. These registries need not be the government’s, though the government should set standards for an approved copyright registry. If done right, AI creativity could engender the return to a system that made it easy to identify the owners of copyrighted work, and therefore easy to clear rights when that work is to be reused.

Interestingly, this type of provenance system is being pioneered to a certain degree in parallel via the C2PA standard, as well as Adobe’s implementation of it, Content Credentials. However, neither of those to my knowledge was designed with the express purpose of acting as a universal copyright registry or clearinghouse for ownership of IP.

C2PA’s about page says that it provides “a tool for creators to claim authorship while empowering consumers to make informed decisions about what to trust.”

Conceivably though, this kind of system could potentially be adapted to something like what Lessig describes above. From an Adobe help doc on their Content Credentials initiative:

When enabled, Content Credentials gathers details such as edits, activity, and producer name then binds the information to the image as tamper-evident attribution and history data (called Content Credentials) when creators export their final content….

It creates an open format for sharing information about the producer’s identity and the ingredients and tools used to make the content. These ultimately provide useful attribution information for audiences once the producer shares or publishes the image.
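To make the “tamper-evident attribution and history data” idea concrete, here’s a minimal Python sketch of how a provenance manifest might bind an edit history to an image via hashes. This is not the actual C2PA format or any real Adobe API – every name here is invented for illustration – but it shows the general mechanism such systems rely on:

```python
import hashlib
import json

def make_manifest(producer: str, tools: list[str], edits: list[str], image_bytes: bytes) -> dict:
    """Build a simplified provenance claim and bind it to the image via hashes."""
    claim = {
        "producer": producer,
        "tools": tools,
        "edits": edits,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    # Hash the claim itself, so any later change to the recorded history is detectable.
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "claim_sha256": hashlib.sha256(claim_bytes).hexdigest()}

def verify(manifest: dict, image_bytes: bytes) -> bool:
    """Re-derive both hashes; a mismatch means the image or its history was altered."""
    claim = manifest["claim"]
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    return (
        hashlib.sha256(claim_bytes).hexdigest() == manifest["claim_sha256"]
        and hashlib.sha256(image_bytes).hexdigest() == claim["image_sha256"]
    )

image = b"fake image bytes"
m = make_manifest("Some Producer", ["Photoshop (beta)", "Firefly"], ["generative fill x40"], image)
assert verify(m, image)
assert not verify(m, b"tampered image bytes")
```

The point is just that once the claim and the image are hash-bound, any later edit to either one is detectable – which is also roughly the property a Lessig-style copyright registry would need.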

I find questionable the claim that this information will ultimately prove that useful to audiences who are doomscrolling on their cell phones while on the toilet. But I suppose tied to a licensing system, this might be very powerful for (some) rights holders, or at least those who can afford to pursue the enforcement of their rights in court.

My knowledge of this might be a bit out of date (and I’m not sure if or when this is in force), but this also seems to plug handily into the EU’s direction in terms of copyright, where content is supposed to be scanned at time of upload (Wired, 2019).

This EU Commission document (2021) on the matter states:

Article 17 provides that online content-sharing service providers need to obtain an authorisation from rightholders for the content uploaded on their website. If no authorisation is granted, they need to take steps to avoid unauthorised uploads.

This all sounds incredibly restrictive, even if in theory it is intended to be maximally protective of copyright. It attempts to protect copyright by limiting other rights around fair use and free expression, which is especially problematic on the internet, where so much of our personal expression consists of re-packaging and re-publishing the expression of others.

There must be a better way than all of this, but it doesn’t seem like we’ve found it yet.

Copyright is Colonialism for the Imagination

The deeper I’ve gone into this topic of copyrightability of AI-assisted works, the more I’m convinced that copyright is merely another face of colonialism – this time applied to the products of the imagination. It’s yet another attempt by humans to chop things up into little boxes, collect rent, and prevent others from accessing these resources without going through the approved gatekeepers.

Which is not to say that I don’t include copyright notices in my AI Lore books; I absolutely do, though I realize the legal theory around doing so is anything but settled.

And that’s also not to say that I think creators don’t deserve to be compensated for their talents and efforts; they absolutely do.

I just think the present system is inherently flawed, and experientially it feels wrong for me as a creator.

My experience of creativity is that the more you access it, and share it, the more powerful your connection to it all becomes: the more you drink, the fuller your cup.

To squeeze and squeeze and squeeze, to clench down, and wrap your fists around your creations and say MINE MINE MINE is really the surest way as an artist to lose them, to lose the connection to the spark that brought you there and electrified the whole thing.

What’s the answer from an intellectual property perspective? Honestly, I really don’t know.

But I know what’s not the right answer for me personally as an artist: having to go beg permission from some outside authority, having to go ask them to validate my artistic hunches and outputs, having them tell me I’m wrong, or that’s not art, or that can’t be copyrighted because “Simon didn’t say.”

I call bullshit on all that. I don’t accept the spiritual authority of bureaucrats or any other would-be temporal power over my work; the only voice I have to obey is that creative impulse that is my birthright, as a being through whom life is flowing. All the rest, all the affairs and concerns of men (and especially men in suits lording it over others) can all burn for all I care, so long as I hold fast to that core & follow my light.

Exploring Photoshop’s Generative Fill Beta

I finally spent some time last night with Photoshop’s Beta version, which includes the generative fill tool, based on Adobe’s own generative model, Firefly.

I found it both underwhelming & janky, but also found that if you push it enough, you can do some interesting things with it. Here are the first four images I made with it, shown chronologically.

This one is loosely inspired by that David Bowie Blackstar video, where there’s a jewel-encrusted astronaut skull or something. It’s the result of dozens and dozens of prompts using the generative fill tool. It’s very “Photoshop looking,” with a lot of the objects layered in the foreground. It’s fine, and there are aspects that are interesting, but not where I’d like a tool like this to be able to take me.

I actually find the underlying Firefly image generation model to be “not that good” relative to something like the Midjourney v5+ family. Plus the keyword filtering is EXTREMELY restrictive in at least these early versions, making it clunky and unimpressive to use. I’ll come back to that topic later.

Here’s the second image I made from scratch with a few dozen prompts:

There was a Wired article a while back asking why generative AI images so often look like ’70s prog-rock album art, and this one falls squarely into that category. Though perhaps it’s slightly more metal than prog, or at least prog-metal bathed in surrealism. Pictorially, I like this one a bit better than the first experiment.

It’s also worth noting that, despite its flaws, you can at least use the generative fill tool on higher resolution images. All the ones in this set, for example, were made at 300 dpi, so you could conceivably use them for print output, which is cool.
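As a quick sanity check on that claim (a generic calculation, not tied to these specific images): at a given dpi, the pixel dimensions required for a physical print size are just inches × dpi.

```python
def pixels_for_print(width_in: float, height_in: float, dpi: int = 300) -> tuple[int, int]:
    """Pixel dimensions required to print at the given physical size and resolution."""
    return round(width_in * dpi), round(height_in * dpi)

# An 8x10 inch print at 300 dpi needs a 2400x3000 pixel image.
print(pixels_for_print(8, 10))  # (2400, 3000)
```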

Next up:

What I actually had envisioned was sort of an image of a Capricornian sea-goat leaping out of the water and twisting in mid-air, inspired by maybe like an old engraving on a cosmic-tinged map or something. Where I ended up is really different (and continues to show the absurdity of the US Copyright Office’s test about “predictability” being a factor in determining copyrightability), but I rather like it all the same.

Apart from Adobe’s snotty, paternalistic, and overly restrictive keyword filtering (for a suite of tools I pay $900+ CAD annually to access, I might add), the tool is also often simply ineffective. You lasso an area, ask for a given thing, and it just doesn’t deliver that at all. I recognize that this is still in beta, but that happened over and over again. Or it fails to constrain the thing you ask for in the manner in which you ask. For example, on the sort of white fish/rabbit body of the monster, I kept selecting the top part of it and asking for the top or front half of a goat, and it kept routinely giving me a whole goat.

All this makes the tool very tedious to work with, but there’s also something to the problem: since you can’t get exactly the thing you’re envisioning, you have to take many alternative side paths, prodding and poking until you eventually accept where you end up, or else stop altogether. That’s a frustrating process, but exploring these blind alleys can also take you to some interesting new places, if you’re able to sit back and ride the wave a little (while also directing it to whatever extent you can).

Another pic:

I was trying for a mermaid in the initial image, but ended up with this figure of a woman, and just sort of followed my instincts on what else to include. It ended up, I think, in a segment of the latent space that kind of calls to mind some 90s grunge music videos, like Black Hole Sun, or something from Nirvana, maybe.

Again, this was dozens upon dozens of selections & prompt-guided generative fills. One thing I found annoying but not surprising is that Firefly seems to prohibit “gun” as a prompt, along with things like “missile” and “warhead” etc. But as you saw in the first one, I did succeed in getting “artillery shell.” So I think there are holes in their keyword filtering (as in all keyword filtering) that you can still probably drive through.

Or you can just, you know, pop over into Adobe Stock, type exactly the same search term in, and get the thing you wanted no problem. So that doesn’t make sense to me at all as a user, that I can search for content in Adobe Stock – upon which Firefly is supposedly trained – and pull up something I’m not able to natively get the model to produce. That’s just so tedious, I can’t even engage with it further as a problem. And I think it really cripples the utility of the tool. Their design pattern should be: if it’s allowable in Stock, it’s allowable in Firefly. Otherwise, it’s just like this fever-dream of trying to navigate some idiotic unpredictable bureaucratic system of imagined cultural taboos… At least be consistent! And if I’m paying you for this service, give me the option to turn all this crap filtering off, please. I’m not doing anything wrong or illegal. I’m making fucking art, so hands off my fucking imagination, thank you very much.
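As a toy illustration of why those holes exist (the blocklist contents here are invented – I have no knowledge of Firefly’s actual filter): an exact-match keyword filter can’t catch near-synonyms of the terms it blocks.

```python
# Hypothetical blocklist, invented for this example.
BLOCKED = {"gun", "missile", "warhead"}

def allowed(prompt: str) -> bool:
    """Reject the prompt only if a token exactly matches a blocked keyword."""
    tokens = prompt.lower().split()
    return not any(tok in BLOCKED for tok in tokens)

assert not allowed("a soldier holding a gun")  # exact match: blocked
assert allowed("a pile of artillery shells")   # near-synonym slips through
```

Real filters are presumably more sophisticated than this, but the same basic gap – synonyms and paraphrases sailing past a term list – is hard to close entirely.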

Here’s one final one to chew on:

If random photographic snapshots are copyrightable, then so are generative AI creations. Here’s why…

Following on the theme of the UK enabling copyright registration of computer-generated works to “the person by whom the arrangements necessary for the creation of the work are undertaken,” I wanted to lay out a clear simple argument for why I think the US Copyright Office opinion letter on Zarya of the Dawn’s AI-generated comic panels not being eligible for copyright is basically wrong.

Here it is:

  1. Photos are copyrightable, including snapshots – even if I didn’t create or arrange the contents of what is depicted in the photograph myself.
  2. So, if I proceed to use a minimum amount of creativity (whatever that is) to capture a depiction of real space, the same basic principle ought to apply if I capture a depiction of a non-physical latent space using an AI-based “idea camera.”
  3. It could even be argued that merely clicking the shutter on a camera pointed at a real dog is a less effortful and less creative act (or perhaps they are at least equal) than prompting an AI image generator to depict, for example, an invented dog wearing a hat. The difference is merely the instrument used to make the depiction, which settles something into a fixed form.
  4. In the case of a copyrightable snapshot, basically no one tries to argue that, because it is actually the camera’s hardware & software which do the work of image processing and not the human, that the camera is the true “author” of that work. And yet, this is exactly the (I think very wrong) claim made by the USCO about AI-generated works.
  5. Lastly, the fuzzy claims about predictability of final images from AI generators don’t hold water as a test for any other kind of media. As I expressed in a recent post, they don’t hold water for writing a novel, many types of paintings, films, musical works, etc. It’s rare that you as the artist start with a perfect vision of the finished product and then merely mechanically transcribe it into your chosen medium. Almost all of those, most of the time, are processes of discovery, selection, editing, etc., with a great many steps before you arrive at a finished product you never quite envisioned at the outset.
  6. Further, to tack on one final point: even if I close my eyes, spin around, and randomly point and click my camera to capture images – these are all potentially copyrightable, provided they meet some imagined minimum of creativity/originality. ChatGPT offers one rationale that might prove the minimum threshold has been passed: “This could be based on choices like the time and place of the photograph, or the decision to initiate the snapshot at a particular moment.”
  7. Likewise, I think it’s no stretch to say that even the most basic and “boring” AI prompts and their results are always going to be embedded in the context of the lives of the people who created them in concert with these tools. When the circumstances and context (social, personal, political, etc etc) are viewed as constellations (i.e., as a part of their hypercanvas), it will be plain to see where and how the creativity and originality manifest.
