I finally spent some time last night with the beta version of Photoshop, which includes the generative fill tool built on Adobe’s own generative model, Firefly.

I found it both underwhelming & janky, but I also found that if you push it enough, you can do some interesting things with it. Here are the first four images I made with it, shown chronologically.

This one is loosely inspired by that David Bowie Blackstar video, where there’s a jewel-encrusted astronaut skull or something. It’s the result of dozens and dozens of prompts using the generative fill tool. It has a very “Photoshop” look, with a lot of the objects layered in the foreground. It’s fine, and there are aspects of it that are interesting, but it’s not where I’d like a tool like this to be able to take me.

I actually find the underlying Firefly image generation model to be “not that good” relative to something like the Midjourney v5+ family. Plus, the keyword filtering is EXTREMELY restrictive, at least in these early versions, making the tool clunky and unimpressive to use. I’ll come back to that topic later.

Here’s the second image I made from scratch with a few dozen prompts:

There was a Wired article a while back asking why generative AI images so often look like ’70s prog-rock album art, and this one falls squarely into that category. Though perhaps it’s slightly more metal than prog, or at least prog-metal bathed in surrealism. Pictorially, I like this one a bit better than the first experiment.

It’s also worth noting that, despite its flaws, you can at least use the generative fill tool on higher-resolution images. All the ones in this set, for example, were made at 300 dpi, so you could conceivably use them for print output, which is cool.

Next up:

What I had actually envisioned was sort of an image of a Capricornian sea-goat leaping out of the water and twisting in mid-air, inspired by something like an old engraving on a cosmic-tinged map. Where I ended up is really different (and continues to show the absurdity of the US Copyright Office treating “predictability” as a factor in determining copyrightability), but I rather like it all the same.

Apart from Adobe’s snotty, paternalistic, and overly restrictive keyword filtering (for a suite of tools I pay $900+ CAD annually to access, I might add), the tool is also often simply ineffective. You lasso an area, ask for a given thing, and it just doesn’t deliver at all. I recognize that this is still a beta, but that happened over and over again. Or it fails to constrain the thing you ask for in the manner you specify. For example, on the monster’s white fish/rabbit body, I kept selecting the top part of it and asking for the top or front half of a goat, and it routinely gave me a whole goat.

All this can make the tool very tedious to work with. But there’s also something to the problem: since you can’t get exactly the thing you want or are envisioning, you have to take alternative side paths, prod and poke, and eventually either accept where you end up or just stop altogether. That’s a frustrating process, but exploring these blind alleys can also take you to some interesting new places, if you’re able to sit back and ride the wave a little (while also directing the wave to whatever extent you can).

Another pic:

I was trying for a mermaid in the initial image, but ended up with this figure of a woman, and just sort of followed my instincts on what else to include. It landed, I think, in a segment of the latent space that calls to mind some ’90s grunge music videos, like Black Hole Sun, or something from Nirvana, maybe.

Again, this was dozens upon dozens of selections & prompt-guided generative fills. One thing I found annoying, but not surprising, is that Firefly seems to prohibit “gun” as a prompt, along with things like “missile” and “warhead.” But as you saw in the first image, I did manage to get “artillery shell.” So I think there are holes in their keyword filtering (as in all keyword filtering) that you can still drive through.

Or you can just, you know, pop over to Adobe Stock, type exactly the same search term in, and get the thing you wanted, no problem. It makes no sense to me as a user that I can search for content in Adobe Stock (upon which Firefly is supposedly trained) and pull up something I’m not able to get the model to produce natively. That’s so tedious I can’t even engage with it further as a problem, and I think it really cripples the utility of the tool. Their design pattern should be: if it’s allowable in Stock, it’s allowable in Firefly. Otherwise, it’s just this fever dream of trying to navigate some idiotic, unpredictable, bureaucratic system of imagined cultural taboos… At least be consistent! And if I’m paying you for this service, give me the option to turn all this crap filtering off, please. I’m not doing anything wrong or illegal. I’m making fucking art, so hands off my fucking imagination, thank you very much.

Here’s one final one to chew on: