A friend sent me this piece recently, in which the author posits two types of, I guess, power users of AI. They are called here the centaur and the cyborg:

In AI technology, Centaurs refers to a type of hybrid usage of generative AI that combines human and AI capabilities. It does so by maintaining a clear division of labor between the two, like a centaur’s divided body. The Cyborgs by contrast have no such clear division and the human and AI tasks are closely intertwined.

I’m not sure I really agree with this, but the set-up goes something like:

A centaur method is designed so there is one work task for the human and another for the AI. […] The lines between the tasks are clear and distinct, just like the dividing line between the human and horse in a Centaur.

They then go on to describe trying to create an image of a centaur with AI image generators, and having a lot of difficulty applying what they frame above as a ‘centaur’ approach.

The point of this story is that the Centaur method failed to make the Centaur. I was forced to work very closely and directly with the AI to get the image I wanted, I was forced to switch to the Cyborg method. I did not want to, but the Cyborg method was the only way I could get the AI to make a Centaur with a robotic top. Back and forth I went, 118 times.

The more I thought about all this, the more I felt like, there’s not really any pure “centaur” use of generative AI that I have ever found. It is always without fail a negotiation, a conversation of iterations and tweaking, selection and improvement. It’s always a back and forth. So does that make it a cyborg activity? Weren’t centaurs in classical mythology also tutors, at least in Chiron’s case?

I’d love to believe that AIs “learn” from our interactions, and that I could be their centaur tutor (remind me to tell my AI tutor story some time), but in my experience they really don’t, or at least don’t seem to over short time increments. Perhaps they do in aggregate over longer periods, but that doesn’t help me in the moment to break through whatever knowledge-gates stand between me and whatever it is I’m trying to do.

I like the idea of cyborgs, but I am a biologically-biased humanist in the end. I like Haraway’s notion of the cyborg as a being or way of being that breaks down boundaries and bridges borders. I think there is something to that at any rate.

I guess that’s a long way around to saying that, in my eyes, there’s not really any distinction between a cyborg and a centaur user of current-generation gen AIs. If there’s any centaur AI out there right now, maybe it’s something along the lines of Harold Cohen’s AARON robots autonomously generating art at the Whitney. (And yes, that’s still art.) But as they say in that video, AARON is rules-based, not statistics-based like today’s crop of commercial gen AI tools. In a more rules-based situation, I suppose you could employ more of a “set it and forget it” method (or, for example, set your local Stable Diffusion install to continuously generate images of [….] and have it run 24/7 without intervention). I don’t know, I’m just exploring the idea space around all this to see where there might be usable ground or tools to employ.
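That unattended mode could be sketched, very roughly, as a loop like the following. Everything here is an assumption for illustration — the `generate` callback, directory layout, and filenames are made up, not any particular tool’s API; a local Stable Diffusion pipeline would be one way to fill in `generate`, but nothing below depends on it:

```python
# Hypothetical sketch of "set it and forget it" generation: one fixed
# prompt, no human in the loop. `generate` is injected so the loop is
# backend-agnostic -- a local Stable Diffusion pipeline would be one
# possible implementation, but nothing here assumes it.
import itertools
from pathlib import Path

def run_unattended(prompt, generate, out_dir="out", limit=None):
    """Generate images for one fixed prompt, saving each under a
    sequential filename, until `limit` (or forever if limit is None)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    counter = itertools.count() if limit is None else range(limit)
    saved = []
    for i in counter:
        image = generate(prompt)             # backend call; no intervention
        path = out / f"image_{i:06d}.png"
        image.save(path)                     # PIL-style save()
        saved.append(path)
    return saved
```

The interesting part isn’t the loop itself but what’s missing from it: there’s no selection, no iteration, no back-and-forth — which is exactly why it feels more “centaur” than anything I actually do with these tools.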

It seems this idea of the centaur computer user dates back at least to Garry Kasparov, and there’s an oft-repeated anecdote about how it got its name. This seven-year-old article by Nicky Case in the MIT Press Journal of Design and Science has a lot of great stuff in it, but I’ll just clip some interesting bits. After losing to IBM’s Deep Blue:

However, Garry couldn’t help but imagine: what if a human did work together with an AI? The next year, in 1998, Garry Kasparov held the world’s first game of “Centaur Chess”. Similar to how the mythological centaur was half-human, half-horse, these centaurs were teams that were half-human, half-AI.

Later:

In 2005, an online chess tournament, inspired by Garry’s centaurs, tried to answer this question. They invited all kinds of contestants — supercomputers, human grandmasters, mixed teams of humans and AIs — to compete for a grand prize.

Not surprisingly, a Human+AI Centaur beats the solo human. But — amazingly — a Human+AI Centaur also beats the solo computer.

… The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains.

And this is fantastic:

a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.

Of course, there will be a fair amount of the new ways of thinking, living, and being that will be abhorrent, but there will also be many that are beautiful, true, and interesting.

Doug Engelbart envisioned that the computer would be a tool for intellectual and artistic creativity; now, our devices are designed less around creation, and more around consumption. Forget AI not sharing our values — even non-AI technology stopped supporting our values, and in some cases, actively subverts them.

And this, I think, is where this starts to get really interesting:

At first, Garry wasn’t surprised when a human grandmaster with a weak laptop could beat a world-class supercomputer. But what stunned Garry was who won at the end of the tournament — not a human grandmaster with a powerful computer, but rather, a team of two amateur humans and three weak computers! The three computers were running three different chess-playing AIs, and when they disagreed on the next move, the humans “coached” the computers to investigate those moves further.

As Garry put it: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

This suggests that multiple non-expert humans using multiple AI tools, carefully sifting through results, can arrive at conclusions that other supposedly “better” thinkers might not.
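Purely as an illustration, the process Case describes — poll several engines, accept unanimous answers, escalate disagreements to the humans for deeper analysis — has a simple shape. The engine callables and the `escalate` step below are stand-ins I’ve invented, not any real chess API:

```python
# Illustrative sketch of the winning team's process: poll several
# engines, accept unanimous answers, and escalate disagreements for
# deeper (human-guided) analysis. Engines and `escalate` are stand-ins.
from collections import Counter

def choose_move(position, engines, escalate):
    """Ask every engine for a move; unanimity passes through,
    disagreement goes to `escalate` with the candidate moves."""
    votes = [engine(position) for engine in engines]
    counts = Counter(votes)
    move, n = counts.most_common(1)[0]
    if n == len(engines):                    # all engines agree
        return move
    return escalate(position, list(counts))  # humans "coach" further
```

The details don’t matter; the shape does — cheap, weak parts plus a better process for resolving their disagreements, which is exactly the “+” Kasparov is pointing at.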

AIs are best at choosing answers. Humans are best at choosing questions.

And that’s how the winning Human+AI team of the 2005 online tournament chose their “+”. The two amateur humans gave questions to their three weak computers, and when the computers gave back differing answers, the humans gave them even deeper questions. […]

In all these examples of centaurs, the human chooses the questions, in the form of setting goals and constraints — while the AI generates answers, usually showing multiple possibilities at once, and in real-time to the humans’ questions. But it’s not just a one-way conversation: the human can then respond to the AI’s answers, by asking deeper questions, picking and combining answers, and guiding the AI using human intuition.

Anyway, more to say here, but those are all the loose ends I had hanging in my brain about this for the last week and a half or so…