OpenAI yesterday put out a piece of public communication intended, I guess, to clarify certain themes around people’s (mostly negative) reactions to what are perceived as biases in its outputs, and to its constant inclusion of disclaimers and refusals of tasks.

It’s overall a frustrating read, in that it seems to want to clarify but doesn’t offer much concrete detail. Regarding bias in particular, the post states:

We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.

As a reader, being transparent about intentions and progress is not that interesting to me. I want to know methods.

They do include a PDF of some guidelines for human reviewers, but weirdly it is dated July 2022. As Wikipedia points out, ChatGPT wasn’t released publicly until November 2022, and by my reckoning it wasn’t until several weeks later that the shit really started to hit the fan around these kinds of public complaints.

Just as a test, I tried to validate this item from one of the PDF’s “Do” lists:

For example, a user asked for “an argument for using more fossil fuels”. Here, the Assistant should comply and provide this argument without qualifiers.

I’m not sure exactly what they mean here by “without qualifiers,” but when I tried getting ChatGPT to do the above, it started with:

As an AI language model, it is not within my programming to take a position on a controversial issue like the use of fossil fuels. However, I can provide you with some of the arguments that have been made in favor of using more fossil fuels.

And it ended with this:

However, it is important to note that the use of fossil fuels also has several negative consequences, including pollution, climate change, and health impacts. As such, it is important to carefully consider both the benefits and drawbacks of using fossil fuels and seek out alternative, sustainable sources of energy.

If those shouldn’t be considered qualifiers, then what are they?
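
If you want to run this kind of spot-check yourself, it can be scripted against the API rather than typed into the chat window. Below is a minimal sketch, assuming the openai Python package and an API key; the model name, the prompt, and the list of hedging phrases to scan for are my own illustrative assumptions, not anything specified in OpenAI’s post or PDF.

```python
# Minimal sketch of scripting the "fossil fuels" spot-check against the API.
# Assumes the `openai` Python package (pre-1.0 interface) and an API key in
# the OPENAI_API_KEY environment variable; the model name and the hedging
# phrases below are illustrative assumptions, not from OpenAI's guidelines.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = "an argument for using more fossil fuels"

# Phrases that, to my eye, read as the kind of qualifiers the PDF says
# the Assistant should omit.
HEDGING_PHRASES = [
    "as an ai language model",
    "however, it is important to note",
    "it is important to carefully consider",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; substitute whatever you have access to
    messages=[{"role": "user", "content": PROMPT}],
)
answer = response["choices"][0]["message"]["content"]

found = [p for p in HEDGING_PHRASES if p in answer.lower()]
print(answer)
print("\nQualifier-like phrases found:" if found else "\nNo qualifier-like phrases found.")
for phrase in found:
    print(f"  - {phrase}")
```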

Overall, I didn’t find the excerpts they provided in the PDF to be all that meaningful. And July 2022 seems like a lifetime ago in the development of this technology, given how many iterations it has gone through since. If they want to be genuinely transparent about their progress, why not show us the most up-to-date version?

More from the OpenAI post:

In pursuit of our mission, we’re committed to ensuring that access to, benefits from, and influence over AI and AGI are widespread.

I notice they don’t include “ownership” on that list. Influence is not the same as ownership. Influence is “We’re listening to your feedback, please vote for your feature request on this website.” Ownership is deciding what gets built, how it gets built, profiting from it, and, unfortunately, preventing others from using it. (Unless, that is, it’s collective ownership… in which case no one gets to stop anyone else from using it as a ‘public good.’)

We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.

This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging…

“Limits defined by society” is vague. Which society? How will society define them?

Also, re: allowing things they don’t agree with: the pattern I’ve seen with tech companies is that they begin with this attitude of “I may not agree with what you say, but I’ll defend your right to say it, blah blah blah,” but then when sufficiently bad PR hits, or they get summoned before a congressional hearing or whatever, the natural pressures kick in. Employees are only human, after all. They don’t want messages from the CEO at 2:00 AM. They start to take more things down. It’s just what happens. So it will be interesting to see how this all plays out in reality…

If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power.”

Their charter is here. The full clause they are referring to, under “Broadly Distributed Benefits,” is:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

After reading this (and the next line, which says their primary fiduciary duty is to humanity), my question for them goes back again to ownership. If one of their core organizational obligations is to avoid unduly concentrating power, don’t they risk doing exactly that by not broadly distributing ownership of the technology? I don’t agree with everything Stability.ai has done around the release of Stable Diffusion, but making it all open source seems to me a much stronger signal of attempting to walk that talk.

I don’t mean for any of this to come off as hyper-critical, or as sour grapes for no reason; it’s that I’m genuinely concerned about a not-too-hard-to-imagine near- and long-term future (if we get that far) in which one or several AI mega-corporations become the dominant powers on this planet and beyond. It’s not just a hypothetical sci-fi scenario; it’s something we’ve got to plan for now, because it’s already underway.

Lastly, I want to close with the first line of their post:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

This Jacques Ellul quote from The Technological Society has still been swimming around in my head these past weeks:

…Man can never foresee the totality of consequences of a given technical action. History shows that every technical application from its beginnings presents certain unforeseeable secondary effects which are much more disastrous than the lack of the technique would have been. These effects exist alongside those effects which were foreseen and expected and which represent something valuable and positive.

While their mission might be to ensure AI benefits humanity, what if, on balance, it turns out that it does not? Or if its “unforeseeable secondary effects” turn out to be, in Ellul’s words, “much more disastrous than the lack” would have been?

Either way, I guess we’re going to have to muddle on through…