Within the past few months, a non-profit called the Partnership on AI (PAI, for short) released a document entitled “Responsible Practices for Synthetic Media,” which I read with interest. I then submitted to them a sample implementation of their recommendations in the context of a fictional blogging service, hoping to engage with them and other interested parties about how these principles might be deployed at the level of actual products.

I wrote them a few times, actually, expressing my interest. I’m not a complete nobody; I’ve worked for eight years in online Trust & Safety and published nearly 100 AI-assisted books that have received international media coverage. I never heard back from anyone I reached out to about chatting further.

Eventually, after maybe six weeks, I got added to a newsletter, the first edition of which included what appeared to be a polite brush-off:

We are currently working with existing collaborators on the next phase of the Framework launch and developing a process for an open call for additional Framework supporters. 

We will share more information about joining the Framework in the near future. In the meantime, we will keep you updated as we collect insights on how organizations are using the Framework as well as on PAI’s synthetic media work more broadly. 

Then I waited another month for another newsletter, which seemed delighted to inform me that they had held a big meeting with a bunch of important “experts” (to which I was neither invited nor included), and that this esteemed group had made a bunch of new, vague recommendations that I could read if I wanted to. Something to the effect of:

In this update, we share more about our latest initiative to develop safety protocols for large-scale AI, including top three insights from the kickoff of a multistakeholder dialogue on these protocols at a workshop co-hosted with IBM. Below you can find links to the Version 0 of the Protocols as well as recommendations and open questions from our recent workshop.

I don’t want to blast or alienate anybody, but for a group that includes “partnership” in its name, this is a staid and somewhat boring response to the work and excitement I tried to share with them as an interested and expert third-party potential collaborator.

Their email went on to explain:

The meeting convened 40 experts from across industry, academia, and civil society, including representatives from 12 industry model providers. Discussions included lightning talks from model providers who shared insights on their organization’s deployment decisions and challenges, followed by group work…

To me, as a, let’s say, “AI practitioner,” what this all says is: we asked a bunch of “experts” who have a vested financial interest in a particular outcome (a.k.a. our “partners”), and we excluded anybody who isn’t important enough, or who is not actually… you know… using the tools.

I know this kind of meeting does not come from a bad place; it is all incredibly well-intentioned, I have no doubt. But I think it is indicative of the state of the industry, and illustrative of the dangers of ONLY allowing companies, a handful of non-profits, and an assorted grab bag of “civil society” people to basically… decide things that will impact all of humanity by influencing the direction of regulations and business norms.

I find it a travesty that other types of potential contributors are summarily excluded from these kinds of convocations, and I think it is a practice that needs to be thoroughly re-evaluated and changed. Because AI is a frickin’ enormously consequential development for humanity. And we can’t just keep replicating the same old broken patterns that we have always perpetuated and expect them to yield new, different, or interesting results. From where I’m standing, this looks like more of the same old establishment jockeying and ‘inside baseball.’ Not a true partnership of the kind we need to face the enormous issues raised by AI.