Via the latest ImportAI newsletter, this excellent piece, Whoever Controls Language Models Controls Politics by Hannes Bajohr.

There’s a lot worth unpacking in this piece, but I’m just going to jump straight to the end, where the author says:

If AI systems become the site of articulating social visions, a dominant factor in the make-up of the public sphere, or even a political infrastructure themselves, there is much to be said for actually subjecting them to public control as well. If this is taken to its logical conclusion, the last resort would be, horribile dictu, communization – in other words, expropriation.

It’s an idea that seems to be taking root, and is also reflected in this FT piece:

It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight.

I increasingly agree that this is the biggest problem of AI’s rapid ascendancy: that a few powerful players will own it all and amass altogether too much power for themselves.

OpenAI’s charter seems worth referencing again here, particularly this bit about unduly concentrating power:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

I guess I’m starting to think, especially since GPT-4 and the mass adoption we’re seeing of OpenAI’s technologies, that they have already crossed this ill-defined threshold of unduly concentrating power. Is there a … duly (?) way to concentrate power? Is it ever actually desirable that one company not only controls the technology itself, but now effectively gets to control all the other technologies that third parties build on top of it?

Rather than pointlessly halting AI research for six months (so that Elon’s outfit can catch up – absolutely the only reason that fucker signed the moratorium), I can now strongly see the arguments for socializing it, and for putting literally *any* AI model that becomes powerful enough to be “really good” into the hands of the public.

Citizens’ assembly to (non-violently) occupy OpenAI?

Citizens’ assembly to occupy Midjourney?

Are we there yet?

Too soon?

What I want to know is: what level of threat to human livelihoods and democratic governance systems would compel us to act? And what would appropriate action look like? My impression is that it will only get harder to say “no” to systems like this, or to the organizations that created them, the more entrenched their services become in the marketplace. That entrenchment lets them amass a great deal of power and potential points of control, not to mention money. What happens when governments become wholly dependent on privately-owned AIs? There aren’t easy or simple answers to these questions, but I tried to explore them in fictional form here.