Questionable content, possibly linked

I’m not afraid of AI, I’m afraid of AI companies

As much as I think it’s a fun sort of thing to explore within the context of dystopian fantasy, I’m not really into the whole let’s-pretend-it’s-real “existential risk” fantasy football around AI. That’s because I’m not that worried about runaway AIs going rogue. I’m more afraid of AI companies simply accruing too much power, and of it becoming less and less possible to unwind. It’s part of what drove me to write the AI TOS.

I am very much on the “public option” team when it comes to AI development, which is why I appreciated most of the points in this Bruce Schneier article about AI & Trust:

And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.

The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.

A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access.

What Schneier describes is the basis for what the most recently published AI Lore book, The Continuity Codex, is all about: libraries around the world band together to form a truly public option AI based on all their collections. And for their troubles, they are bombed into non-existence by the newly re-elected psychoticratic Hyperion Storm.

And it is for that reason that I differ with Schneier’s piece on the viability of this line of thinking:

…the point of government is to create social trust.

While I think there’s a role to play for governments in public AI options, if history has taught us anything, it is that rich guys buying a platform can destroy it overnight, and the same is true for hostile actors suddenly taking over, gutting, perverting or otherwise terminally weakening government institutions.

So while “creating social trust” makes sense from a default-good rational actor point of view, we should not assume that the mechanisms intended to do that will not be subverted in the future, and be put to far worse ends, creating a dark reflection of something that can no longer be considered “trust.”

To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.

Tell that to Hyperion Storm!

Which is not to knock Schneier’s primary point: that public options (plural) are needed – they absolutely, urgently are. We just need to be careful about what kinds of swords we hand governments in this situation as well. If we’re potentially making a category error by thinking of AIs/corporations as “friends” because of the relational way we interact with them, we might also be making a grave error in assuming that, because governments have been more or less trustworthy so far, they will continue to be… Signs around the world seem to indicate the contrary, and I’m becoming more and more nervous about the speed and suddenness of the decay.

Which is not to say the League of Earth Libraries should not build the Continuity Codex! It absolutely must! There isn’t a minute to waste! As the motto of the Inter-Library Intelligence Network (clandestine branch of the LEL) famously says: Scientia omnia vincit!



1 Comment

  1. Tim B.

    also, if we defensively assume that government is not always a trustworthy actor, the suggestion of relying on academia and non-profits becomes in my experience somewhat dicey, as i fear those groups tend to lack the efficacy for software development at scale that we see more natively succeeding in corporations. doesn’t mean it can’t be done, but it means that we’re relying on a great many what-ifs as deliverables from parties which may or may not be capable of delivering them. it may also be our best option for now too… idk.

