Something I failed to get into in my last post about how we know when it’s time to socialize AI…
Well, actually I’ve written a bit about these problems of consent & coercion w/r/t the apparatus of the state in the past, so I won’t rehash them all here. I’ll just randomly toss out links and let the future figure it out:
Like, there are a lot of precursor conditions that would need to be met before democratic control over AI would actually even be a “good thing.”
We take it for granted in these discussions that (1) we actually have true democracy (we don’t), and (2) the control mechanisms we’ve attached to democracy are actually good…
Like I hate to break it to you, but have you looked at some of these so-called lawmakers as people? They are… not ideal specimens of humanity that we should all strive to mimic.
When people say they want democratic control of AI, do they mean by Ted Cruz? Cause I think he would be happy to speed us along toward that future, but I don’t think it’s one we actually really want…
Secondly (fourthly? fifthly? I lost count), coupling the power of decision-making to the yoke of popularity necessarily attracts power-grabbers. It does not necessarily attract people who are good decision-makers, let alone lawmakers.
So, if we really want democratic control of AI, I think we need to spend some more time on the core problems of democracy.
What if, for example, we uncoupled the popularity-contest part that politicians like from the lawmaking part? So you could “be” a politician, and seek status, and gain influence, etc. Be a voice. But you would have effectively no decision-making power. Maybe once you held office, you wouldn’t even be allowed to vote anymore. I don’t know. I’m just throwing out wild ideas here. It just seems like putting popularity at the helm of the law is a really big design flaw we’re willfully ignoring.
And then of course, not holding the so-called stewards of the law to even the most obvious basic ethical principles… god don’t get me started.
Also, there’s a who’s-to-say argument here: with something as pivotal as the mass-scale introduction of AI into global society, how do we know that a bunch of people voting directly, or voting for someone else to do their voting for them, would actually lead us toward “good” outcomes? While some of the immediate harms are obvious and predictable, the next-order effects and the ones after that become more and more chaotic.
Still, I think we can say the systems should be accountable and responsive to the people they affect. That seems like a given. A corporate capitalist structure does not optimize for that as an outcome, so unless we choose to change that basic structure to optimize for other outcomes, we’ll get what we get.
This direction seems to slip into a sort of technocratic corporatist fantasy, where corporations might argue they have greater expertise to run these complex systems than simple ordinary dumb citizens whose lives will be impacted from every direction by them.
It’s a conundrum that we will have to puzzle our way through one way or another without being able to see the bigger picture we’re assembling until it’s all finished.