Given these AI systems will soon be foundational to all aspects of our society and economy, it would be a risk to our national security and economic competitiveness to become entirely dependent on external providers.
I tend to agree with this, even if I have my doubts about the current shape or need for “nation-states” as a thing going forward. However, if you are a nation-state or someone concerned with the continuation of that system, it does seem like a certainty to me that AI threatens national sovereignty.
Sure, there are the data-protection arguments, the AI cybersecurity threats, and so on, or even the risk of an electorate being poisoned by a malicious AI. But to me the most clear and present danger is simply that AI companies will succeed. And will succeed wildly.
That is, they will come to control more resources directly (as in providing essential compute services), and will amass more money than many governments.
Judging from their blog post about planning for AGI, OpenAI is, to some extent, already planning for this:
We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.
While I think they have some good products, color me skeptical about all the above. According to Wikipedia, the cap on returns is 100x. So, if Microsoft puts in $10B, they can “only” get back $1T. That sounds “horrible.”
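The cap arithmetic is simple enough to sketch. A minimal illustration in Python (the 100x multiple is the figure cited from Wikipedia; the function name and structure are my own, not anything from OpenAI's actual agreements):

```python
def capped_return(investment: float, cap_multiple: float = 100.0) -> float:
    """Maximum amount an investor could receive under a 100x capped-profit model.

    This is a hypothetical simplification: real capped-profit terms
    reportedly vary by investor and round.
    """
    return investment * cap_multiple

# Microsoft's reported ~$10B investment:
print(f"${capped_return(10e9):,.0f}")  # → $1,000,000,000,000 (i.e., $1T)
```

Which is to say: the "cap" only starts to bind at a trillion dollars, a sum larger than the annual budget of most national governments.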
This is also a hell of a lot of trust to put in any group – corporate or otherwise – that it can be relied on to act for the good of humanity. And UBI: isn't that the province of government?
I managed to coax a fairly coherent set of arguments out of ChatGPT about these very real dangers to national sovereignty from AI:
The threat of governments becoming reliant on AI systems they do not control and the AI companies becoming extremely wealthy and powerful is a real concern. Here are a few reasons why:
- Dependence on AI systems: As governments increasingly rely on AI systems, they may become dependent on these systems to function effectively. This dependence can make governments vulnerable to the AI companies that control these systems.
- Control over data: AI companies that develop and maintain these systems also have access to vast amounts of data, including sensitive government data. This gives them significant power and leverage over governments that use their services.
- Unequal bargaining power: AI companies that become extremely wealthy and powerful can have unequal bargaining power with governments, potentially leading to situations where these companies can dictate terms and conditions to governments.
- Lack of transparency: AI algorithms can be complex, making it difficult for governments to understand how these systems work. This lack of transparency can make it difficult for governments to regulate these systems effectively.
- Inequality and Social Unrest: AI companies becoming extremely wealthy and powerful can exacerbate existing economic inequalities and contribute to social unrest.
To mitigate these risks, it is essential for governments to invest in building their own AI capabilities, rather than relying solely on AI companies.
I’m actually not even sure that those measures will cut the mustard. It’s not (only) that I’m a doomer, but that the through-line of history seems to indicate that corporations are able to innovate faster than governments. And even if a government were to get up to speed on building out their own sovereign AI infrastructure, by the time they do, the game might already be lost to the corporations.
In any event, perhaps none of this will happen. Perhaps it’s all just a symptom of my sci-fi fantasizing. But then again… at least you can vote in a nation-state (theoretically). Is the same going to be true under the AI corporatocracy?