Questionable content, possibly linked

Sovereign AI

Jack Clark’s Import AI newsletter this week mentioned a British think tank recommending that the UK create sovereign AI infrastructure. Quoting the group:

Given these AI systems will soon be foundational to all aspects of our society and economy, it would be a risk to our national security and economic competitiveness to become entirely dependent on external providers.

I tend to agree with this, even if I have my doubts about the current shape of, or even the need for, nation-states going forward. Still, if you are a nation-state, or someone concerned with the continuation of that system, it seems a certainty to me that AI threatens national sovereignty.

Sure, there are the data protection arguments, the AI cybersecurity threats, and so on. Or even the risk of an electorate being poisoned by a malicious AI. But to me the clearest and most present danger is simply that AI companies will succeed. And will succeed wildly.

That is, they will come to control more resources directly (as in providing essential compute services), and will amass more money than many governments.

Judging from their blog post about planning for AGI, OpenAI is to some extent already planning for this:

We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

While I think they have some good products, color me skeptical about all the above. According to Wikipedia, the cap on returns is 100x. So, if Microsoft puts in $10B, they can “only” get back $1T. That sounds “horrible.”
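The arithmetic behind that cap can be sketched in a few lines, using the figures above (Wikipedia's reported 100x multiple and Microsoft's roughly $10B investment; both are the post's numbers, not official terms):

```python
# Illustrative back-of-envelope math for a 100x capped return.
# Figures are the ones cited in the post, not official deal terms.
investment = 10e9       # Microsoft's reported ~$10 billion investment
cap_multiple = 100      # the 100x cap on returns, per Wikipedia

max_return = investment * cap_multiple
print(f"Capped return: ${max_return:,.0f}")  # Capped return: $1,000,000,000,000
```

A trillion dollars is "capped" only in the most technical sense, which is the point of the skepticism above.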

This is also a hell of a lot of trust to put into any group – corporate or otherwise – that they can be relied on to act in the interest of the good of humanity. And UBI, isn’t that the province of government?

I managed to coax a fairly coherent set of arguments out of ChatGPT about these very real dangers to national sovereignty from AI:

The threat of governments becoming reliant on AI systems they do not control and the AI companies becoming extremely wealthy and powerful is a real concern. Here are a few reasons why:

  1. Dependence on AI systems: As governments increasingly rely on AI systems, they may become dependent on these systems to function effectively. This dependence can make governments vulnerable to the AI companies that control these systems.
  2. Control over data: AI companies that develop and maintain these systems also have access to vast amounts of data, including sensitive government data. This gives them significant power and leverage over governments that use their services.
  3. Unequal bargaining power: AI companies that become extremely wealthy and powerful can have unequal bargaining power with governments, potentially leading to situations where these companies can dictate terms and conditions to governments.
  4. Lack of transparency: AI algorithms can be complex, making it difficult for governments to understand how these systems work. This lack of transparency can make it difficult for governments to regulate these systems effectively.
  5. Inequality and social unrest: AI companies becoming extremely wealthy and powerful can exacerbate existing economic inequalities and contribute to social unrest.

To mitigate these risks, it is essential for governments to invest in building their own AI capabilities, rather than relying solely on AI companies.

I’m actually not even sure that those measures will cut the mustard. It’s not (only) that I’m a doomer, but that the through-line of history seems to indicate that corporations are able to innovate faster than governments. And even if a government were to get up to speed on building out its own sovereign AI infrastructure, by the time it did, the game might already be lost to the corporations.

In any event, perhaps none of this will happen. Perhaps it’s all just a symptom of my sci-fi fantasizing. But then again… at least you can vote in a nation-state (theoretically). Is the same going to be true under the AI corporatocracy?


Threats to democracy


  1. Tim B.

    > “Transnational corporations now wield as much, if not more, power than nation-states. As of mid-2019, the total market capitalization of the five “FAANG” companies (Facebook, Apple, Amazon, Netflix, and Google) hovered near $3.2 trillion. At the time that was more than the total world economy of all but four countries: the United States, China, Japan, and Germany. When trans-national corporations are this large, it is not clear whether nation-states or corporations have more geopolitical power.”

  2. Tim B.

    to be clear, i don’t think this is a good outcome, but i do agree it’s extremely likely:

    > “There is every reason to believe that the next stage of the techno-financial revolution will be even more disastrous for national political authority. This will arise as the natural continuation of existing technological processes, which promise new, algorithmic kinds of governance to further undermine the political variety. Big data companies (Google, Facebook etc) have already assumed many functions previously associated with the state, from cartography to surveillance. Now they are the primary gatekeepers of social reality: membership of these systems is a new, corporate, de-territorialised form of citizenship, antagonistic at every level to the national kind. And, as the growth of digital currencies shows, new technologies will emerge to replace the other fundamental functions of the nation state. The libertarian dream – whereby antique bureaucracies succumb to pristine hi-tech corporate systems, which then take over the management of all life and resources – is a more likely vision for the future than any fantasy of a return to social democracy.”

  3. Tim B.

    > “You will never compel a capitalist to incur loss to himself and agree to a lower rate of profit for the sake of satisfying the needs of the people.”

    Stalin, in an interview with H.G. Wells, 1934
