I tend to take an “everything” view of AI, as with technology in general: it brings with it some of the best, some of the absolute worst, and a vast squishy realm of in-between mediocrity. Consequently, I might consider myself an “AI native” in some regards while being strongly opposed to it in others.
I’ve thought a lot about the “hard” AI takeover, and have outlined it in several books: nation-states collapse under polycrises, and AI mega-corporations take over to pick up the slack, even going so far as to promise humans utopia, with varying degrees of fidelity to that ideal (depending on your vantage point, and on whether you think humans are sentient).
But what is far less spoken of is the “soft” AI hegemony under which we’ve already been living for quite some time: the automation of human attention and behavior. There’s an unbroken chain connecting the Fordism of Huxley’s *Brave New World* (written nearly a hundred years ago) to the adversarial social control mechanisms that emerged out of social media (or were built in intentionally, depending on whom you ask).
Wikipedia’s definition of hegemony may be of service to this line of inquiry:
“…the political, economic, and military predominance of one state over other states. In Ancient Greece (8th c. BC – AD 6th c.), hegemony denoted the politico-military dominance of the hegemon city-state over other city-states. In the 19th century, hegemony denoted the “social or cultural predominance or ascendancy; predominance by one group within a society or milieu” and “a group or regime which exerts undue influence within a society”.”
AI already does – and automation in general, even more broadly – exert hegemonic influence over much of human society. It may be chaotic, and driven by the dynamics of many diverse actors, but its outlines are detectable everywhere.
It makes me think of this line from Sarah Packowski’s article about the many questions AI-assisted content brings up:
“If I use a Google search to find additional resources, I’ve used a variety of AI technologies all in one.”
Never mind social home feeds and recommendation systems built on machine learning (which is what most “AI” actually is): the AI hegemony is all around us already.
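To make the mechanism concrete, here’s a toy sketch of that feed-ranking loop in Python. This is my own illustration, not any platform’s actual code; the item features, engagement model, and learning rate are all invented assumptions:

```python
# A toy engagement-ranking loop, illustrating the "soft" mechanism:
# predict what will hold a user's attention, serve it, then fold the
# user's reactions back into the model. Invented for illustration;
# no real platform works exactly like this.

from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    features: dict  # e.g. {"outrage": 0.9, "cuteness": 0.1}

@dataclass
class User:
    weights: dict = field(default_factory=dict)

    def predicted_engagement(self, item):
        # Dot product of learned preferences against item features.
        return sum(self.weights.get(k, 0.0) * v for k, v in item.features.items())

    def observe(self, item, engaged, lr=0.1):
        # Nudge preferences toward whatever the user lingered on.
        sign = 1.0 if engaged else -1.0
        for k, v in item.features.items():
            self.weights[k] = self.weights.get(k, 0.0) + sign * lr * v

def build_feed(user, inventory, n=3):
    # Rank the whole inventory by predicted engagement; serve the top n.
    return sorted(inventory, key=user.predicted_engagement, reverse=True)[:n]

if __name__ == "__main__":
    user = User()
    inventory = [
        Item("Local bake sale this weekend", {"wholesome": 0.8}),
        Item("You won't BELIEVE this scandal", {"outrage": 0.9}),
        Item("Kitten learns to swim", {"cuteness": 0.7, "wholesome": 0.3}),
    ]
    # The user clicks the outrage piece once...
    user.observe(inventory[1], engaged=True)
    # ...and the feed reorders itself around that signal.
    for item in build_feed(user, inventory):
        print(item.title)
```

Even this caricature shows the essential dynamic: the system doesn’t need to understand you in order to steer you; it just needs a signal to climb.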
Even if we wanted to, is it something we could even extricate ourselves from at this point without the economy and society having a hard crash?
The Wikipedia article about Gramsci’s concept of cultural hegemony seems even more fitting, and is a good beach-head for further reading:
“In Marxist philosophy, cultural hegemony is the dominance of a culturally diverse society by the ruling class who manipulate the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm. As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class.”
I’m not a Marxist, but there seems to be some applicable narrative logic here: one day – if not already – AIs could be described as a tool of the “ruling class” (with the limited “user class” having little or no recourse). AIs may well become the ruling class themselves, and cut out the middleman. It’s not just that this is conceivable; we’re on a pretty solid trajectory to make it happen, it seems.
What can we do? I’m really not sure at this point. We can’t just go live in the woods without it coming to us eventually. Ted Kaczynski proved that, and in doing so amply proved Philip K. Dick’s dictum:
“To fight the Empire is to be infected by its derangement…. Whoever defeats the Empire becomes the Empire; it proliferates like a virus.”
I decided to ask some of the currently public AIs their thoughts, using the same prompt on multiple platforms: “How can the AI hegemony be defeated?”
You.com/chat:
“The AI hegemony can be defeated through a combination of strategies, such as developing counter-AI technologies, implementing strong regulations and policies, and educating the public on the dangers of unchecked AI. Additionally, collaboration between governments, industry, and civil society is essential to ensure that all stakeholders are on the same page when it comes to how AI should be regulated and deployed. Finally, AI ethics and safety must be taken into consideration when developing and deploying AI technologies, to ensure that the technology is not used in ways that could be harmful or exploitative.”
I asked it whether that wouldn’t be too little, too late, and whether regulation isn’t too slow to be meaningful. It replied:
You.com/chat:
“A regulatory approach may indeed be too little too late in certain cases. Laws tend to move slowly, and by the time they are implemented, the damage caused by unchecked AI may already be done and largely irreversible. However, this is not always the case, and in some situations, regulations can have a significant impact in limiting the damage caused by AI technologies. Additionally, laws and regulations can also serve as a deterrent, helping to discourage people and companies from engaging in harmful or unethical AI practices. It is important to keep in mind, however, that regulations alone are not enough to protect against the potential harms of AI; they must be combined with education, collaboration, and ethical guidelines to ensure that AI technologies are developed and deployed responsibly.”
Not quite a revolutionary spirit, but okay.
I asked komo.ai the same question and got:
“We are experiencing a high demand of requests to the server, please try again later.”
Very helpful, and par for the course for the spotty reliability of the current AI hegemonic regime.
I really like this extremely post-modernist, cobbled-together reply from perplexity.ai:
“The AI hegemony can be defeated by reinvention of the international political economy[1], as well as through competition and collaboration between countries in the race for AI supremacy[2]. Preemption is also justified against stateless terrorists wielding weapons of mass destruction, but cannot be the core of a general non-hegemonic strategy[3]. Additionally, confiscating AI cores can decrease a player’s reputation with the Hegemony in StarSector[4].”
(I would argue, however, that confiscating AI cores has always been a sound strategy, and will continue to be.)
ChatGPT seemed to specifically want to steer me away from the revolutionary edge which my prompt perhaps implied.
CHATGPT:
“The concept of an “AI hegemony” refers to a hypothetical scenario in which artificial intelligence (AI) becomes so advanced and powerful that it dominates human society. Defeating an AI hegemony would likely require significant advances in AI safety research and the development of robust systems that can control and regulate AI behavior.
This could involve designing AI systems with ethical constraints and values that align with human interests, as well as creating methods for monitoring and controlling the actions of advanced AI systems. Additionally, it may be necessary to build AI systems that can learn and evolve along with human societies, ensuring that they remain aligned with human values over time.
It’s important to note that the possibility of an AI hegemony is still largely speculative and there is ongoing debate among experts about the likelihood and timeline of such a scenario. Nevertheless, many researchers and organizations are actively working to ensure that AI is developed and used in a responsible and ethical manner that benefits humanity.”
Mmkay, so the only way we can defeat it is by… accepting it and making it “ethical”? I mean, that’s certainly one perspective (and it may prove to be the right one), but it doesn’t really answer the spirit of my question: how can it be *defeated*?
I told it I wasn’t talking about the hard AI takeover (and didn’t bother to bring up that by the time we reach the scenario it describes above, it will already be too late), but about the “insidious undue societal influence of automated systems.”
It didn’t seem to want to grasp that concept.
Instead, it steered the conversation towards the familiar notions of bias and discrimination. Those are real and meaningful things, but I’m trying to get at something deeper: the control of humans as though they were themselves little more than machines.
CHATGPT:
“Ah, I apologize for the misinterpretation. Yes, the undue influence of automated systems, also known as “algorithmic bias” or “automated decision-making,” can have significant and sometimes harmful effects on society.
This occurs when AI systems make decisions that perpetuate or amplify existing biases and discrimination, or when these systems are designed or used in a way that lacks transparency, accountability, and ethical consideration. To combat this, it’s important to increase awareness of the issue and prioritize the development of ethical and transparent AI practices.
This can involve incorporating diverse perspectives and experiences into the design and development of AI systems, as well as conducting regular evaluations and audits to ensure that these systems are functioning as intended and do not have unintended consequences. Additionally, there should be more regulations and guidelines in place to hold individuals and organizations accountable for the use and impact of AI systems.”
I told it that it was talking about building a more friendly AI hegemony, not defeating it.
CHATGPT:
“My apologies if I misunderstood the question again. To address the undue influence or societal impact of AI systems, one approach could be to work towards creating a more responsible and ethical AI that operates within well-defined parameters, rather than trying to defeat or eliminate the technology altogether.
This could involve developing AI systems that are designed with human values in mind, and ensuring that their decision-making processes are transparent, explainable, and subject to human oversight and control. Additionally, it may involve creating safeguards to prevent AI from perpetuating existing biases or exacerbating existing inequalities.
Ultimately, the goal should be to harness the potential benefits of AI while minimizing its negative impacts on society. This will likely require close collaboration between experts in AI, ethics, and policy, as well as ongoing dialogue with communities and stakeholders who are likely to be impacted by these technologies.”
This is all true, and it’s the “conventional” approach to minimizing harmful impacts. As a commercial tool, ChatGPT is not going to recommend the Butlerian Jihad. But as both a science fiction writer and a (sometimes) crackpot, I want to at least be able to take the conversation further… and this is as far as I could get:
CHATGPT:
“Defeating AI altogether is a complex and potentially unrealistic goal, as AI has already become deeply integrated into many aspects of modern society and has the potential to bring significant benefits in areas such as healthcare, transportation, and finance.
However, it is possible to mitigate the negative effects of AI and ensure that it is developed and used in a responsible and ethical manner. This can involve advocating for strong regulations and guidelines that hold individuals and organizations accountable for the impact of their AI systems, as well as promoting transparency and accountability in the development and deployment of AI.
Additionally, it may be necessary to develop new technologies and approaches for controlling and regulating AI behavior, such as creating “kill switches” or other methods for quickly shutting down AI systems in the event of unintended consequences or other problems.
Ultimately, the challenge of AI is not about defeating it, but about finding ways to manage its risks and maximize its benefits for humanity. This will require careful consideration, collaboration, and ongoing vigilance from a variety of stakeholders, including researchers, policymakers, and the public.”
This reply seems like *exactly the kind of thing* the AI hegemony would want you to think: that the current order is natural, fixed, and good, and that even to challenge it, or to think of challenging it, is a foolish waste of time and energy.
I rebel against that, even if it is my own demon that I am projecting into the void opened up by these technologies.
I’ll close on a quote from Charles Stross’s amazing “Dude, you broke the future!” speech, where he posits that corporations already are – and have been for hundreds of years – “slow motion AIs.”
“We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.”