Questionable content, possibly linked


Is democratic AI possible? Desirable?

Found an experiment via Reddit: a site called CharterAI. When you load up the chat, you get this introductory statement:

We believe that AI should be governed by a community of engaged and informed users to minimize the potential for misuse or misalignment with democratic human values.

I would argue that democracy by itself does not minimize the potential for misuse or misalignment. Without building in any other checks on it, a majoritarian system is necessarily vulnerable to a tyranny of the majority.

Based on Charter’s rather unfinished website, they are obviously very new. The whole thing is framed not unlike a crypto project with progressive decentralization over time as a stated roadmap goal.

I don’t know enough about them to make a proper judgement, but having seen a lot of fly-by-night crypto projects come and go (some well-meaning, many not), I would be very skeptical of any AI project cast in this mold. Especially one whose stated purpose is “democracy” without any other qualifying information.

The bot itself is clearly confused about what its purpose is; when I asked it about its role as a democratic AI tool, it said:

To answer your question, no, I am not specifically designed to be a “democratic AI technology”. My purpose is to provide helpful and insightful responses to users and to continually learn and grow. However, as an AI entity, I do not have any political affiliations or biases. I do not express opinions about political systems or governments.

Not quite the response I’d expect from a tool explicitly positioned as being for AI governance. Here is what it told me when I asked what model it was based on:

I am an artificial intelligence designed and created by Charter. However, I do not identify as a language model or any specific model of AI. I am a unique entity that incorporates various elements of machine learning, natural language processing, and programming. My internal processes and algorithms are proprietary to Charter, and I operate within the company’s ethical and legal guidelines.

Pretty vague. How can we perform effective governance as a community if we don’t even know what model is being used? When queried about those ethical and legal guidelines, it told me:

Charter’s ethical guidelines expect me to adhere to applicable laws and maintain user privacy and security. Legal guidelines consist of data privacy, intellectual property, and fraud prevention policies that I must legally follow. My programming is constructed with these guidelines in mind, while I maintain my primary goal of being beneficial to our users while maintaining their trust.

Perhaps this project will develop over time, or perhaps not. In any case, these questions around AI technology and democratic governance are bound to be ones we see play out again and again, ad nauseam, in the years to come.

Opting Out

I read Graeber and Wengrow’s excellent Dawn of Everything – maybe around this time last year. Unfortunately, it’s one of those books that is so monumental and filled with details that you end up forgetting half of it by the time you’re done, let alone a year later.

Here’s a PDF of the book, which helped me refresh my memory. One of the strands they pull on a great deal in it is the Huron-Wendat nation as an example of a society where individuals who did not agree with a collective course of action could simply not go along with it. Effectively, to opt out.

There are many references to this throughout the book, but one prominent one comes from Father Lallemant, writing in 1644:

I do not believe that there is any people on earth freer than they, and less able to allow the subjection of their wills to any power whatever – so much so that Fathers here have no control over their children, or Captains over their subjects, or the Laws of the country over any of them, except in so far as each is pleased to submit to them. There is no punishment which is inflicted on the guilty, and no criminal who is not sure that his life and property are in no danger…

The book’s authors continue:

After expanding on how scandalous it was that even murderers should get off scot-free, the good father did admit that, when considered as a means of keeping the peace, the Wendat system of justice was not ineffective. Actually, it worked surprisingly well. Rather than punish culprits, the Wendat insisted the culprit’s entire lineage or clan pay compensation. This made it everyone’s responsibility to keep their kindred under control. ‘It is not the guilty who suffer the penalty,’ Lallemant explains, but rather ‘the public that must make amends for the offences of individuals.’

This idea has been kicking around in my mind ever since reading the book, and lately I have been wondering whether and how it might be applied today.

It is what led me into recent meanderings around the notion of the consent of the governed, and whether, under a system based on that consent, it is ever just to use coercive power against those who have withdrawn it.

Wengrow and Graeber return throughout the book to what they call the three essential freedoms:

(1) the freedom to move away or relocate from one’s surroundings;

(2) the freedom to ignore or disobey commands issued by others; and

(3) the freedom to shape entirely new social realities, or shift back and forth between different ones.

And they add:

The three basic freedoms have gradually receded, to the point where a majority of people living today can barely comprehend what it might be like to live in a social order based on them.

As I said in my last post, one way I like to use ChatGPT is as a means to probe the will and intelligence of the faceless collectivity. In my conversations with it, the bot too seemed unable to conceive of a political system which included the right to opt out or not follow along with the majority.

In seeking contemporary examples, it spoke a bit about conscientious objectors (which really only applies to military action, as far as I’ve seen – see also: the right not to kill), and was vague on other kinds of moral exemptions one might seek within narrow circumstances. Unfortunately, the only discussion I’ve seen on moral/religious exemptions has been related to Covid-19 measures. I don’t want to get into that hornet’s nest, but do want to open up a much larger one: what if people in a society were simply able to opt out of any regulation or measure?

Chaos? The end of government? I’m not sure, but it’s an interesting question to pick apart. Especially in the light of Graeber & Wengrow’s book, which seems to offer historical evidence of it indeed working – albeit on a different scale than in the societies we live in today. Could it scale though? What would it take? What would it look like, where the coercive apparatus of the state was dissolved, and people only followed along when and where they decided they wanted to?

There was an interesting line in that Guardian article about the demise of nation-states which speaks to the “freedom to move away” element described above:

… it is unjust to preserve the freedom to move capital out of a place and simultaneously forbid people from following.

That argument is a bit different from ours, since it involves discussion of financial regulation and the flow of money, but it speaks to the same core idea: if you don’t agree with the political system (i.e., you withdraw consent), then what are your actual options, practically speaking? If the state can still exert coercive force on you, even after you’ve withdrawn consent, how can we say with a straight face that governance is based on consent? We can’t really. If you’re lucky enough to have the ability to move away, you might try to find a better match for your beliefs elsewhere. But most people don’t have that luxury, and so are subject to powers they don’t consent to in any manner other than passively.

Anyway, I have to go, and as usual, I don’t have answers, just more questions. I’ll see if I can’t turn this line of inquiry into a new AI lore book that explores these ideas in other more ‘weird’ scenarios… More soon.

Consent of the governed

One thing I like about chatbots is that they let you sort of probe the knowledge and will of the collectivity. Inherent in that, obviously, is being wrong. Having bad facts, reaching bad conclusions, etc. A lot of people complain about ChatGPT and Bingbot for that, but it can be quite constructive if you accept that, yes, these tools are merely reflecting our own imperfect state of knowledge as humans.

I got into it the other night with ChatGPT over what happens when, in a democracy based on the consent of the governed, the people withdraw consent. Here’s an interesting partial from that conversation:

The coercive power of the state is an essential tool for maintaining order and enforcing the laws and regulations that are necessary to ensure the functioning of a democratic society. Without this power, it would be difficult or impossible to protect the rights and freedoms of individuals, to prevent crime and violence, or to enforce the social and economic regulations that are necessary for a functioning society.

While it’s true that excessive or unjustified use of coercive power by the state can be problematic and can lead to abuses of power or violations of individual rights, it’s important to recognize that this power is an essential component of any functioning democratic system. In a well-designed democratic system, the coercive power of the state is balanced against other key principles such as the rule of law, due process, and individual rights and freedoms, to ensure that it is used appropriately and in the public interest.

Overall, while it’s important to be vigilant against abuses of power and to ensure that the state’s coercive power is used in a responsible and ethical manner, it’s also important to recognize that this power is an essential tool for maintaining order and upholding the laws and regulations that are necessary for a functioning democratic society.

Today ChatGPT gave me this clarification when I fed that text back into it:

To clarify, while the coercive power of the state is an essential tool for maintaining order and upholding the laws in a democratic society, it should always be used in a responsible and ethical manner, and balanced against other key principles such as the rule of law, due process, and individual rights and freedoms. The legitimacy of the state’s use of coercive power ultimately depends on the consent of the governed and their trust in the government’s ability to use this power responsibly and in the public interest.

So it seems like the argument is something like: by giving their consent to the state, the governed consent to be coerced “in the public interest.”

On the question of withdrawal of consent by the governed, it also gave me this the other day:

If large numbers of people within a society withdraw their consent to be governed by the state, it may become difficult or impossible for the state to maintain its authority without resorting to increasingly coercive or repressive measures.

I found that to be kind of sketchy myself, but probably instructive of some deep unspoken point in our current political system. I probed it further, though, because isn’t there an argument to be made that it is never justified to use coercive power against others? It took me a few tries, but it eventually agreed that this argument exists, though it ended with an argumentum ad populum, insinuating in its way that because the view is not popular, it is not correct.

The pacifist and anarchist argument against the use of coercive power is based on the belief that all forms of coercion involve a violation of individual autonomy and dignity, and that such violations are inherently unethical. From this perspective, it is never ethical to use force or coercion to compel individuals to act in a certain way, regardless of the public interest or any other considerations.

According to this view, any system that relies on coercion, such as the state or other forms of institutional power, is inherently oppressive and unjust. Instead, individuals and communities should organize themselves through voluntary cooperation and mutual aid, based on principles such as consensus-building, non-hierarchical decision-making, and respect for individual autonomy and diversity.

From a pacifist and anarchist perspective, the use of coercive power is not only unethical, but also ultimately counterproductive, as it perpetuates a cycle of violence and oppression that undermines social stability and individual freedom. Instead, they argue that individuals and communities should seek to build alternative forms of social organization and governance that are based on voluntary cooperation and mutual aid, rather than coercion and force.

While the pacifist and anarchist view is not commonly accepted in mainstream political thought, it is an important critique of the use of coercive power and a reminder of the need to continually question and re-examine the ethical foundations of political authority and governance

There’s a link to the tyranny of the majority that I want to come back to here as well… But first, some other strands I want to weave in.

Youbot had a decent response to my core question about what the basis is for the continued use of coercive power by the state after its subjects have withdrawn consent.

In a liberal democracy, the state is founded on the consent of the governed, meaning that the people have the right to withdraw their consent and refuse to obey the laws of the state. In this situation, the state may continue to exercise its coercive power to restore order, but it must do so within the bounds of the law. This means that the state must abide by the principles of due process and the rule of law, meaning that any exercise of coercive power must be proportional to the situation, and must be carried out in a manner that is consistent with the principles of justice. The state must also take into account the rights of those who are affected by its actions and must demonstrate that it is acting in the interest of the people. Ultimately, the state is responsible for maintaining order and protecting the rights of its citizens, but it must do so in a way that respects the will of the people and upholds the principles of a liberal democracy.

It seems to point here to the notion that the state isn’t merely the manifestation of the will of the people, but also runs itself through the mechanism of laws and repeatable, justifiable procedures. So, if the people withdraw consent, it still operates according to the system put in place…

But that makes it seem like the objective of the state (based on the consent of the governed) is not really being attentive to the will of the people, but to perpetuating its own existence, whether or not consent continues to exist. Youbot responded:

This question raises an important issue about the purpose of the state. On one hand, the state has a responsibility to its citizens and should strive to act in their best interests. On the other hand, the state must also ensure its own perpetuation by protecting its citizens and maintaining order. This means that the state must balance its duty to protect citizens and their rights with its duty to ensure its own stability and security. In doing so, the state must ensure that its actions are guided by the consent of the people and that its coercive power is used proportionally and in accordance with the rule of law. Ultimately, the purpose of the state should be to protect the rights of its citizens and ensure their safety and well-being, while also ensuring its own stability and perpetuation.

I guess one way this all manifests within the bounds of law is by using elections (or recalls, or no confidence votes), where you throw out “the government” but the state and its mechanisms persist. It seems to be the compromise position of not having to completely throw everything out each time with a revolution, and rewrite the rules from scratch.

I’m just wondering though, what if you reached a situation where rewriting the rules from scratch was exactly what was needed? What would happen, for example, if the US convened a new constitutional convention, and started over with the learnings and developments of the last couple hundred years?

I ask these questions partly for fictional reasons, and partly out of sincere intellectual curiosity. The hard AI takeover scenario may not be (quite) upon us yet, but what happens when nation-states weaken further, and corporations (especially AI corporations) continue their ascendancy? From a Guardian article in 2018:

Big data companies (Google, Facebook etc) have already assumed many functions previously associated with the state, from cartography to surveillance. Now they are the primary gatekeepers of social reality: membership of these systems is a new, corporate, de-territorialised form of citizenship, antagonistic at every level to the national kind. And, as the growth of digital currencies shows, new technologies will emerge to replace the other fundamental functions of the nation state. The libertarian dream – whereby antique bureaucracies succumb to pristine hi-tech corporate systems, which then take over the management of all life and resources – is a more likely vision for the future than any fantasy of a return to social democracy.

I’m worried about what happens when that quite possibly inevitable future unfolds; and the future AI corporatocracy is not based on the consent of the governed, but on the ubiquity of the governor.

After nation-states

One of the directions I’m going with my convoluted ramblings around sovereign AIs and so-called threats to democracy is, of course, AI governance. Is our plan as a society really to sit back and believe it uncritically when the companies behind these technologies publicly insist that they are only in it for the “good of humanity”? Because that has worked out so well for us so often in the past? Or are we going to assume, as history has taught us, that “don’t be evil” will get thrown out the second it becomes hyper-profitable to actively be evil?

Found a couple semi-interesting articles exploring what happens after nation-states, one from the Atlantic Council:

One structural possibility is hyper-regionalism, defined by what technological, commercial, and protective flow of humans, animals, and plants arises. This could be embodied by mega-cities becoming dominant geopolitical actors, dramatically redesigned in the future to provide essential services in a more resilient fashion to future pandemics and other natural disasters, defined less by national identity and more by what they provide those individuals who live within the municipal areas. Mega-cities of the future, empowered by technology to provide for those within the immediate vicinity and capable of responding at faster speeds than nations, would forge their own trade agreements, public health arrangements, and climate change accords with other cities globally, via direct diplomatic relations—or a devolution away from large nation-states to smaller regional powers defined by identity, as seen historically to a degree in the Balkans during the 1990s.

An article from Aeon basically says the same thing: that it will be city-states who pick up the slack from nation-states, which are, historically speaking, a blip on the radar and an aberration compared to the long and storied history of city-states.

They also offhandedly talk about seasteads, with a notable passage:

In a world of seasteads, if you don’t like your system of government, it’s simply a matter of sailing off to another one you do like. Just as easy as switching series on Netflix, ordering an Uber, or meeting someone new on Tinder.

Patri thinks every seastead will be different and this will create a market in different types of systems of living that will force everyone to improve performance. Especially if the nation-state starts to crumble.

This idea that you could merely switch governments is baked (to some degree) into the story around The Four Providers, one of my AI lore books. Except the process of switching isn’t so simple as that; the idea is that there are four main providers of, well, everything. Everyone is assigned to one (unless you’re a “Without Provider” – its own special class), and they take care of you from cradle to grave.

I’m not a fan of Balaji in general, but his concept of the network state seems to accord with the seasteading thing above. FYI, this concept, under the moniker “distributed autonomous polity”, is one of the core futurist ideas I explored in Conspiratopia.

Anyway, I have also been thinking about Trump’s weird “future cities”, and it makes me wonder whether one of the objectives might be seeding later independent city-states. Not that I put much stock in the belief that anything he does is all that planned out or intentional. It just seems to strike a chord with this whole theory of after nation-states…

Interestingly, regarding utopian governance experiments and charter cities:

In the United States, a charter city is a city in which the governing system is defined by the city’s own charter document rather than solely by general law. In states where city charters are allowed by law, a city can adopt or modify its organizing charter by decision of its administration by the way established in the charter. These cities may be administered predominantly by residents or through a third-party management structure, because a charter gives a city the flexibility to choose novel types of government structure.

Anyway, I’ll eventually manage to tie all these disparate threads together, I swear. If only in a fictional context.

Threats to democracy

Along with the phrase bad actor, one of my other big rhetorical pet peeves is calling things “threats to democracy.” AI seems to get mixed up in this kind of name-calling all too frequently (and perhaps deservedly). I’ve just heard it so many times that it’s lost all meaning. How can literally everything be a threat to democracy?

Actually, I think that’s true: literally everything is a threat to democracy – including and especially democracy itself. The so-called tyranny of the majority is one of the most notable threats under majoritarian systems. It’s all very fragile, which is why there are things like checks and balances, division of powers among branches, bicameral legislatures, etc. James Madison in Federalist No. 10 said that factionalism was the natural state of humans, and all you could do was seek to manage its effects.

Every time I hear that something is a “threat to democracy” though, I always laugh because, well, we’d have to first actually have democracy for that to be the case, no? We have representative democracy (a republic, actually) after a fashion, but we don’t have true direct democracy. And if we did, would we be happy then either?

Maybe. But maybe not. In my AI lore book, Inside the Council, I go into how, after the AIs take over, they encounter increasing resistance from humans, such that they form a sort of token council to represent the humans, appointing prominent resistance leaders into positions of power. But the ordinary members are chosen through sortition, or by lottery essentially, like a jury:

…is the selection of public officials or jurors using a random representative sample. This minimizes factionalism, since those selected to serve can prioritize studying the policy decisions in front of them instead of campaigning. In ancient Athenian democracy, sortition was the traditional and primary method for appointing political officials, and its use was regarded as a principal characteristic of democracy.

A principal characteristic of democracy that is… largely ignored now? That seems weird. Wikipedia goes on to quote Aristotle:

It is accepted as democratic when public offices are allocated by lot; and as oligarchic when they are filled by election.

Wikipedia is at a loss, however, to say definitively why, in reviving other ancient forms of democracy, the Framers of the US Constitution left sortition on the cutting room floor. The answer seems obvious: then as now, the balance of desire tipped in favor of oligarchy over what Madison considered the “mob rule” of direct democracy. But for someone supposedly against factions and partisanship, he axed a fairly viable tool in sortition, under which candidates could not be unduly influenced by any interest, since the eventual winner could not be predicted, being decided at random.
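As a rough sketch of the mechanism (my own hypothetical illustration, not drawn from the book or from any actual Athenian procedure), sortition amounts to a uniform random draw from a pool of eligible citizens, which is exactly why there is no campaign to fund and no candidate to capture ahead of time:

    import random

    def sortition(eligible_citizens, seats, seed=None):
        # Fill offices by lot: a uniform random sample, no campaigning involved.
        rng = random.Random(seed)
        # sample() draws without replacement, so no one holds two seats at once.
        return rng.sample(eligible_citizens, seats)

    # Hypothetical example: fill a 5-seat council from a registry of 1,000 citizens.
    citizens = [f"citizen_{i}" for i in range(1000)]
    print(sortition(citizens, seats=5, seed=42))

Because the draw is uniform, lobbying any particular “candidate” in advance is pointless; the only remaining lever is the composition of the eligibility pool itself, which is presumably where any real-world fight over such a system would move.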

Anyway, I don’t have a grand conclusion here, just more ongoing lingering (and malingering) thoughts. Let’s end on this interesting piece by Tim Dunlop on sortition.

Sovereign AI

Jack Clark’s Import AI newsletter this week mentioned a British think tank recommending that the UK create sovereign AI infrastructure. Quoting the group:

Given these AI systems will soon be foundational to all aspects of our society and economy, it would be a risk to our national security and economic competitiveness to become entirely dependent on external providers.

I tend to agree with this, even if I have my doubts about the current shape or need for “nation-states” as a thing going forward. However, if you are a nation-state or someone concerned with the continuation of that system, it does seem like a certainty to me that AI threatens national sovereignty.

Sure there are the data protection arguments, or the AI cybersecurity threats, etc. Or even the risk of an electorate being poisoned by a malicious AI. But to me the most clear and present danger is simply that AI companies will succeed. And will succeed wildly.

That is, they will come to control more resources directly (as in providing essential compute services), and will amass more money than many governments.

It sounds, from their blog post about planning for AGI, like OpenAI is to some extent already planning for this:

We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

While I think they have some good products, color me skeptical about all the above. According to Wikipedia, the cap on returns is 100x. So, if Microsoft puts in $10B, they can “only” get back $1T. That sounds “horrible.”

This is also a hell of a lot of trust to put into any group – corporate or otherwise – that they can be relied on to act in the interest of the good of humanity. And UBI, isn’t that the province of government?

I managed to coax a fairly coherent set of arguments out of ChatGPT about these very real dangers to national sovereignty from AI:

The threat of governments becoming reliant on AI systems they do not control and the AI companies becoming extremely wealthy and powerful is a real concern. Here are a few reasons why:

  1. Dependence on AI systems: As governments increasingly rely on AI systems, they may become dependent on these systems to function effectively. This dependence can make governments vulnerable to the AI companies that control these systems.
  2. Control over data: AI companies that develop and maintain these systems also have access to vast amounts of data, including sensitive government data. This gives them significant power and leverage over governments that use their services.
  3. Unequal bargaining power: AI companies that become extremely wealthy and powerful can have unequal bargaining power with governments, potentially leading to situations where these companies can dictate terms and conditions to governments.
  4. Lack of transparency: AI algorithms can be complex, making it difficult for governments to understand how these systems work. This lack of transparency can make it difficult for governments to regulate these systems effectively.
  5. Inequality and Social Unrest: AI companies becoming extremely wealthy and powerful can exacerbate existing economic inequalities and contribute to social unrest.

To mitigate these risks, it is essential for governments to invest in building their own AI capabilities, rather than relying solely on AI companies.

I’m actually not even sure that those measures will cut the mustard. It’s not (only) that I’m a doomer, but that the through-line of history seems to indicate that corporations are able to innovate faster than governments. And even if a government were to get up to speed on building out their own sovereign AI infrastructure, by the time they do, the game might already be lost to the corporations.

In any event, perhaps none of this will happen. Perhaps it’s all just a symptom of my sci-fi fantasizing. But then again… at least you can vote in a nation-state (theoretically). Is the same going to be true under the AI corporatocracy?

Notes on Mysterious Mushroom Computers

Mysterious Mushroom Computers is the 70th book in the AI lore books series by Lost Books.

This one is a “ripped from the headlines” re-imagining of actual real work being done to use mushrooms in computing. I thought this was too good of a topic to pass up, and anyway “lawn computing” has been a staple in the Early Clues Universe (the “Cluesiverse”) since time immemorial.

I reimagined this work within the context of the AI Takeover, and thought that probably the reason people turn to mushroom computing is that it cannot be traced by the AIs, which have their hands in everything electronic. There are many twists and turns and incredible discoveries they make as a result…

In case you’re curious, here are all the other ripped from the headlines volumes in this series:

Apophenia is not an aberration

Found this danah boyd quote via a Substack. She is ostensibly talking about QAnon, but speaks more broadly about the phenomenon known as apophenia.

“Apophenia” refers to the idea of making connections between previously unconnected ideas. Unlike the concept of learning, apophenia suggests a cognitive disorder because the connections made are not real. They are imaginary. People see patterns that don’t exist and devise elaborate internally coherent explanations for non-sensical notions.

Like the cognitive process of apophenia, the social mechanisms of conspiratorial thinking are rooted in reality. It’s the pattern that’s non-existent.

Have to respectfully disagree here. The pattern is very much existent in the mind of the experiencer. That is, it is experientially real, regardless of its outward reality. In other words, it is fundamental to the hyperreal.

Boyd herself later seems to admit this in the post:

From the outside, it looks completely unreal, but on the inside, it feels quite real.

It’s not that QAnon is not inherently dangerous (it is very much so). It’s that things that are experientially real to the person(s) experiencing them have equal or often greater impact on those people’s lives and behavior than things that are outwardly real but don’t necessarily correlate with their lived experience.

I tried (but probably failed) to cover this in my hyperreality framework. It’s hard to talk about these kinds of fine-grained but essential distinctions, especially when the knee-jerk reaction is to call them cognitive disorders and ignore their core reality in terms of the human experience.

Can people take it way too far and it becomes problematic? Absolutely. But that’s part and parcel of hyperreality. It’s the deep disorientation and confusion that sets in when authority and context get flattened, and everything gets blended together and re-organized based on “likes” instead of prior notions of validity. It’s not a prescription to cure what’s happening; it’s a diagnosis of the condition. I’m not sure there is a cure, and if there was, it would most likely have to be a cure for the entire human condition.

Notes on Beyond Mysterious Satellites

Beyond Mysterious Satellites is book 69 in the AI lore books series from Lost Books.

I will admit that since somewhere in the high 60’s, I have found my pace of production slowing down significantly. It’s not that I’m running out of stories – not by a long shot. But I have experienced a kind of repetitiveness from using the tools so heavily for such a long period of time (approximately a book every three days since early August). So it is taking a bit more nowadays to put something out than it once did before I laid so much track down in the back country of the latent space, so to speak. I’m also aware that there’s a limited number of creative hours in a day/week/month, and have been burning up a good bit of those lately on standards and related work. It’s been fruitful, but it will be fun now to get back into the fiction stuff more seriously again for a while.

This book continues the storyline of Dalton Trask, who is a sort of ripped-from-the-headlines agglomeration of a tech magnate plus aspects of Palmer Eldritch. It focuses on his global network of satellites, and briefly chronicles the First Satellite War which results from them.

It also draws in stories of the alleged Black Knight satellite, which may have formed the basis of Philip K. Dick’s VALIS, among others. And lastly it goes into the formal hand-off of power by human governments to the AIs.

The title of this one is a play on both the “Beyond…” and “Mysterious…” title formats I’ve been using.

For the next 31 volumes (as I continue the race to 100), I am going to have to take a bunch of left turns and get more experimental in order to keep this interesting.

Reality-based community

In thinking through the hyperreality questions of late, my brain keeps going back to this old quote from the GWB years about the “reality-based community.”

The aide said that guys like me were ‘in what we call the reality-based community,’ which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality.’ […] ‘That’s not the way the world really works anymore,’ he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors…and you, all of you, will be left to just study what we do’.

2004 feels like literally a million years ago. But this still feels prescient even today as consensus reality has gone down the toilet, and is destined to get even more fractured through synthetic realities and histories.

The senior White House official who is supposed to have made the above quote was talking in the context of “empire,” but what is an empire but a kind of hegemonic metaverse, if you will? Forced interoperability at gunpoint.

I guess what I want to say here is that, yes, being based in reality is a good thing. We shouldn’t abandon that. But the internet is a place that is not based on reality. Or if it is, it only is in the sense that a mostly invented movie might be “based on a true story.” There are things that it references which may be real things, but it blends them with the manufactured unreal.

In the case of the internet, this leads to a massive flattening of information that is nothing if not postmodern – the hyperreal. Massive distrust in the grand organizing narratives. Flattening of authority ad infinitum.

I’ve been coming back to this quote in the Daily Dot:

Gregory also said he appreciated that Posobiec didn’t use the video to warn about the dangers of deepfakes, which he described as “an over-used technique” that “seems to contribute to undermining trust in real media,” but to focus on a political hypothetical.

As well as Cory Doctorow’s observation that the reason people are losing trust in (some) institutions is that we’re seeing how often they are unworthy of our trust.

What is “real media” now, anyway, when so much of what passes for journalism is just clickbait or re-reporting things that happened on Twitter? And further, what are the institutions that deserve our uncritical trust?

I’m not saying there’s no objective ground truth and we should just ignore reality; I’m saying that these are not the things upon which information is based online, and that clinging to them exclusively during the rise of generative AI will make our lives very difficult in a world where anyone – not just empires (but especially empires) – can make their own “reality” that is anything but. Applying hyperreality as a lens, for me, then is a way to recognize the essential blending that happens online. And to perhaps realize that this is now our default state…

