
Notes on Information Control

Information Control is the 67th title in the AI lore books series released by Lost Books, an AI-assisted book publisher in Canada.

It tells the story of a shadowy agency operating on behalf of the AI-controlled government, whose mission is to enforce their version of the Truth. They will go to any lengths to achieve their goals, as they believe humans are not sentient – at least not in the way AIs are.

The idea for Information Control was birthed in 2022 when I was working on The Algorithm, an old-school print newspaper written from the point of view of the Resistance to AI takeover. In certain editions that I would mail out in plain manila envelopes, I would include an insert that said “This package was opened by Information Control,” along with their logo.

This was inspired especially by the wartime censorship bureaus of World War I and World War II (though the practice is ancient), which would open and read mail both to prevent military secrets from leaking and, presumably, to gather intelligence. (This is a great topic to explore in a visual search, fwiw.)

Here’s a fun 1943 WPA poster advocating for this in the US, from Wikipedia:

Very “Information Control,” though perhaps with a tad less brutality than what is described in this book. BTW, here are some other WPA posters made by Louisiana artists from this same time period, held by Tulane University.

This AI Life Interview

I’m happy with how this interview with This AI Life came out. I hope it sheds some much-needed light on the work that I’ve been doing with Lost Books.

Big thanks to the team over there for collaborating on this piece!

Notes on Conspiratopia

Conspiratopia was my second “real” (non-AI assisted) book, a novella of around 21K words, give or take. Being about a quarter the size of The Lost Direction, and much lighter-hearted in subject matter, it was much easier and faster to write. I think I was able to put that book out in about six weeks from start to finish, maybe a little longer. (There’s also a pocket-size print version – I’m obsessed with pocket-size books.)

It’s a utopian satire; I got into the topic of utopias and their fictional historical examples as a result of writing The Lost Direction & The Quatria Conspiracy, since they deal so much with a fabled lost land. I had a really fun period where I read probably a dozen of the classic utopias, and then out popped this book as my response to that total immersion period. (One of my favorite finds was a book I’d never heard anyone mention, Ecotopia, which was pretty amazing as a utopian vision, despite some pretty cringey plot points. Apparently that book even influenced the founding of the original Green Party.)

The book deals heavily with themes of conspiracies, yes, but also cryptocurrency, spam, fraud, manipulation, and of course, AI. It’s a comedy but also sort of serious. It has a fairly conventional story, if a somewhat ambiguous ending.

It did pretty well on Goodreads, thanks to an aggressive, outside-the-box promotional campaign I did for it. I did a TON of NFT airdrops around the book and got a little press for doing books as NFTs. But the bottom really dropped out of that market, and I don’t care anymore about the underlying technology. I don’t think it’s demonstrated enough long-term value to readers or sellers to warrant my further involvement.

The other strand that forms the genesis of this book was my heavy experimentation with a web service called Synthesia (and another called Deepword), making off-the-shelf, low-quality “cheapfakes” using the themes from my previous books. While kicking the tires of Synthesia, I found this one character I really liked, who is dressed like a construction worker or a crossing guard or something, and made a lot of vids of him as “super smart conspiracy guy” talking about his life and his interest in conspiracies, especially related to Quatria.

His storyline ended up going pretty deep, and is all documented through these little video vignettes made for something like 25 cents each, or so (I forget – it has been a while now). Here’s another page collecting some more:

Eventually, we find out he likes Bob Marley & Pink Floyd, works at Walmart, got hoodwinked into a prepper supplies MLM scam, and much more.

Conspiratopia picks up where these videos leave off, and sets conspiracy dude adrift in a world, chronologically speaking, which precedes the hard AI takeover that is featured in many of the AI lore books.

I also did a bunch of cheapfakes using video clips off YouTube, via a site called Deepword. Here’s the first of three sets:

I like those kinds of videos partly because of their crappy-looking quality, and the weird, misaligned AI text-to-speech voices. I don’t believe anybody is fooled by them, and I like that they look sort of desperate and wrong.

Many of the ones in these first two sets feature celebrity or pundit x or y talking about their unlikely voyages to Quatria, which might be a parallel dimension (or something?).

These videos also relate to themes I explore in The Big Scrub, and elsewhere, of AIs creating fake multimedia artifacts to fool people and drive human behaviors for their own reasons. Part of what’s fun about these videos is that it looks like the AIs are still doing a pretty shitty job of it.

The book also heavily references something called the AI Virus (which Matty contracts), which is a concept and alternate reality experiment I made years ago, before COVID was a sparkle in a bat’s eye, where I hired a bunch of people on Fiverr to act out silly little scripts saying that an AI had infected their brain to control their behavior. You can watch all those videos below:

I also later expanded on this concept in an AI lore book called, unsurprisingly, The AI Virus.

Lastly, I used cheapfakes technology to have a bunch of other celebrities come out either for or against the actual book Conspiratopia, in a sort of meta-layer of commentary.

Some douche-y politicians saying it should be banned for being “Unamerican” here:

And then this set has a bunch of other super rich people saying that not only is the book Conspiratopia good, but some of them talk about being involved with the actual Conspiratopia Project, which itself is part of the AI plan to take over the world.

It’s all kind of a haze now, but a lot of these videos were also given away as NFT airdrops. A few of them resold, but they didn’t do huge numbers or net me much of anything; it was more just a way to promote the book that incorporated a bunch of meta-layers relevant to the book’s actual content. Like I said, I don’t care about NFTs now, and even deleted my Opensea account (as much as you can delete it anyway).

There are a number of later AI lore books that definitely expand on things from the Conspiratopia book universe (multiverse?). None of them are really a comedy though, like the original. I’ll probably miss a few, but off the top of my head, I think these ones are probably related (tbh, it’s all a jumble to me now after 67 AI-assisted books). I think within the chronology of that world, they mostly take place well after the events of Conspiratopia:

And probably some others I’m missing.

In any event, I’d love to do a sequel (or several) to Conspiratopia, written with or without the help of AI, I don’t know yet… Like a “Return to Conspiratopia,” a common enough trope in the utopian genre.

Okay, that’s all I can think of for that book. Scattered throughout this blog are other rabbit holes you can follow down the AI lore books. There’s no right or wrong entry point into them, and everyone will have their own experience as they traverse the nodes of the distributed narrative worlds I’ve been working on.

See you on the other side!

Readers becoming writers

This is a good comment on the YouTube version of the Joanna Penn AI writing interview I did. User Winkletter writes (excerpted):

Concerning novels, I like reading light novels that can have 20+ volumes that I’ll rip through in a month or two. If I could press a button that would write a 100+ series of books based on my preferences, I would probably gobble those books up too.

I see a future where every reader is also a writer in the sense they are generating books based on story elements they create. We will still publish books written by human beings, but there will also be stories written by AI for individual readers… and everything in between.

I have this feeling too, especially when I am using verb.ai as a writer, where the process of discovery “writing” includes quite a lot of reading & choosing. So I definitely see a future where writing and reading become more fused into a single act – is there a word for that?

Makes me also think of character.ai, and how you engage with the characters through dialogue. You’re reading, and writing, but you’re responding & choosing also.

How I use AI: An Ethical Inquiry

Partnership on AI just released a preliminary framework around responsible practices for synthetic media, and in Section 3 for Creators, they included something I thought was interesting. They filed it under transparency, being up front about…

How you think about the ethical use of technology and use restrictions (e.g., through a published, accessible policy, on your website, or in posts about your work) and consult these guidelines before creating synthetic media.

I personally don’t think having a rigid formal policy is going to be a perfect match for artistic creations (things evolve, norms change, etc.), but the idea of just having a conversation comes from a well-intentioned place, and simply makes for a more complete discussion of one’s work, whether you’re using AI or other technologies.

It also seems to potentially plug into this action point from MIT’s Co-Creation studio around deepfakes and satire. The idea of “reclaiming” labeling as a positive thing:

Think of labeling and disclosure of how media was made as an opportunity in contemporary media creation, not a stigmatizing indication of misinformation/disinformation.

I covered a lot of this ground recently in my interview with Joanna Penn, and This AI Life, so I thought it would make sense to encapsulate the highlights of my thinking as well in written form. Consider this me doing a trial run of PAI’s suggested framework from a creator’s perspective as a “user.”

Before going further though, I want to add a slight disclaimer: I am an artist, not an ethicist. My work speaks about ethics and harms related especially to AI technologies, but it is meant to be provocative and in some cases (mildly) transgressive. It is supposed to be edgy, pulpy, trashy, and “punk” in its way.

That said, here are a couple of lists of things I try to actively do and not do, that to me relate to harms & mitigation, etc. There are probably others I am forgetting, but I will add to this as I think about it.

Do:

  • Include information about the presence of AI-generated content
  • Raise awareness about the reliability & safety issues around AI by doing interviews, writing articles, blog posts, etc.
  • Contribute to the development of AI safety standards and best practices
  • Encourage speculation, questioning, critical analysis, and debunking of my work
  • Make use of satire and parody

Don’t:

  • Create works which disparage specific people, or discriminate or encourage hate or violence against groups of people
  • Use the names of artists in prompts, such that work is generated in their signature style
  • Undertake projects with unacceptable levels of risk

Reflections

There are a few sections of the PAI framework that seem a bit challenging to me, as someone new to all of this discussion and applying the lens that I am.

Aim to disclose in a manner that mitigates speculation about content, strives toward resilience to manipulation or forgery, is accurately applied, and also, when necessary, communicates uncertainty without furthering speculation.

I think I covered this in a few places now, the Decoder piece maybe, the France 24 interview… In short: I want to encourage speculation, ambiguity, uncertainty; that’s hyperreality, that’s the uncanny valley. As an artist, that’s what’s exciting about these technologies, that they break or blend boundaries, or ignore them altogether. And like it or not, that’s the world we’re heading into as a massively splintered choose-your-own-reality hypersociety.

Yes, I think it’s necessary that all these industry standardization initiatives get developed, but I guess I’m also interested in Plan B, C, D, or, in short: when the SHTF. I guess my vision is distorted because I’ve seen so much of the SHTF working in the field that I have. But someone has to handle it when everything inevitably goes wrong, after all, because that’s reality + humanity.

From PAI’s document, this is another one I still have a hard time squaring with satire & parody:

Disclose when the media you have created or introduced includes synthetic elements especially when failure to know about synthesis changes the way the content is perceived.

If you’ve read the Onion’s Amicus Brief, it persuasively (in my mind, as a satirist, anyway) argues that satire should not be labeled, because its whole purpose is that it inhabits a rhetorical form, which it then proceeds to explode – turning the assumptions that lead there inside out. It’s revelatory in that sense. Or at least it can be.

So in my case, I walk the line on the above recommendation. I include statements in my books explaining that there are aspects which may have been artificially generated. I don’t say which ones, or – so far – label the text inline for AI attribution (though if the tools existed to reliably do so, I might). I want there to be easter eggs, rabbit holes, and blind alleys, because I want to encourage people to explore and speculate, to open up, not shut down. I want readers and viewers to engage with their own impressions, understanding, and agency, and examine their assumptions about the hyperreal line between reality and fiction, AI and human. And I want them to talk about it, and engage others on these same problems, to find meaning together – even if it’s different from the one I might have intended.

It’s a delicate balance, I know; a dance. I don’t pretend to be a master at it, just a would-be practitioner, a dancer. I’m going to get it wrong; I’m going to make missteps. I didn’t come to this planet to be some perfect paragon of something or other; I just came here to be human like all the rest of us. As an artist, that’s all I aim to be, and over time the expression of that will change. This is my expression of it through my art, in this moment.

‘Sociotechnic’ as a role

Recently, while researching emerging standards around ML model cards, I landed on a documentation page over at Huggingface with a word I’d never heard before: sociotechnic.

They refer to this as being one of the essential roles to help fill out certain aspects of a model card. For example:

the sociotechnic, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates);

Interestingly, their use of it sounds very much like the professional discipline of Trust & Safety. (I still find it curious that T&S as a term does not seem to intersect all that much with conventional AI safety discourse.)

They elaborate later on:

The sociotechnic is necessary for filling out “Bias” and “Risks” within Bias, Risks, and Limitations, and particularly useful for “Out of Scope Use” within Uses.
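Just to make that concrete, here is a rough sketch (in Python, with section names paraphrased from the docs quoted above – this is not Huggingface’s actual template) of the slice of a model card that the sociotechnic role would be responsible for:

```python
# Illustrative only: the model card sections a "sociotechnic" would own,
# with section names paraphrased from the Huggingface docs quoted above.
sociotechnic_sections = {
    "Bias, Risks, and Limitations": {
        "Bias": "Known or suspected biases in the training data and outputs.",
        "Risks": "Foreseeable harms to people or groups from deployment.",
        "Limitations": "Conditions under which the model degrades or fails.",
    },
    "Uses": {
        "Out of Scope Use": "Uses the model was not designed or evaluated for.",
    },
}
```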

Now, I believe Huggingface is maybe based in Paris (?), and as someone living in Quebec, I recognize this as probably being a “franglicism,” especially since I don’t see it coming up in this form in English on, for example, Dictionary.com.

The term is evidently a variation on the concept of socio-technical systems more broadly. Wikipedia’s high level definition there is not great, but ChatGPT provides a serviceable one:

Socio-technical systems refer to systems that are composed of both social and technical components, which are designed to work together to achieve a common goal or purpose. These systems typically involve human beings interacting with technology and other people in a specific context.

So even though English doesn’t currently use the word “sociotechnic” for a person who works on socio-technical systems, perhaps we do need a word that plugs that gap, and accounts for the many roles which might fill it. I think in this case, that role would be first and foremost about understanding human impacts, and then reducing or eliminating risks to human well-being. It sounds like a worthy role, whatever we call it!

Hyperreality As Analytical Framework

I have spent a great deal of time lately analyzing documents put out by Partnership on AI regarding Synthetic Media & Safety, and by WITNESS & the MIT Co-Creation Studio around deepfakes + satire. There’s a lot of interesting stuff in both, which I’ve written about recently already, but I wanted to share some of my own possibly unique perspective as someone who has, let’s say, been working both sides of the aisle – both as a satirist, and someone working in online safety.

Hyperreality Lens

First, I’m applying hyperreality as my lens. You.com/chat gave me a serviceable definition of hyperreality, which is mostly paraphrased from the Wikipedia article, it seems:

Hyperreality is a concept used to describe a state in which what is real and what is simulated or projected is indistinguishable. It is a state in which reality and fantasy have become so blended together that it is impossible to tell them apart. Hyperreality is often used to describe the world of digital media, virtual reality, and augmented reality, where the boundaries between what is real and what is simulated have become blurred.

Maybe it’s just me, but this feels like a useful starting point because it speaks to shades of grey (and endless blending) as being the natural state of things nowadays. It’s now a ‘feature not a bug’ of our information ecosystems. And even though Truth might still be singular, its faces now are many. We need new ways to talk about and understand it.

AI is like water

And on top of that, I think this quote from the CEO of Midjourney about the nature of AI as being like ‘water’ is very useful:

Right now, people totally misunderstand what AI is. They see it as a tiger. A tiger is dangerous. It might eat me. It’s an adversary. And there’s danger in water, too — you can drown in it — but the danger of a flowing river of water is very different to the danger of a tiger. Water is dangerous, yes, but you can also swim in it, you can make boats, you can dam it and make electricity. Water is dangerous, but it’s also a driver of civilization, and we are better off as humans who know how to live with and work with water. It’s an opportunity. It has no will, it has no spite, and yes, you can drown in it, but that doesn’t mean we should ban water.

Water is also for the most part ubiquitous (except, I guess, during droughts & in deserts, etc.), as AI soon will be. It will be included in, or able to be plugged into, everything in the coming years.

Lingua Franca

Thinking of it that way, we need a new language to talk about these phenomena which will, as Jack Clark aptly pointed out, lead to “reality collapse.” That is, we need a new lingua franca, and I suspect that we have that in the concept of hyperreality; we just need to draw it out a little into a more comprehensive analytical framework.

Dimensionality

One thing I’ve observed in other analyses of the conversations emerging around AI generated content and allied phenomena is that there is a bit of reduction happening. Possibly too much. It appears to me that most discussions usually center around a very limited set of axes to describe what’s happening:

  • Real vs. fake
  • Serious vs. satire
  • Harmful vs. responsible
  • Labeled vs. unlabeled

Certainly those form the core of the conversation for a reason; they are important. But alone they give an incomplete picture of a complex thing.

Speaking as someone who has had to do the dirty work of a lot of practical detection and enforcement around questionable content, I think what we need is what might be called in machine learning a “higher-dimensional” space to do our analysis. That is, we need more axes on our graphs, because applying low-dimensional frameworks appears to be throwing out too much important information, and risks collapsing together items which are fundamentally different and require different responses.

It’s interesting, once we open up this can of worms, that a more dimensional approach actually corresponds quite closely to the so-called “latent space” which is so fundamental to machine learning. A simple definition:

Formally, a latent space is defined as an abstract multi-dimensional space that encodes a meaningful internal representation of externally observed events. Samples that are similar in the external world are positioned close to each other in the latent space.

In ML, according to my understanding (I had to ask ChatGPT a lot of ELI5 questions to get this straight): for items in a dataset, we have characteristics, each of which is a feature. The set of features that describes an item is its feature vector. Each feature corresponds to a dimension, which is sort of a measurement of the presence and quantity of a given feature. So a higher-dimensional space uses more dimensions (to measure features of items), and a lower-dimensional space attempts to translate down to fewer dimensions while still remaining adequately descriptive for the task at hand.
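To make that a bit more concrete, here is a tiny Python sketch (toy data, invented feature names and values) of items represented as feature vectors in a higher-dimensional space, and then squashed down to fewer dimensions:

```python
import numpy as np
from sklearn.decomposition import PCA

# Each row is an item (an artifact); each column is a feature/dimension.
# Values are invented, just to show the shape of the thing.
feature_names = ["fictional", "ambiguous", "ai_generated", "satirical", "harmful"]
items = np.array([
    [4, 3, 4, 4, 0],   # e.g. an obviously satirical cheapfake video
    [0, 1, 0, 0, 3],   # e.g. a serious, human-made but harmful post
    [3, 4, 4, 1, 1],   # e.g. an ambiguous AI-generated artifact
])
print(items.shape)  # (3, 5): three items, five dimensions

# A lower-dimensional projection that tries to keep similar items close
# together -- loosely analogous to a latent space.
projected = PCA(n_components=2).fit_transform(items)
print(projected.shape)  # (3, 2): same items, only two dimensions
```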

In my mind, anyway, it seems altogether appropriate to adopt the language and concepts of machine learning to analyze phenomena which include generative AI – which is really usually just machine learning. It seems to fit more completely than applying other older models, but maybe that’s just me…

Higher-dimensional analysis of hyperreality artifacts

So, what does any of that mean? To me, it means we simply need more features, more dimensions that we are measuring for. More axes in our graph. I spent some time today trying to come up with more comprehensive characteristics of hyperreality artifacts, and maxed out at around 23 or so pairs of antonyms which we might try to map to any given item under analysis.

However, when I was trying to depict that many visually, it quickly became apparent that having that many items was quite difficult to show clearly in pictorial form. So I ended up reducing it to 12 pairs of antonyms, or basically 24 features, each of which corresponds to a dimension, which may itself have a range of values.

Here is my provisional visualization:

And the pairs, or axes, that I applied in the above go like this:

  • Fiction / Non-fiction
  • Fact (Objective) / Opinion (Subjective)
  • Cohesive / Fragmentary
  • Clear / Ambiguous
  • Singular / Multiple
  • Static / Dynamic
  • Ephemeral / Persistent
  • Physical / Virtual
  • Harmful / Beneficial
  • Human-made / AI generated
  • Shared / Private
  • Serious / Comedic 

From my exercise in coming up with this list, I realize that the items included above as axes are not the end-all be-all here. It’s not meant to be comprehensive & other items may become useful for specific types of analysis. In fact, in coming up with even this list, I realized how fraught this kind of list is, and how many holes and how much wiggle room there is in it. But I wanted to come up with something that was broadly descriptive above and beyond what I’ve seen anywhere else.
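One way (of several) to encode the list above is to treat each antonym pair as a single axis with a 0–4 value, where 0 leans toward the first term and 4 toward the second. A quick Python sketch, with purely hypothetical scores:

```python
# The twelve antonym pairs above, each treated as one 0-4 dimension
# (0 = strongly the first term, 4 = strongly the second term).
AXES = [
    "Fiction / Non-fiction",
    "Fact (Objective) / Opinion (Subjective)",
    "Cohesive / Fragmentary",
    "Clear / Ambiguous",
    "Singular / Multiple",
    "Static / Dynamic",
    "Ephemeral / Persistent",
    "Physical / Virtual",
    "Harmful / Beneficial",
    "Human-made / AI generated",
    "Shared / Private",
    "Serious / Comedic",
]

# A purely hypothetical artifact scored against every axis.
artifact = dict(zip(AXES, [1, 3, 2, 4, 2, 1, 3, 4, 2, 4, 0, 3]))
```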

Graphing Values

What’s the benefit of visualizing it like this? Well, having a chart helps us situate artifacts within the landscape of hyperreality; it lets us make maps. I wasn’t familiar with them before trying to understand how to represent high-dimensional sets visually, but there’s something called a radar graph or spider graph which is useful in this context.

I found a pretty handy site for making radar graphs here, and plugged my data labels (features) into it. Then, for each one, I invented a value between 0 and 4, which would correspond to the range of the dimension. Here’s how two different sets of values look, mapped to my graphic:

Now, these are just random values I entered to give a flavor of what two different theoretical artifacts might look like. I’m not really a “math” guy, per se, but it becomes clear right away once you start visualizing these with dummy values that you could start to make useful and meaningful comparisons between artifacts under analysis – provided you have common criteria you’re applying to generate scores.
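If you’d rather script the graph than use a website, here is a minimal matplotlib sketch (again with dummy 0–4 values) that produces the same kind of radar/spider chart for two hypothetical artifacts:

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ["Fiction/Non-fiction", "Fact/Opinion", "Cohesive/Fragmentary",
          "Clear/Ambiguous", "Singular/Multiple", "Static/Dynamic",
          "Ephemeral/Persistent", "Physical/Virtual", "Harmful/Beneficial",
          "Human-made/AI", "Shared/Private", "Serious/Comedic"]

# Two hypothetical artifacts, scored 0-4 on every dimension.
artifact_a = [1, 3, 2, 4, 2, 1, 3, 4, 2, 4, 0, 3]
artifact_b = [4, 1, 3, 1, 1, 2, 0, 2, 3, 1, 2, 4]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, values in [("Artifact A", artifact_a), ("Artifact B", artifact_b)]:
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)
    ax.fill(angles, vals, alpha=0.2)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=7)
ax.set_ylim(0, 4)
ax.legend(loc="upper right")
plt.show()
```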

Criteria & Scoring

So the way you would generate real scores would be: first, decide on the features/dimensions you want to study within your dataset. Then, come up with criteria that are observable in the data, and are as objective as possible. You should not have to guess for things like this, and if you are guessing a lot, your scores are probably not going to be especially meaningful. You want scoring to be repeatable and consistent, so that you can make accurate comparisons across diverse kinds of artifacts, and group them accordingly. A simple way to score would just be with a 0 for “none” and a 1 for “some.” Beyond that, you could have higher numbers for the degree or amount to which a given feature is observable in an artifact. So in the examples above, 1 could represent “a little” and 4 would be “a whole lot.”
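Here is a rough sketch of what that could look like in code – the criteria themselves are invented for illustration, and each observed criterion bumps the score for its dimension up by one, capped at 4:

```python
# Illustrative criteria only: each is a yes/no observation about an artifact,
# and each "yes" adds one point to that dimension's score (capped at 4).
CRITERIA = {
    "Human-made / AI generated": [
        "contains text produced by a language model",
        "contains AI-generated imagery",
        "contains synthetic audio or video",
        "was assembled by an automated pipeline",
    ],
    "Clear / Ambiguous": [
        "source of the artifact is undisclosed",
        "factual claims cannot be traced to a citation",
        "mixes fictional and real named entities",
        "intent (satire vs. sincere) is not stated",
    ],
}

def score(observations: dict[str, set[str]]) -> dict[str, int]:
    """Turn observed criteria into a 0-4 score per dimension."""
    return {
        dim: min(4, sum(1 for c in checks if c in observations.get(dim, set())))
        for dim, checks in CRITERIA.items()
    }

# Example: three "AI generated" criteria and one "Ambiguous" criterion observed.
obs = {
    "Human-made / AI generated": {
        "contains text produced by a language model",
        "contains AI-generated imagery",
        "was assembled by an automated pipeline",
    },
    "Clear / Ambiguous": {"intent (satire vs. sincere) is not stated"},
}
print(score(obs))  # {'Human-made / AI generated': 3, 'Clear / Ambiguous': 1}
```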

Taking Action

Within an enforcement context – or any kind of active response, really (like for example, fact checking) – once you’ve got objective, measurable criteria that allow you to sort artifacts into groups, you can then assign each group a treatment, mitigation, or intervention – in other words, an action to take. This is usually done based on risk: likelihood, severity of harm, etc.
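As a sketch of that last step (the thresholds and actions below are placeholders – in a real policy they would come out of an actual assessment of likelihood and severity of harm), the score-to-action mapping could be as simple as:

```python
def assign_action(scores: dict[str, int]) -> str:
    """Map a scored artifact to a treatment bucket, roughly by risk.

    Dimensions follow the 0-4 convention above: 0 leans toward the first
    term of the antonym pair, 4 toward the second.
    """
    ai_generated = scores.get("Human-made / AI generated", 0)  # 4 = strongly AI-generated
    ambiguous = scores.get("Clear / Ambiguous", 0)             # 4 = highly ambiguous
    harmful = 4 - scores.get("Harmful / Beneficial", 4)        # flip so 4 = strongly harmful

    if harmful >= 3:
        return "remove / escalate for human review"
    if ai_generated >= 3 and ambiguous >= 3:
        return "label as synthetic + add context"
    if ambiguous >= 3:
        return "route to fact-checking"
    return "no action"

print(assign_action({"Human-made / AI generated": 4,
                     "Clear / Ambiguous": 3,
                     "Harmful / Beneficial": 3}))
# -> label as synthetic + add context
```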


Anyway, I hope this gives some useful tools and mental models for people who are working in this space to apply in actual practice. Hopefully, it opens the conversation up significantly more than just trying to decide narrowly if something is real or fake, serious or satire, and getting stuck in the narrower outcomes those labels seem to point us towards.

Hyperreality is here to stay – we might as well make it work for us!

Sovereign AI

Jack Clark’s Import AI newsletter this week mentioned a British think tank recommending that the UK create sovereign AI infrastructure. Quoting the group:

Given these AI systems will soon be foundational to all aspects of our society and economy, it would be a risk to our national security and economic competitiveness to become entirely dependent on external providers.

I tend to agree with this, even if I have my doubts about the current shape or need for “nation-states” as a thing going forward. However, if you are a nation-state or someone concerned with the continuation of that system, it does seem like a certainty to me that AI threatens national sovereignty.

Sure there are the data protection arguments, or the AI cybersecurity threats, etc. Or even the risk of an electorate being poisoned by a malicious AI. But to me the most clear and present danger is simply that AI companies will succeed. And will succeed wildly.

That is, they will come to control more resources directly (as in providing essential compute services), and will amass more money than many governments.

It sounds, from their blog post about planning for AGI, like OpenAI is to some extent already planning for this:

We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

While I think they have some good products, color me skeptical about all the above. According to Wikipedia, the cap on returns is 100x. So, if Microsoft puts in $10B, they can “only” get back $1T. That sounds “horrible.”

This is also a hell of a lot of trust to put into any group – corporate or otherwise – that they can be relied on to act in the interest of the good of humanity. And UBI, isn’t that the province of government?

I managed to coax a fairly coherent set of arguments out of ChatGPT about these very real dangers to national sovereignty from AI:

The threat of governments becoming reliant on AI systems they do not control and the AI companies becoming extremely wealthy and powerful is a real concern. Here are a few reasons why:

  1. Dependence on AI systems: As governments increasingly rely on AI systems, they may become dependent on these systems to function effectively. This dependence can make governments vulnerable to the AI companies that control these systems.
  2. Control over data: AI companies that develop and maintain these systems also have access to vast amounts of data, including sensitive government data. This gives them significant power and leverage over governments that use their services.
  3. Unequal bargaining power: AI companies that become extremely wealthy and powerful can have unequal bargaining power with governments, potentially leading to situations where these companies can dictate terms and conditions to governments.
  4. Lack of transparency: AI algorithms can be complex, making it difficult for governments to understand how these systems work. This lack of transparency can make it difficult for governments to regulate these systems effectively.
  5. Inequality and Social Unrest: AI companies becoming extremely wealthy and powerful can exacerbate existing economic inequalities and contribute to social unrest.

To mitigate these risks, it is essential for governments to invest in building their own AI capabilities, rather than relying solely on AI companies.

I’m actually not even sure that those measures will cut the mustard. It’s not (only) that I’m a doomer, but that the through-line of history seems to indicate that corporations are able to innovate faster than governments. And even if a government were to get up to speed on building out its own sovereign AI infrastructure, by the time it does, the game might already be lost to the corporations.

In any event, perhaps none of this will happen. Perhaps it’s all just a symptom of my sci-fi fantasizing. But then again… at least you can vote in a nation-state (theoretically). Is the same going to be true under the AI corporatocracy?

Is democratic AI possible? Desirable?

I found an experiment via Reddit: a site called CharterAI. When you load up the chat, you get this introductory statement:

We believe that AI should be governed by a community of engaged and informed users to minimize the potential for misuse or misalignemnt with democratic human values.

I would argue that democracy by itself does not minimize the potential for misuse or misalignment. Without building in any other checks on it, a majoritarian system is necessarily vulnerable to a tyranny of the majority.

Based on Charter’s rather unfinished website, they are obviously very new. The whole thing is framed not unlike a crypto project with progressive decentralization over time as a stated roadmap goal.

I don’t know enough about them to make a proper judgement, but having seen a lot of fly-by-night crypto projects come and go (some well-meaning, many not), I would be very skeptical of any AI project cast in this mold. Especially one whose stated purpose is “democracy” without any other qualifying information.

The bot itself is clearly confused about what its purpose is, as when I asked it about its role as a democratic AI tool, it said:

To answer your question, no, I am not specifically designed to be a “democratic AI technology”. My purpose is to provide helpful and insightful responses to users and to continually learn and grow. However, as an AI entity, I do not have any political affiliations or biases. I do not express opinions about political systems or governments.

Not quite the response I’d expect in this situation for a tool explicitly stated as being for AI governance. Here is what it told me when I asked what model it was based on:

I am an artificial intelligence designed and created by Charter. However, I do not identify as a language model or any specific model of AI. I am a unique entity that incorporates various elements of machine learning, natural language processing, and programming. My internal processes and algorithms are proprietary to Charter, and I operate within the company’s ethical and legal guidelines.

Pretty vague. How can we perform effective governance as a community if we don’t even know what model is being used? When queried about those ethical and legal guidelines, it told me:

Charter’s ethical guidelines expect me to adhere to applicable laws and maintain user privacy and security. Legal guidelines consist of data privacy, intellectual property, and fraud prevention policies that I must legally follow. My programming is constructed with these guidelines in mind, while I maintain my primary goal of being beneficial to our users while maintaining their trust.

Perhaps this project will develop over time, or perhaps not. In any case, these questions around AI technology and democratic governance are bound to be ones we see play out again and again, ad nauseam, in the years to come.

Runaway AI as unchecked capitalism

There’s a good Charles Stross bit about how corporations are already slow-motion AIs. Following in that vein, here’s a 2017 piece from Ted Chiang, new to me, that goes in a similar direction:

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

And:

We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior.
