In The Debrief on AI-Powered Disinformation & CounterCloud

A new piece came out today in The Debrief, which features a number of quotes from me about AI-powered disinformation, vis-à-vis an experimental project in that vein called CounterCloud. This video summarizes the project:

I originally wrote a fairly long response for the piece, and here is the full text, most of which was not used:


CounterCloud depicts what can happen when someone uses AI in concert with other automation to develop and publish counter-arguments to support a given political agenda. 

It is significant in that it appears to be the first proof we have that someone developed such an AI-assisted system for automating this type of political argumentation at scale and with relative ease. In this case, it was a single person with a limited budget.

Given the financial and technological resources of state actors like intelligence agencies, however, it would be prudent to assume that this is not the first time anyone has built or tested a system like this – just the first time that we know about it being done, and luckily in an apparently controlled setting. 

Though the speed and scale of content creation made possible by popular off-the-shelf AI technologies are certainly concerning when applied to disinformation campaigns, the situation driving the development of this particular tool is not particularly new.

Actors engaged in information warfare and other forms of strategic storytelling (including marketing and politics, for what it's worth) have always sought ways to automate and better manage their processes so that their messaging can achieve greater effectiveness with fewer resources. Incorporating AI tools into these kinds of campaigns is logical and inevitable. We have to accept it as a permanent part of the landscape now.

They say a rising tide lifts all boats, and this is thankfully as true in AI as it is everywhere else. As technology advances, it advances for everyone – or at least everyone able to access it. Luckily, as AI technology becomes more useful for people running disinformation campaigns that employ adversarial narratives, so too does it become more useful for people to detect and counter those campaigns. We can’t separate the good and bad outcomes of technology; we’re stuck with the reality of both.
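As a toy illustration of that detection side (nothing from CounterCloud itself, and with made-up data), here's a minimal Python sketch of training a simple classifier to flag suspicious posts for human review:

```python
# Toy sketch: flagging suspected campaign text with a simple classifier.
# Assumes you have labeled examples; real detection systems are far more
# involved (network analysis, account behavior, provenance signals, etc.).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = suspected coordinated/AI-generated, 0 = organic.
texts = [
    "Breaking: experts agree the policy is a total disaster for families.",
    "Had a great time at the farmers market this weekend!",
    "Shocking new report reveals what they don't want you to know.",
    "Does anyone have a good recipe for sourdough starter?",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; in practice you'd threshold and route for human review.
print(model.predict_proba(["You won't believe what this politician did next."]))
```

The same generative advances that make campaign content cheap to produce also make training data for detectors cheap to produce, which is the two-sided dynamic I mean.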

We can’t put the genie back into the bottle at this point: AI tools are not going away. Even if there’s eventually regulation to shape responsible AI development, it will be different and spotty from country to country, with varying degrees of enforcement. And thanks to simple human nature, not everyone developing or using AI is going to adhere to those rules, especially where they might have malicious intent.

In terms of incentives, there is probably a clearer financial motive to use tools like this for SEO, content marketing, spam, etc. (instead of purely for political reasons) in order to gain traffic and ad revenue. That's not to say a political actor might not also collect ad revenue while manipulating people's beliefs through AI-generated content like this; they very well may. But if you're running a system like this purely to earn income, you are likely to A/B test and simply find the political perspectives that generate the most traffic and ad money, rather than the ones you necessarily support.
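Purely to illustrate those incentive mechanics (and not as an endorsement), that A/B testing logic is just a classic bandit problem. Here's a minimal epsilon-greedy sketch in Python, with invented click rates:

```python
# Minimal epsilon-greedy bandit sketch: picking whichever content "angle"
# earns the most clicks, with no regard for what it says. Rates are made up.
import random

angles = ["perspective_a", "perspective_b", "perspective_c"]
true_click_rate = {"perspective_a": 0.02, "perspective_b": 0.05, "perspective_c": 0.03}

clicks = {a: 0 for a in angles}
shows = {a: 0 for a in angles}
epsilon = 0.1  # fraction of traffic spent exploring

for _ in range(10_000):
    if random.random() < epsilon or not any(shows.values()):
        angle = random.choice(angles)  # explore
    else:
        # exploit: pick the angle with the best observed click rate so far
        angle = max(angles, key=lambda a: clicks[a] / shows[a] if shows[a] else 0)
    shows[angle] += 1
    if random.random() < true_click_rate[angle]:
        clicks[angle] += 1

# The loop converges on whichever angle pays best, regardless of content.
print({a: round(clicks[a] / shows[a], 4) for a in angles if shows[a]})
```

The point is that a purely profit-driven operator ends up amplifying whatever pays, which is a different (and maybe more common) threat model than ideological commitment.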

We're now in a world where AI-assisted speech is a normal part of the spectrum of expression. So I think one of the best paths to resiliency is to expose people to the inner workings of these tools and familiarize them with the types of outputs they create, so they can spot them in the wild and engage in conversations with one another about the right relationship we want to have with these technologies in our lives.

I’m of the opinion that access to tools like this ought to be democratized and open-sourced. This way, people can examine how they work and figure out ways to solve some of the problems they create. Also, if many diverse actors with different motivations and backgrounds can use these kinds of tools to support many different viewpoints (whether good, bad, or in between), we stand a better chance at finding some kind of balance than if they are only in the hands of a few centralized actors who end up with an outsized ability to get their message out there. I tend to put more faith in simple competitiveness as a tonic for this problem than in approaches like regulation, or in so-far-infeasible technical ones like watermarking AI-generated content.

Yes, this is all going to lead to an “arms race” in terms of detection, and so on, but it’s still nothing new. We’ve always had to make sense of a complicated world using the signals available to us and to the best of our abilities. Those signals and abilities might change with technology, but the same basic challenge we’ve always had will remain: how do we negotiate social reality between us all?

Lastly, if people are looking for practical ways to combat the negative potential uses of AI disinformation tools like this, I tend to think the best one is unplugging from social media altogether. The less people engage with social media platforms (by and large owned by megalomaniacs running their own massive influence schemes anyway), the less attractive and potentially profitable it becomes for other actors looking to run behavior- or belief-manipulation campaigns on them. Take away the fuel, and the fire will go out on a lot of this.


To be totally honest, I think a flexible tool like the one described in CounterCloud, set to different narrative purposes, would be a huge boon to certain kinds of weirdo emerging storytelling that I am interested in. For example, I’d love to load up multiple different instances of a tool like this, feed in Quatria-centric narratives, and also run hyperreal counter-narratives against Quatria theorists.

I’ve always had this instinctual feeling with generative AI tools that I want something to have happened in my absence: something I can check in on and see whether it’s good or not, decide whether it should keep going or be prodded in new directions, test whether any of those directions are good, and keep going iteratively to find interesting and novel aesthetic places in latent space, and their inhabitants. An application like this, customized perhaps in a slightly different direction, wouldn’t have to be this “scary” thing – it could just be another item in the toolkit of storytelling.
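In code terms, the loop I’m imagining looks something like this rough Python sketch, where generate() and score() are hypothetical stand-ins for a real model call and for whatever aesthetic judgment you’d plug in:

```python
# Rough sketch of an unattended generate-and-check-in loop. generate() and
# score() are hypothetical stand-ins for a real model call and for whatever
# "is this interesting?" judgment you'd actually use.
import random

def generate(prompt: str) -> str:
    # Stand-in for a real generative model call.
    return prompt + " :: " + random.choice(["variant A", "variant B", "variant C"])

def score(text: str) -> float:
    # Stand-in for aesthetic judgment; here, just random taste.
    return random.random()

prompt = "Tell a story fragment from the lost civilization of Quatria."
best, best_score = None, float("-inf")

for step in range(100):  # runs in your absence; check in whenever
    candidate = generate(prompt)
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
        # Prod in a new direction: seed the next round with the best so far.
        prompt = best

print(best_score, best)
```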
