France 24 recently published a piece about claims circulating online that pyramids and a lost civilization have been discovered in Antarctica. It’s an interesting piece and worth a read, and the fact-checker responsible did a decent job of assembling the pieces of the puzzle as presented.

Some elements were left on the cutting room floor, though, that in my mind are worth retaining in some form, even if not in the primary reporting. That’s a good use for blogs, in my opinion.

Below, I’ve excerpted the exchanges that struck me as containing the most important ideas, some of which didn’t make it into the finished piece.


FRANCE 24: How did the idea of this project come to you?

My prediction is that AI-generated content will soon make up the vast majority of content on the web. I’ve seen that platforms, non-profits, and government organizations move quite slowly in their efforts around media literacy and counter-disinformation/counter-radicalization, and use very conventional approaches to educating audiences. They tend to focus only on fact-checking and debunking, but in my experience, that approach does little to reach the audiences who would most benefit from it. Many of those people are skeptical of mainstream news and fact-checking, so those efforts sometimes backfire, and only ever reach people who already understand the problem.

As an artist, I’m able to experiment outside of the constraints of what those organizations can do or are willing to do, and reach people directly where they are consuming this type of content. 

What behaviors or reflections do you aim to encourage with these publications?

I want to pique people’s curiosity, and encourage people to be suspicious. It is intended to be provocative. I want them to look at the AI-generated material that I create, identify what about it seems off or wrong to them, and then share that with one another in an open and honest discussion, explaining their reasoning to other people. These messages often have a more profound impact when they are delivered by one’s peers or community than when they come from an external authority figure.

What advice would you give for detecting AI-generated content in general?

This technology is moving so fast that any general advice I could give about detecting AI-generated content will either only apply in very narrow cases where a specific tool is being used, or it will only be meaningful for a very short period of time, until the tools reach their next iteration. For example, some AI image generators still have a very difficult time depicting human hands correctly, but as you can see online, people are working around the clock to improve them. Within a year or two, all of those indicators will either change into something else or go away altogether, and much of the AI-generated content will become indistinguishable from the real thing.

I would say in general that your best bet is always going to be classic detection methods: if an image or other artifact appears “too good to be true,” then chances are it probably isn’t true. Doing reverse image searches is also still a useful way to uncover the original source of something you find on social media. That said, it is also trivial to create a false trail of provenance for an image, so that it appears to have come from a real person when in fact it was generated.

What do you think about these publications that use your content? Did you expect these pictures to be used to spread disinformation? Was this what you expected or wanted, or do you consider it more like collateral damage?

There is a branch of medicine called nuclear medicine, or radioisotope scanning, that is relevant here. In it, a small amount of a radioactive substance, called a radioisotope, is introduced into the body. The radioisotope emits radiation that can be detected by special cameras or scanners, which produce images of the inside of the body. These images can be used to trace the path of the radioisotope and identify any abnormalities or problems.

I see other people using the content that I create in much the same way: the images and stories I create are like the radioisotopes, and when they get picked up by other people in an information ecosystem, we can use them as a way of tracking how and where these things propagate online. We can see the paths they take, and even the way the artifacts mutate through re-transmission – such as images getting cropped by other people, or the original disclaimers about the presence of AI-generated content being removed.

All of these activities by other people interacting with the content become an important part of the story of the information, what it does, and how it functions structurally.

Concretely, how do you expect your publications to raise awareness about disinformation?

One thing I see in the subcultures around conspiracy theories is that, while those groups are usually highly skeptical of information coming from mainstream media, they sometimes are not critical at all of much more questionable information sources. This opens those audiences up to a great deal of manipulation and propaganda by any number of state, political, or other actors, so long as the content matches the preconceived notions, ideology, or emotions of the target audience.

By sharing the kinds of artifacts I’m creating, I hope to find another pathway, one that might be able to playfully trigger or provoke the critical faculties of those audiences, so that they understand just how easy and widespread this type of manipulation can be, and how to guard against it. 

I also want to start mentally preparing people for what happens when much more skilled and better funded actors start using these tools to manipulate at scale. I’ve only spent a few hundred dollars generating images for 50 books. What happens when someone invests millions of dollars into AI-generated disinformation? We simply won’t be able to fact-check all of it.

We need to help train people’s discernment and agency to find the truth for themselves. And we need to find creative and interesting ways to reach people where they are, and understand what motivates them to seek out this type of content, instead of just calling them “crazy” and writing them off forever. We can’t afford to leave vast segments of the population behind just because we think the things they believe might be wrong. We need to find ways for all of us to move forward together, by talking out loud about the things that matter most to us.