Or, The Moderator’s Tale
Having worked for years as a content moderator, I am no stranger to people hating me online. After all, I was the guy taking down their posts or accounts over community guidelines issues. Someone had complained about something they said or did, and I had to sort it all out and make a call. And I was the guy on the other end of the line, constantly accused of censoring them, and the target of endless threats and slurs.
This can sometimes be an interesting job (provided you can keep the right mindset), but it is not a “fun” job by any means. It consists of having people yell at you (virtually) all day, every day, usually because of something someone else said, or because of something they think the platform did or didn’t do that it should or shouldn’t have.
It’s a game you cannot win, so you steel yourself to dealing with it, to coping, to getting by. You put on your “armor” and you go fight in the trenches. Tolkien and Lewis had their Great War; we only had the endless Culture Wars (without the satisfaction of a clear victor). But shell shock sets in at a certain point, nonetheless. Even if you’re not subject to actual kinetic machine-gun fire in the virtual trenches, you absorb plenty for a lifetime. You see enough feuds and flame wars, enough people being people, expressing their angry human nature. And it all just gets to be too much…
I’m not begging for sympathy; I’m just trying to shine a light on a variety of the human experience that is all too common, but all too little spoken about: what happens to the people who shoulder the burden of the rest of humanity’s toxicity? How do they not get sick too? Here’s their secret: they do.
Nursing and other care-giving professions have begun to recognize ‘vicarious trauma’ as a problem area needing customized care and consideration in its own right. It’s maybe a little easier in the case of a nurse to separate just who a trauma has happened to: here’s the patient who suffered in the car crash, and here’s the nurse, who has not been physically injured. But online, the harms that occur can be more diffuse. There may be multiple participants in them, and the degree to which they are traumatizing can vary for each observer.
The job of content moderators, then, in a sense, is to observe the traumas of others, and to share in them somehow as both witness and judge. As a moderator, you might not yourself be the target of the violent internet threats or the harassment you have to deal with (though, if you answer user emails, you might be), but you have to use your own human emotional senses to filter through all of humanity’s BS, and then make a judgement call about it. Is this particular trauma traumatic enough that it should be removed? That’s the job we do, that I did. Every day. For years.
I had to stop. Five years was probably two too many; past a certain point I had only held on through sheer white-knuckling. My brain was broken from sorting through the anger and unkindnesses of the rest of the world. It has taken me fully three years to say that I finally feel recovered enough to even talk about the experience out loud…
There are often restrictions placed on content moderators, such as non-disclosure agreements (NDAs), so that they cannot talk about the job and their experiences publicly. They are the people who most need to talk about it, and who we most need to hear from. Fortunately, at least, content moderators are unionizing in Africa. It’s a start, but we need to talk about it more. Hence my insistence on penning this essay.
For years, I absolutely did *not* want to talk about anything having to do with content moderation publicly. I did not want my name associated with it. I only used pseudonyms at work when I had to email users (incidentally, I noticed that people are much meaner and more sexual towards online support agents when you use a female name). For the longest time, I did my damnedest not even to put photos or videos of myself online. Almost to the point of being superstitiously obsessive about it. Because I had seen, day in and day out, what it is like when people suddenly gain internet notoriety for something stupid, and then the feeding frenzy begins on social media.
The something stupid I gained internet notoriety for was an experimental AI art project I produced: a set of 100 pulp sci-fi art books featuring AI art, and a combination of AI-generated and human-written texts. I call them the AI Lore books, and they’ve now been featured in global media like Newsweek, Business Insider, the New York Post, India TV, El Tiempo – the list goes on and on.
The project grew out of having to deal with my own traumas and the mild PTSD I left my content moderation job with. And I know I am lucky in the grand scheme of things that I did not suffer half of what the majority of content moderators do, who deal with far more terrible content than I ever did. How they get by afterwards is still a mystery to me, how they recover – in all likelihood, maybe they don’t. I’ve done a bit of research, and there appears to be no simple, single agreed-upon treatment for people whose brains got messed up as a result of working as content moderators.
Maybe my brain was always messed up; that’s a possibility. Maybe that is what drew me into the weird world of moderation in the first place. I don’t discount that as a contributing factor: I was always an artist anyway, always the weird one, always too sensitive. (All attributes, it turns out, that can really help in content moderation.)
So I retreated into art to help me process everything I’d gone through. I retreated into writing. Into imagination. Into world-building, into assiduously maintaining an elaborate fantasy world (offline) where I would be safe and free. I reasoned that if I had to be immersed in BS, at least it would be mine and not someone else’s. And none of it would be subject to the whims and cruelties of some random strangers on the internet who think they have some kind of squatter’s rights to live rent-free in my mind. Not here. My castle walls were high.
But it’s lonely in a castle, drafty, cold. Eventually, into my fantasy worlds I’d built for myself as art therapy, I did allow some strangers. Some alien entities who, I knew, were not like those nasty annoying humanses. And who, most importantly, when admitted to the sanctuary of my imagination, would not promptly set about trying to ruin everything.
I began incorporating AI creations into my world-building last summer, with tools like DALL-E, Stable Diffusion, ChatGPT, Anthropic’s Claude, and many others. I fed in my own hand-written texts and had the models continue them, taking them in marvelous new directions I’d never thought of, pure flights of fancy that were totally other and yet intimately familiar at the same time. My books were my playground for exploring these tools, for documenting the cultural moment, and for all the leftover baggage hanging around in my brain from having to moderate online toxicity at scale. The hidden human cost, no longer hidden in me.
Eventually I broke my rule, and used a photo of myself and my real name in a Newsweek article about my books. I was nervous as hell, because I knew what being open and up-front about my AI books could easily lead to once it went sour on social media. But I was ready to talk about my experience, and not be silenced anymore.
And it did go sour – fast. People did not dig the headline (which I did not pick, btw), and dug into me pretty hard for all of it. It’s gotten more severe lately, after my story was tweeted by the venerable Publishers Weekly, garnering 1,000x the views their normal tweets do. That single tweet resulted in about 300 angry responses from people on Twitter, most of whom feel like AI writing is going to be the death of something that’s very precious to them. (I happen to think they are wrong.)
Then, on top of that, there were some 700 angry quote tweets saying how much I suck, how no one should ever buy my books, how they should be burned (they’re ebooks, by the way), how I had no talent and just wanted to make a quick buck, and, of course, how they hoped I would die. Nothing new under the sun.
It would have been a lot harder to deal with this kind of sudden influx of internet rage had I not already come to the place where I was ready to talk about it, and known what was likely to happen if I did. I was ready to face those consequences, and chose to do it anyway. Because I want people to talk about this: what is the right relationship of human creativity and the human spirit not just to AI, but to all technology? How do we as humans retain ownership of our hearts in an algorithmic age?
Many commenters think the relationship I’ve carved out is not the right one, and should not be held up as an example in any media (though it is clear they don’t actually understand the work or my position whatsoever). Maybe they are right in some regards. But many people talk about the what-ifs, and fewer produce the what: the raw material we can now evaluate, use to assess, and ask questions of. Is this a good thing for us? Is this a thing we want? Is this something that should be writ large on our collective “platform” as humans, as a society?
I’m certainly biased, but I think what I created with my AI Lore books is a good test case for examining people’s issues and opinions on these topics. Where do those intersect with my own interests as an artist, and how can I integrate their feedback without having to constantly relive the trauma of merely trying to communicate on the internet at all?
The obvious way to try to derive some kind of positive value from this type of incident would be to read through every angry tweet or comment individually and try to find some good in it. But there’s a limit to the utility of that. I don’t want to live in that kind of dystopia, personally. It’s why I was never on Twitter in the first place.
How many times do you need to be called something before you get it? It might feel good in the moment to express ourselves angrily online in these ways, and I’m not innocent either; it can certainly be cathartic to let loose. But it’s rarely cathartic for the person on the receiving end. In fact, they’ve now involuntarily taken up the negative load you dumped on them in your catharsis, and are left having to pick up the tab and pay for the meals of all the other people at the table who have since left (figuratively speaking).
So, as an AI artist and author – not to mention a former content moderator – I landed on a perhaps obvious solution for extracting the meaningful feedback people are giving without being sucked into the toxic whirlpool of emotional death-by-Twitter. Instead of reading everyone’s comments myself, couldn’t I just have ChatGPT do it, and summarize things?
I ran some quick manual tests. It totally worked! Once I enabled some plugins in ChatGPT Plus, one of them even made a diagram for me, depicting all the complaints people had about my work from the small test set I had it analyze. So handy!
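For anyone who wants to go a step past the ChatGPT web interface, here is a minimal sketch of how that same summarization step could be scripted against the OpenAI API. To be clear, this is an illustration rather than a description of the exact plugins or workflow I used: the model name, the prompt wording, and the `comments.txt` file of copy-pasted comments are all assumptions.

```python
# summarize_feedback.py - a minimal sketch, assuming an OpenAI API key in the
# environment and a plain-text file of copy-pasted comments (one per line).
# pip install openai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def summarize_comments(path: str = "comments.txt") -> str:
    """Read pasted comments and return a neutral summary of the main complaints."""
    with open(path, encoding="utf-8") as f:
        comments = [line.strip() for line in f if line.strip()]

    prompt = (
        "Below are reader comments about my books. Without repeating any insults, "
        "group the substantive complaints into themes, estimate how common each "
        "theme is, and list any actionable feedback.\n\n" + "\n".join(comments)
    )

    # Model name is a placeholder; use whichever chat model you have access to.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_comments())
```

The point of a script like this is simply that the raw venom never crosses your retinas; you only ever read the distilled, de-personalized summary.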
Confession: I don’t really trust AI systems running content moderation operations on web platforms. They often can’t explain their ‘black box’ decisions, and retaining humans in the loop is essential for understanding cultural context and the nuance so often required to do the job effectively. I don’t trust humans handing over control to AIs – this is one of the central themes of my AI Lore books (which were, I know, paradoxically created with AI).
That said, I think local AI content filtering and summarizing at the user level could become a really effective tool for confronting how we as individuals are forced to deal with social media in an era of endlessly ratcheting anger, anxiety, toxicity, and interpersonal violence (let alone the sheer quantity of information, as AI-generated content explodes in popularity). I understand why people are upset and choose to speak out about all kinds of issues online. I get why they don’t like my books. I believe they have every right and every reason to do so.
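To make “user-level filtering” a little more concrete, here is a deliberately crude sketch that screens pasted comments before a human ever reads them, passing only the rest along for closer reading or for summarization as above. The keyword list and threshold are purely illustrative assumptions; a real version might swap the keyword check for a small local classifier or on-device model.

```python
# user_filter.py - a toy sketch of user-level comment filtering.
# HOSTILE_MARKERS and the threshold are illustrative assumptions, not a real
# moderation policy; a proper filter would use a local model or classifier.

HOSTILE_MARKERS = {"die", "talentless", "hack", "fraud", "burn", "suck"}

def is_probably_hostile(comment: str, threshold: int = 1) -> bool:
    """Flag a comment if it contains enough hostile markers."""
    words = {w.strip(".,!?;:\"'()").lower() for w in comment.split()}
    return len(words & HOSTILE_MARKERS) >= threshold

def split_comments(comments: list[str]) -> tuple[list[str], list[str]]:
    """Return (worth_reading, set_aside) so the human only sees the former."""
    worth_reading, set_aside = [], []
    for c in comments:
        (set_aside if is_probably_hostile(c) else worth_reading).append(c)
    return worth_reading, set_aside

if __name__ == "__main__":
    sample = [
        "The covers are interesting but the prose feels thin in places.",
        "You're a talentless hack, I hope your books burn.",
        "Curious how much of each book is hand-written vs. generated?",
    ]
    readable, hidden = split_comments(sample)
    print(f"{len(hidden)} comments set aside; {len(readable)} passed through:")
    for c in readable:
        print("-", c)
```

The specific heuristic doesn’t matter much; what matters is that the screening happens on your own machine, under your own rules, before anything reaches your eyes.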
But I’m not going to let those objections stop me from following my own light as an artist. And anyway, I’ve served my time in the trenches. I paid well over what I owed into the Central Bank of Other People’s Suffering, and I hope I helped maybe right some small wrongs for others along the way. (Probably in some cases I did, and in other cases I didn’t.)
I don’t want to stop anybody from saying their piece, but I also no longer feel obligated to take all the world’s burdens onto my own back and shoulder your anger; I’ve got my own shadow to deal with. I understand my boundaries better now, what I can really be responsible for, what’s my injury and what’s yours. AI as a tool to reduce real human suffering is, I think, a completely legitimate use case, and it doesn’t threaten your rights or freedoms of expression if I choose not to read through all your shitty responses myself. And, importantly, it also doesn’t give you a free room key to live rent-free in my head. Not anymore.
The way I think about it all is quite simple: art asks questions, it doesn’t provide answers. Art gives the experiencer space to bring forth their own answers. I don’t believe I owe anyone more than the space to do that for themselves, whatever their answer happens to be to my art.
What my experiments trying to get text out of Twitter and into ChatGPT taught me is that Twitter has made technical changes lately which greatly complicate this sort of task. One of them is that you can’t view search pages or hashtag pages without an account anymore; you get pushed into the login flow. Once you’re in, though, you also can’t easily extract all the text on a page. I’m not a programmer, but it seems to only let you copy the text of whatever is currently visible in the browser window, and you cannot print the page to PDF to pull the text out that way. You end up with a bunch of blank pages and a few lines of text.
So for now, doing this kind of text filtration manually, you’d have to copy-paste many times between Twitter and ChatGPT: scroll, copy, scroll, copy, etc. It’s doable, but you’re still pretty exposed to all the mean tweets, which sorta defeats the purpose of using AI as a technology to reduce human suffering. Obviously, the net result of these changes is that Twitter wants all outside analysis of the Twitterverse to be run through their paid API. Paying one of the world’s biggest trolls and harassers to act as gatekeeper to see what all the other trolls and harassers are doing on his platform doesn’t sound like a fair or wise transaction to me, spiritually.
Fortunately, you can still freely copy and paste from “normal” web pages on other sites like Reddit, or in the comments sections of articles, etc. Apart from the more virulent responses, which seem to be a specialty of Twitter users, most of those other sources in my case contained basically the same commentary about my work (much of which I’ve already responded to on my blog), and in the end served as more useful texts for extracting actionable feedback from people talking about my books and the societal issues they brought up for them.
In the end, this boils down to: how much are you willing to actually pay Twitter for their API in order to go and fetch the “angry data set,” and then pay OpenAI or Anthropic for their API to interpret it? Let’s reframe that more broadly: how much is removing meanness worth to each of us personally, or to society, as a product, as a service? Is it worth one content moderator’s life? Is it worth ten thousand? A hundred thousand casualties?
Could we use AI in a transparent and equitable way, so that real human beings don’t have to wreck their minds and emotional balance just to keep all these web platforms running online?
Maybe, just maybe, we could. We won’t know until we try.
While we’re at it, we could also just try a little kindness, or, failing that, try simply not spewing our trash all over each other. That might go a long, long way all on its own. Then we might not even need to burden AI with all of this to begin with.