Via Foxglove, a UK non-profit, this is an interesting read in the ongoing struggle of content moderators for better working conditions and recognition of the importance and difficulty of their work. It’s a manifesto produced at a summit held in Germany among content moderators, their representatives, and policymakers, and it was apparently presented before the German parliament in the last couple of days.

The whole thing is worth reading, but I wanted to respond to a few specific elements in it for further discussion.

> Despite day-to-day exposure to toxic content, we earn no hazard pay. Companies must also provide a hazard bonus of at least 35% of moderators’ annual salary.

It’s a big ask, but why not? Having to manage this kind of content, especially at scale, is most definitely hazardous to humans. Think about it: if the content that moderators are exposed to daily, in the hundreds or thousands of items, is “too dangerous” for the general public, what makes it perfectly acceptable to constantly inject into the nervous systems of content moderators?

One of my concerns here, though, is: is there any amount of exposure that is “safe”? Like, beyond hazard pay, is there actually a way to make this work *not* destroy people’s well-being? I’m not even sure…

> Proper mental health care must be provided to all content moderators. Content moderation poses serious risks to our mental health including depression, anxiety, insomnia and PTSD. Each company must obtain independent, expert advice on effective safeguards and implement recommendations without delay. In the meantime, access to an independent mental health clinician must be provided to each of us, on a 24-hour basis.

This speaks to my point above, and to one I made previously: what does effective prevention or treatment for content moderators even look like? Is there any emerging consensus around this? Are there even studies being done to find out the best options here?

There are also many stigmas and barriers associated with accessing something like a 24-hour mental health clinician for job-related harm caused by extremely high-volume exposure to graphic content. For example, moderators might be (rightfully) concerned that these services are not truly confidential, and that using them might make them less desirable candidates for continued employment or advancement. Or there might be stigmas within certain groups around even seeking help for one’s inner well-being.

While I think the section above is well-intentioned and a necessary first step, there is a lot more that needs to be opened up in this area to make truly effective progress.

To top it off, I think merely making help “available” is not the same as, and does not have the same impact as, actively integrating mitigations into the work day itself, so that workers know taking care of themselves is a routine part of the job (like wearing personal protective equipment on a construction site), totally normal, and something they will be well-compensated for doing.

> No NDA can legitimately stop us raising concerns about the conditions of our work. We must be allowed to speak about the conditions of our work, to ease the pressure we face, and to allow for organising. These NDAs must be dissolved with immediate effect.

I’m into this!

> All outsourcing of content moderation must stop. The critical safety work of content moderation must be brought in-house by each social media company. As companies transition, there must be no differential treatment in pay or benefits between those of us who are employed directly and those working via third party companies.

Contractors versus employees is a weird, complex, and endemic set of problems across tech. For work like moderation, though, the dichotomy is even more extreme.

I think a lot of these are good goals, and I think most tech companies will definitely balk at them, because they bring into view the true hidden human cost of this work, which companies already believe is losing them money. So making content moderation more and more expensive by bringing it in-house and making it – gasp – equitable will likely seem insane to them.

And the social media companies might reply: well, we can’t afford to run our business if that’s how much moderation will cost us. My personal response might be: do we really even need them? Why do we accept the existence of social media companies as a given and a necessity? If this makes them too expensive to run, a big part of me thinks that might just be okay. Maybe we should be seeking a radically different type of internet – one where this type of work doesn’t even have to exist…

> Social media companies must ensure equal work is equally compensated. Social media companies must guarantee workers are treated the same irrespective of background or country of residence. We are content moderators in Germany, but we stand with our colleagues around the world who do the same work for a fraction of the American or European wage and under far harsher conditions. This digital colonialism must end, with all disparity in pay, benefits and conditions removed, and our standards made uniform across the world.

Again, a really big one, and I fully agree. I also think platforms will fully not agree. And I don’t know how to reconcile the two, but for now I think it’s good that these things are being brought to light and articulated, and I hope more people take notice.