Yesterday, on the 17th of February 2024, the European Union’s much-anticipated Digital Services Act came fully into effect. Much like the GDPR before it, this regulation is poised to revolutionize the world of content moderation: it imposes obligations on digital service providers that many of them do not currently meet.

Article 17 is of special interest here, as it requires providers to disclose specific reasons for removing user content or terminating accounts. Quoting the regulation:

Article 17

Statement of reasons

1.   Providers of hosting services shall provide a clear and specific statement of reasons to any affected recipients of the service for any of the following restrictions imposed on the ground that the information provided by the recipient of the service is illegal content or incompatible with their terms and conditions:

(a) any restrictions of the visibility of specific items of information provided by the recipient of the service, including removal of content, disabling access to content, or demoting content;

(b) suspension, termination or other restriction of monetary payments;

(c) suspension or termination of the provision of the service in whole or in part;

(d) suspension or termination of the recipient of the service’s account.

This is a big deal because, as I can personally attest from my time spent in the trenches as a content moderator and Trust & Safety professional, companies do not want to disclose any of this: doing so may open them up to further ongoing disputes with affected users, as well as potential legal liability.

The typical response from service providers, like the one I recently received for my ban by Midjourney (undertaken, it appears, in retribution for blowing the whistle on safety issues in their model), is merely to assert that you broke their Terms of Service or Community Guidelines. When pressed for the specific reason, they will not give it. In my experience this applies across the board, but Midjourney in particular has a long history of this type of refusal, as evidenced by the claims against them filed with the Better Business Bureau. This appears to be a pattern for them.

And I maintain that it is a bad pattern, one that does not produce just outcomes for users. The EU appears to agree with me. So much so that, in addition to requiring a clear statement of reasons for actions taken against accounts or content, the Act also requires companies like Midjourney to maintain an internal appeals process (which Midjourney does not, in any meaningful capacity, for account terminations), and to give users the ability to take their complaint to an officially approved third-party dispute resolution body for review. If that outside body finds against the company, the Act outlines specific potential legal consequences.

While I am neither a resident nor a citizen of the EU, it is possible that I have what is considered ‘establishment’ in the Union due to some business I am currently engaged in there. I am investigating those options more closely, as I believe this is an important area for activists to push forward in order to level the playing field between service providers and their users, who are otherwise often left with little to no recourse when companies like this make secret determinations that impact their fundamental rights.

If you are a citizen of or located in the EU, and you have experienced similar prejudicial content moderation at the hands of Midjourney or any other company anywhere in the world (the Act applies extraterritorially to non-EU companies offering services to users in the EU), I urge you to contact the Digital Services Coordinator in your member state and begin proceedings to protect your rights against those who would systematically infringe them in the name of profit.