There’s a user-generated content platform I’ve experimented with that has rules about labeling AI-generated content. Presumably they scan submitted texts for signs of it, and reject the submission until you fix it and/or apply the label.

When articles tagged this way in the publish UI go public, a little “AI-Generated” tag appears at the top of the page.

I’ve got complicated opinions about this, and they’ll take some picking apart, so I’ll drill down on just one side of it. Telling me that something is or isn’t AI-generated does nothing to tell me about its truth value, origin, agenda, author, etc. It’s a supposedly meaningful context signal that ultimately isn’t – I think – all that meaningful as a determiner of what the recipient/viewer/audience should do or believe. A piece could be “AI-generated” and true, say after a rough draft gets cleaned up and formalized. And that, I think, is still a good and even authentic use of AI…

So what do we really want from these labels?

(To be continued)