This is a reaction to a TechCrunch article about the EU Commission and its apparent intention to require companies to label AI-generated content.

There’s a lot to say about this, but much of it is long-winded and boring, and drilling down into the details as a “practitioner” is unlikely to convince anyone important to make any course corrections.

Instead, I’ll just pick out a few key things to quibble with.

Transparency Commissioner Vera Jourova is quoted as saying:

But when it comes to the AI production, I don’t see any right for the machines to have freedom of speech.

I can’t tell if this is naivete or misdirection, but it’s important to highlight that the freedom of speech of AIs was never in question: we all recognize that under the law, humans have rights and machines do not.

The right to free expression potentially at issue, then, is not that of machines, but of the human operators of those machines.

Also quoted in the article is the idea that signatories to the EU Disinformation Code who create generative AI tools…

…should commit to building in “necessary safeguards that these services cannot be used by malicious actors to generate disinformation”.

I don’t think anyone on earth actually knows what this even means. Prevent AIs from saying things that aren’t true, and from making pictures of things that don’t exist outside the imaginal realm? Without more clarity, this is, imo, just a lot of mouth-flapping and buck-passing as politicians twiddle their thumbs and hope that someone else will come along and resolve the ambiguity, so they won’t be saddled with actual responsibility for drafting codes of conduct – and potentially legislation – in bad, imprecise language.

The second part of the above is that platforms that might distribute such gen-AI content should…

put in place “technology to recognise such content and clearly label this to users”.

Assuming for a second that this technology to recognize AI content actually exists and is fully reliable (it doesn’t, and what does exist is unreliable as hell – I’ve tested every detector I can find, and they’re ALL wrong), nobody in these conversations ever says what they think this kind of labeling will actually achieve.

They think people will be scrolling their social feeds on the toilet, see an “AI generated” label, and then _____ will happen? Nobody ever fills in the _____. It’s always an unstated fantasy that people will see the label, and this will lead them down some sort of path of critical thinking, which ends in them rejecting the thing as “false” and of no consequence.

For a body ostensibly dedicated to studying and preventing the spread of malicious disinformation, this represents a stunning lack of awareness of how disinformation – and also just normal information – even works in the first place. It clings to an Enlightenment Era ideal that is never said aloud: if people just know what the “truth” is, they will rationally respond and act responsibly, and not continue to elect fucking morons, tank civilization, and destroy the planet, etc. We only need to look around to see how well that approach is working out.

One more, related:

…the EU wants labels for deepfakes and other AI generated content to be clear and fast — so normal users will immediately be able to understand that a piece of content they’re being presented with has been created by a machine, not a person.

This is silly because it’s almost never an AI that is randomly creating and distributing AI-generated content all by itself. There is basically ALWAYS a human involved somewhere. To say “it was made by a machine, not a person” is to fundamentally misunderstand the nature of these technologies, and how all they really do is amplify human creativity.

Lastly, the thing that drives me crazy about all this is, put into simple terms:

The EU government is literally mandating that for-profit corporations take responsibility for differentiating for people what is “truth.”

Because that’s what these types of content labels ultimately point to: yes, x is real (and therefore good); no, y is invented (and therefore wrong). It might seem like, well, hey, this “code” is purely voluntary – for now. But the incoming Digital Services Act, and eventually the AI Act in 800 years (in AI time) when it comes into force, will shift the balance.

Then you could say, well, they aren’t telling platforms which things are real and which things are false – they’re leaving that up to the corporations. Is… that… better? Really? Corporations get to decide for us? At least democratic governments have to keep up the illusion of public oversight and accountability. Corporations generally feel far less need to keep up such appearances.

Anyway, blah blah blah. I know nobody’s listening on these kinds of things. All the big players have their entrenched positions, and the rest will just run itself through its horrible paces semi-autonomously whether I like it or not.