I’m both interested in and puzzling over this clause in Article 5(a) of the EU AI Act:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

This wording is potentially deep, but it is also so fluffy and non-concrete that it may be a practical impossibility to enforce effectively.

What the heck does “beyond a person’s consciousness” even mean?

Also, there is already such a broad spectrum of technologies that “materially distort a person’s behavior” that we need to look much more carefully and closely at which of them, and precisely how, bring people to psychological harm in particular.

I would argue, for example, that opening up YouTube on a smart TV already aims entirely to materially distort a person’s behavior by serving up a bunch of recommended videos to watch. It might theoretically be to your benefit, since you went there to watch videos, but I’ve seen user behavior, in kids especially, where they turn on the TV with the idea “let’s watch ___” (such and such show or film), but when YouTube opens, their original intention is diverted to something that YouTube has instead decided they should watch.

The recommended video might be right or wrong for them, but that’s beside the point. The point is the fracturing and fragmentation of the original human intent that caused the person to engage with the system in the first place. I would personally argue this is, over the long term, an extremely dangerous and damaging UX pattern to normalize, as it essentially subjugates the human will to the algorithmic.

YouTube no doubt uses machine learning, if not “AI” (whatever that even means anymore). So it could conceivably be covered under the Act – except: is this UX pattern “beyond a person’s consciousness” or not? It’s highly unclear. It’s not exactly “subliminal” in the way I think most of us mean it – hidden messages embedded in the videos, say. But I think this pattern does slip through the cracks, in the sense that most people might never realize their original intention was materially distorted by using the system.

Anyway, this is just one of many extremely confusing parts of the AI Act, which I wrote about in my last post, and I’ll continue dissecting it here as time permits between my many other projects.