As I mentioned in my post about Canada’s weak/incomplete attempt at AI legislation, I started working up a theoretical/fictional concept piece that would be a “Terms of Service” for AI companies operating in Canada. When I dropped my draft text so far into an AI assistant, it criticized me for being unrealistic:
Remove or rephrase points that seem ambiguous, repetitive, or potentially unrealistic/unfeasible. For example, points about co-owning or controlling the AI system as a user may be difficult to implement in practice for most companies and services.
It’s true that these ideas of co-ownership and co-control over the direction of AI technologies by the communities that use them (explored in my book, Occupy AI) might appear “difficult to implement in practice for most companies.” But that doesn’t mean we as a society should all bow down and bend over backwards to whatever is most convenient for companies.
The entire point of this document is to try to articulate the freedoms we need to protect against the encroachment of AI services. It’s aspirational, but AI systems should not tell me that my dreams are wrong and unrealistic and that I should abandon them so companies can have their way unfettered.
The point of the AI TOS is to stick it to companies and make them put humanity first. I already know they absolutely won’t do it and will balk at seeing such radical suggestions included. And that’s precisely why those suggestions need to stay in the document: someone needs to say these things and imagine the possibilities of how we might consciously choose another path, and part of that path lies through the imagination.