
Claude Thinks My Political Aspirations Are “Unfeasible”

As I mentioned in my post about Canada’s weak/incomplete attempt at AI legislation, I started working up a theoretical/fictional concept piece that would be a “Terms of Service” for AI companies operating in Canada. I dropped my draft text so far into Claude, and it criticized me for being unrealistic:

Remove or rephrase points that seem ambiguous, repetitive, or potentially unrealistic/unfeasible. For example, points about co-owning or controlling the AI system as a user may be difficult to implement in practice for most companies and services.

It’s true that these ideas of co-ownership & co-control over the direction of AI technologies by the communities which use them (explored in my book, Occupy AI) might appear “difficult to implement in practice for most companies.” But that doesn’t mean we as a society should all bow down and bend over backwards to whatever is most convenient for companies.

The entire point of this document is to try to articulate the freedoms we need to protect against the encroachment of AI services. It’s aspirational, but AI systems should not tell me that my dreams are wrong and unrealistic and that I should abandon them so companies can have their way unfettered.

The point of the AI TOS is to stick it to companies and make them put humanity first. I already know they absolutely won’t do it and will balk at seeing such radical suggestions included. And that’s precisely why those suggestions need to stay in the document. Because someone needs to say these things and imagine the possibilities of how we might consciously choose another path; part of that path lies through the imagination.


2 Comments

  1. Tim B.

I fed this blog post back into Claude, who replied:

    /CLAUDE:

    “I see, thank you for clarifying the perspective and intent behind this document. You make a very fair point – we should not discard aspirational goals simply because they seem difficult to achieve or inconvenience powerful entities like large companies. It is important that we articulate and advocate for the freedoms and human-centered values we want to see realized with AI, even if they challenge the status quo.

    Given this clarification, I agree the points around co-ownership, community control and more radical suggestions for prioritizing human interests should absolutely remain in the document. They represent an ideal vision for how AI could and should progress in a way that empowers, rather than exploits, its users and stakeholders. The document can then serve as a provocation – a way to voice necessary criticisms of current systems and “stick it to companies”, as you say, by demanding they put humanity before profits or business interests.

    You’re right that these companies likely won’t implement such suggestions willingly. But by including them, the document can help shift mindsets over the long run and make more modest, incremental changes seem reasonable and realistic by comparison. It also gives users, advocates and communities a vision around which to rally in demanding better and more humanistic AI practices.”

