I want to capture some additional notes on the AI Terms of Service concept that I laid out here (and the press release here).

I wrote the document in its entirety in a little under a day and a half, with help from ChatGPT and Claude (you can find out more in the Provenance section of the doc).

These notes were written at the request of a journalist for an article about the AI TOS that I’m hoping will be published within the next few days. As these things go, though, journalists never use all the quotes you send them, so I like to try to capture the things that end up on the cutting room floor.

Who I sent this to

As this document was written for the Canadian context, I sent it out broadly to a number of Members of Parliament, including the party leaders and officers of all the major federal Canadian political parties:

  • Liberal party
  • Conservative party
  • NDP
  • Greens
  • Bloc Québécois

And sent it out to a few key positions in government in addition to those:

  • Minister of Innovation, Science & Economic Development
  • Minister of Finance
  • Prime Minister

For fun, I also sent it out to:

  • Marxist-Leninist Party of Canada
  • Communist Party of Canada
  • People’s Party of Canada

Lastly, I sent it to:

  • All the AI ethics/responsible AI non-profits, labs, and academic departments I could find in Canada
  • All the major AI ethics/responsible AI groups outside of Canada

My Mission With This Document

My mission is to present a more compelling and comprehensive alternative to Canada’s own meager attempt to create AI legislation, the Artificial Intelligence and Data Act (AIDA). Since it is merely enabling legislation, leaving the rest up to regulation, AIDA contains almost no details (apart from some vague language requiring AI providers to explain their products on their websites). I read that if it passes, AIDA might not come into force until 2025, and its related regulations could take another two years to sort out. That is far too long to wait, especially in the fast-moving world of AI, where six months brings tremendous new advancements. 

My impression reading AIDA was that policymakers know AI is important, but it seems clear that 1) they don’t really understand the technologies or what users actually need and want from AI providers in order to protect their fundamental rights and freedoms, and 2) as a result, they have no clue how to effectively regulate it. 

Likewise, seeing the difficulty that Canada is having getting the major tech players to respect the new “link tax,” it’s also clear that the tech companies have precious little respect for any national laws that don’t happen to favor their business models. The normal mode of operation for the big companies is simply to engage in lobbying and regulatory capture to make sure countries don’t pass laws that are unfavorable to them. 

Third, non-profits and academic institutions are also stuck to some degree, because they rely on a system of grants and other funding that does not favor putting forth truly radical conceptions that would fundamentally change how society works. And most of those organizations lack any first-hand experience actually working in the field, handling complaints, doing content moderation, seeing what users actually care about on web platforms, etc. So while their ideas might be informed from an academic perspective, they rarely understand the intricacies of actually running platforms. 

This is where I am able to offer something different. I have a professional background in online Trust & Safety (content moderation, rules enforcement, policy development, handling user complaints, responding to legal requests, product design & management), having spent the better part of a decade working for platforms, blockchains, and non-profits to solve related problems. I can offer a unique perspective that I think none of the other players in this space can, because I am not beholden to the economic or status pressures facing any of the above groups. I can afford to be a maverick: I’m an experienced professional in this space, but I’m also a super-user of AI technologies, with a strong understanding of what’s actually important. And I am an artist, so I’m not afraid to explore possibilities and start conversations that others are not able or willing to. 

What I hope will happen

Ideally, I’d like to see a national – and international – conversation develop around these much more specific and in some cases much more extreme proposals that I am putting forward. I’d like to expose that the present “official” line of thinking on these issues is simply not enough and won’t get us where we deserve to be as Canadians, and simply as humans. 

My objective here is not to propose industry-friendly solutions that will be easy for AI companies to adopt. Quite the contrary. I want to push them to offer the highest level of protections possible to human autonomy and creativity; I believe that the best protection of human rights will allow their expression to flourish, and if we’re brave enough and imaginative enough, it just might lead to a new renaissance. If we’re not, well, (continued) dystopia is the likely outcome. 

I sent it to political parties as a gift, because I know they lack the expertise I’ve developed. I frankly don’t support any of them. I’m hoping that each different party will latch onto different aspects of the proposed solutions, and that somewhere in the middle, we might find a way forward that could actually work to protect us. 

After climate change, I think the responsible development of AI technologies is the single biggest problem facing all of humanity. Canada is my home, but it’s just a microcosm. The same issues are playing out globally, and what I’m proposing could be debated and potentially adopted in any context. If we don’t take prompt, aggressive action, we will be surrendering a great deal of power over our future to private for-profit companies with no accountability or oversight, as we transition into one of the greatest changes humanity has ever faced. 

How This Document Differs From Similar Proposals

My document is a mix of many different principles that I have absorbed from other regulatory regimes, including the GDPR, the Digital Services Act, and the EU AI Act. I’ve even cribbed elements from OpenAI’s own charter and marketing verbiage. Plus, of course, my own experiences and frustrations using AI tools on the market as they are now. I think we’re on the wrong track with these products as they stand, and almost nobody is taking a radically “human first” approach to thinking about our ideal future. 

Some specific elements of my proposal that are very different from anything I’ve seen:

  • Guarantees that for any official purposes, people will always have non-AI human alternatives available
  • Demands for radical direct democratic control, user ownership of AI technologies, and broad distribution of benefits derived from AI: including free & publicly-owned alternatives to these technologies, profit-sharing, etc. 
  • Requirements that AI systems not invent or hallucinate false critical information, and that they integrate with fact-checking & verification systems
  • The ability to turn off all the pseudo-ethical judgements and moralizing that current AI systems do, while simultaneously lacking a real comprehension of human norms and value systems
  • Extremely strong protections against collecting and training on personal data
  • Requirements that all AI decisions (including content moderation decisions) be explainable and comprehensible in plain language
  • The ability to escalate any complaint or dispute with a company to a qualified outside body
  • Safeguards for the preservation of human autonomy in the face of ever-increasing reliance on AI systems
  • Safeguards against human behavior manipulation for profit
  • Safeguards to make sure AI systems are sustainable and not destroying or needlessly consuming natural resources
  • My document is also written as Agile user stories, describing specific, actionable product features that software teams can use as templates for building compliant solutions

There’s a great deal more to be said here, but given that this is a new evolving situation, this is probably a good place to stop and take a breath.