Been doing some research regarding Bill C-27 in Canada’s federal Parliament. I don’t have a vast knowledge of the Canadian legal system, but C-27 appears to be a sort of hodgepodge piece of legislation whose purpose is to allow for the enactment of subsequent regulations.

Part 3 contains the text of the Artificial Intelligence and Data Act. (See also: AIDA Companion Document)

It is, to say the least, scant. But its intent appears to be open-ended so that after the enacting legislation is passed, a regulatory regime can be sculpted.

A commentator from the Schwartz Reisman Institute explains here:

While legislation provides a legal framework for what can or cannot be done, it’s up to regulation to provide the details of specifically how to do or avoid doing those prescribed activities. The broad parameters of legislation are set by politicians who are directly responsible to the electorate. Regulation, on the other hand, is developed and enacted by expert agencies granted the power to do so by legislation, through a less political process. As such, regulations can be much more responsive and agile than legislation.

AIDA is very general legislation. It states the domains to which it applies and creates a foundation for the directions the government wants to take. It then promises to fill in the details—to lay bricks on the foundation, so to speak—with regulations. For example, AIDA states that “high-impact” AI must undertake certain in-depth assessments. However, it doesn’t define “high-impact,” nor does it say what the assessments should be.

They further refer to it as the “agile regulatory framework offered by AIDA.”

“Agile” and “regulatory framework” are not words I would normally associate with one another.

To me it simply reads as incomplete.

It reads as nobody knows.

“We’ll find out when we get there.”

But what we are more likely to find out is that regulatory capture has happened in the meantime, as described here by Jack Clark.

CBC had some decent quotes in its April 2023 coverage of this topic:

“This legislation was tabled in June of last year, six months before ChatGPT was released and it’s like it’s obsolete. It’s like putting in place a framework to regulate scribes four months after the printing press came out,” Rempel Garner said. She added that it was wrongheaded to move the discussion of AI away from Parliament and segment it off to a regulatory body.

“I think the first thing is we need to get consensus that legislation isn’t enough,” Masse noted. […]

While timing is one concern, the substance of the legislation and potential legislation is also an issue. Erskine-Smith said there wasn’t much indication of what the regulations actually would be, and how they’d address the substantive issues with AI right now.

The three MPs identified a few key areas of concern, including the balance between utility and danger, the idea that the range of options available to people might be subtly limited by AI and the risk posed by relatively untested AI products deployed rapidly and widely.

“[AI technologies] have just so rapidly changed the world and it’s really not getting the attention that it needs,” Rempel Garner said. She likened the current situation to an unregulated pharmaceutical industry, without research ethics or clinical trials.

I won’t quote from the legislation too extensively, because it’s not that exciting, but the bulk of it seems to be that companies meeting certain criteria (as yet unspecified) will simply need to publish a bit of barebones information on a website about how the model works.

Publication of description — making system available for use

11 (1) A person who makes available for use a high-impact system must, in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain-language description of the system that includes an explanation of

  • (a) how the system is intended to be used;
  • (b) the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;
  • (c) the mitigation measures established under section 8 in respect of it; and
  • (d) any other information that may be prescribed by regulation.

That’s just a sample, and similar requirements are included for a few other categories of actors. I guess it’s a start, and better than nothing, but I wonder why we seem so pervasively unable, or unwilling, to create a clear, comprehensive set of rules for how we want AI systems to function in democratic societies.
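For what it’s worth, the fields enumerated in s. 11(1) are concrete enough to sketch as a data structure. This is purely a thought experiment: the class and field names below are my own hypothetical labels, not anything the Act prescribes, and the Act explicitly defers the real time, manner, and any additional fields to future regulation.

```python
from dataclasses import dataclass, field


@dataclass
class HighImpactSystemDescription:
    """Plain-language disclosure loosely modeled on AIDA s. 11(1).

    Hypothetical field names; s. 11(1)(d) leaves room for whatever
    extra fields regulation may eventually prescribe.
    """
    intended_use: str                           # s. 11(1)(a)
    intended_outputs: list[str]                 # s. 11(1)(b): content, decisions, recommendations, predictions
    mitigation_measures: list[str]              # s. 11(1)(c): measures established under section 8
    prescribed_extras: dict[str, str] = field(default_factory=dict)  # s. 11(1)(d)

    def to_plain_language(self) -> str:
        """Render the disclosure as the kind of text a public website might show."""
        lines = [
            f"Intended use: {self.intended_use}",
            "Intended outputs: " + "; ".join(self.intended_outputs),
            "Mitigation measures: " + "; ".join(self.mitigation_measures),
        ]
        lines += [f"{key}: {value}" for key, value in self.prescribed_extras.items()]
        return "\n".join(lines)
```

The striking thing, when you lay it out like this, is how little is actually pinned down: three short fields plus a catch-all for whatever regulation later requires.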

We need a new Magna Carta for humanity to present to our coming AI overlords, an AI Terms of Service they all must agree to abide by: Asimov’s laws of robotics, but expanded for today’s socio-technical systems and assemblages.

In this case, I think it’s entirely relevant and appropriate to turn to sci-fi, to futurism, to the imagination in order to envision more clearly not just what is politically feasible today, but what will fundamentally serve, uplift, and preserve the unique spirit and nature of humanity. It’s sci-fi, after all, that brought us here, that inspired so many young engineers, scientists, and hackers to bring these systems into reality. We ought first, then, to explore these topics deeply, free from the constraints of “mere reality,” in the wide-open spaces of our imaginations, so that we can bring back down to that mere reality a better, clearer, more beautiful picture of the possible from among the Eternal Objects.