Recently, while looking around for AI companies in Canada, and in Quebec especially, I discovered that Microsoft has a research office in Montreal, with a group called FATE, which stands for Fairness, Accountability, Transparency, and Ethics. They list their current focal points for research as:

  • Responsible natural language generation (NLG) and issue discovery frameworks
  • Objectionable behaviors, toxicity, and addiction in sociotechnical systems
  • Harms in information systems and other multi-stakeholder environments

It’s an interesting set of questions, but one part jumps out at me when you look at the career options. They offer a 12-week internship with these required qualifications:

“Must be currently enrolled in a relevant PhD program or JD (Juris Doctorate) program (areas of interest include machine learning, human-computer interaction, computational social science, information science, information retrieval, natural language processing, science and technology studies, or other related fields).”

These strike me as strange qualifications for a 12-week internship, in the first place. And in the second place, they strike me as strange qualifications for a research lab committed to “fairness.” After all, it’s not just anybody who has the opportunity to get a PhD or a JD…

I don’t mean to pick on Microsoft or this lab especially (perhaps they do good work!), because this kind of problem is actually epidemic in certain tech circles. And I don’t just mean requiring PhDs for research positions – I mean that nearly every company out there in AI (and otherwise) has a huge number of openings for people with STEM backgrounds, and very little else.

I’ve mentioned elsewhere the need to bring other kinds of people with other types of backgrounds into developing AI, particularly creative types & artists. But over and above the fine arts, there is a whole host of specialties in the humanities and the social sciences that would bring some much-needed balance to a field that is currently so radically math- and engineering-heavy as to be lopsided.

To extend some of Ellul’s thinking from the previous post, we might say that we need people whose professions and occupations aren’t wholly devoted to the altar of efficiency the way technologists by and large are. We need people who specialize in human impacts. And by that I don’t just mean ethicists (though they have an important role to play, to be sure) – I mean simply humans.

How can we develop truly fair AI systems if only a tiny subset of a certain type of person, with a certain type of mentality, training, education, and professional background, is allowed to play in the ball pit?

Another part of me – in fact, the carpenter part of me – rebels a little at this line of inquiry. By way of analogy, if we’re building a house, why would we let people who haven’t been trained as carpenters do the framing? We probably wouldn’t. But in actual fact, building a complex structure like a house takes a great many different kinds of more and less skilled laborers working in harmony toward the same goal, each playing their part. So maybe, thinking it through more carefully, the carpenter part of me’s objections end up evaporating.

I guess all this is to say two things. One, companies need to do better: they need to figure out how to integrate more diverse types of thinking into developing AI technologies (in addition to the more conventional types of diversity we think of with regard to ethnicity, gender, etc.). And two, if you’re someone with a non-STEM degree (or no degree at all), and you want to participate in the development of AI in a way that is genuinely fair, you are probably going to have to agitate for a seat at the table. Because right now, companies seem to be taking little notice of “the rest of us,” except through the lens of our becoming end users and paying customers.

So how, as a non-STEM person, do you get a seat at the table? That’s the core question that led me here in the first place. And though I don’t yet know the answer, my hunch is that we have to sit down and figure out concretely what exactly we can offer – individually & collectively – to this great edifice which is rising up suddenly in front of us, and which is poised to change everything. I’ll keep working on these questions!