Recently, while researching emerging standards around ML model cards, I landed on a documentation page over at Hugging Face with a word I’d never heard before: sociotechnic.

They describe this as one of the essential roles for filling out certain aspects of a model card. For example:

the sociotechnic, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates);

Interestingly, their use of it sounds very much like the professional discipline of Trust & Safety. (I still find it curious that T&S as a term does not seem to intersect all that much with conventional AI safety discourse.)

They elaborate later on:

The sociotechnic is necessary for filling out “Bias” and “Risks” within Bias, Risks, and Limitations, and particularly useful for “Out of Scope Use” within Uses.
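(As an aside, for anyone curious where those sections live in practice: the huggingface_hub Python library can render a model card from Hugging Face’s default template. Here’s a minimal sketch; the model name is made up, and the template field names are my best guess at the default template’s variables.)

```python
# A minimal sketch using huggingface_hub's ModelCard helpers (requires
# `pip install huggingface_hub jinja2`). The model name is hypothetical, and
# the template variables below are my assumption about the default template.
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that goes in the card's YAML header.
card_data = ModelCardData(language="en", license="mit")

# from_template() renders the default model card template, passing extra
# keyword arguments through to the underlying Jinja template.
card = ModelCard.from_template(
    card_data,
    model_id="my-example-model",
    bias_risks_limitations=(
        "Trained on web text; may reproduce stereotypes present in the data."
    ),
    out_of_scope_use=(
        "Not intended for automated decision-making about individuals."
    ),
)

print(card.content)  # the rendered Markdown, ready to save as README.md
```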

Now, I believe Hugging Face is at least partly based in Paris (?), and as someone living in Quebec, I recognize this as probably a “franglicism,” especially since I don’t see the word used this way in English on, for example, Dictionary.com.

The term is evidently a variation on the broader concept of socio-technical systems. Wikipedia’s high-level definition there is not great, but ChatGPT provides a serviceable one:

Socio-technical systems refer to systems that are composed of both social and technical components, which are designed to work together to achieve a common goal or purpose. These systems typically involve human beings interacting with technology and other people in a specific context.

So even though we don’t use the word “sociotechnic” in English for a person who works on socio-technical systems, perhaps we do need a word that plugs that gap, one that accounts for the many roles which might fill it. I think in this case, that role would be first and foremost about understanding human impacts, and then about reducing or eliminating risks to human well-being. It sounds like a worthy role, whatever we call it!