My hunch is that AI technology is developing so rapidly, and in so many different ways, that trying to establish a single set of industry standards is going to be an extremely tough sell. Maybe that’s always been the case, but the problem feels amplified in AI.
There are too many different kinds of actors with different approaches, needs, and goals: for-profit enterprises, non-profits and educational institutions, governments, open source projects, end users, and others to be determined. A single one-size-fits-all set of standards may not accurately reflect the needs and priorities of all these actors.
Additionally, there’s always the possibility that a common standard may be wrong or insufficient, given the rapidly changing landscape of AI and its impacts on people, which are bound to compound or snowball over time. So, it seems important to have a diverse range of micro-standards that can adapt to the needs and trends of different actors and use cases.
One of AI’s strengths, when done well, is the ability to easily convert or translate between forms, styles, modes, etc. My hunch about having many micro-standards, then, is that once things are humming along smoothly, it should be relatively trivial to transfer compliance efforts between many different types of micro-standards, with varying degrees of fidelity and efficiency.
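To make that idea a bit more concrete, here is a minimal, purely hypothetical sketch of what transferring compliance effort between two micro-standards could look like. The schemas and field names below are invented for illustration; a real translation would likely be fuzzier, handled by an AI model rather than a fixed field mapping, which is exactly where the "varying degrees of fidelity" comes in.

```python
# Hypothetical sketch: translating a compliance record between two
# imagined micro-standards. All schemas and field names are invented
# for illustration; they are not drawn from any real standard.

# A record conforming to a hypothetical "model card" micro-standard.
model_card_record = {
    "model_name": "example-model",
    "intended_use": "text summarization",
    "known_limitations": ["may misstate facts"],
}

# A mapping from model-card fields to the fields of a second
# hypothetical micro-standard, say a "risk disclosure" format.
FIELD_MAP = {
    "model_name": "system_id",
    "intended_use": "declared_purpose",
    "known_limitations": "disclosed_risks",
}

def translate(record, field_map):
    """Translate a compliance record from one micro-standard's field
    names to another's, keeping only the fields the mapping covers."""
    return {new: record[old] for old, new in field_map.items() if old in record}

risk_disclosure_record = translate(model_card_record, FIELD_MAP)
print(risk_disclosure_record)
```

Even this toy version shows the lossiness question: fields with no counterpart in the target standard simply drop out, which is one way compliance transfers can vary in fidelity.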
A single common industry standard might sound easier, but a more flexible and nuanced approach may prove healthier and more adaptable in the long run.