One thing I liked in Tom Barrett’s piece about including mini model cards as part of a “provenance package” for AI-generated content was his proposal to use a tool like ChatGPT itself to make the sometimes complex technical information in a model card accessible to audiences of different knowledge and interest levels. In other words, an ELI5 option for the information contained in the model card.
I asked ChatGPT about the “toilet doomscroller problem,” and by the end it sort of lands on a similar proposal:
You’re right, it’s possible that for some end users of a platform, ML Model Cards may not be immediately relevant or accessible. For example, if a user is simply scrolling through content on a social media feed, they may not have the time or interest to read a detailed Model Card for each piece of content.
However, for other users or use cases, ML Model Cards can be an important tool for promoting transparency and enabling informed decision-making. For example, for users who are interested in a particular topic or application area, a Model Card can provide important information on the capabilities and limitations of the model, helping them to make more informed decisions about how to use or interpret the content.
Similarly, for developers and other technical users who are integrating ML models into their own applications, Model Cards can provide important information on how the model was trained, what data it was trained on, and what performance metrics were used. This information can help developers to understand how the model works and how it can be used effectively in their own applications.
To make ML Model Cards more accessible and relevant to a wide range of users, it may be necessary to develop more user-friendly interfaces or tools that allow users to quickly access and understand the key information in a Model Card. For example, platforms could provide simple visual indicators or summaries that highlight key information from the Model Card, or could provide users with the option to customize their view of the Model Card based on their level of technical expertise or interest.
It seems to suggest that the primary use case for model cards (at present) is not actually end users browsing a web platform where AI-generated content appears, but developers and technical users who are using the tools for a given purpose, or who are, for example, integrating them into another service.
That’s of course a legitimate and important use case, but it does little to address the broader needs of web readers who may (or may not) want to know more about content that appears in their feeds, or who may want to customize their settings to allow or disallow certain types of AI-generated content.
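ChatGPT’s suggestion of level-customized views could be sketched in code. This is purely a hypothetical illustration, not a real standard: the field names, levels, and the `MiniModelCard` structure are all assumptions about what a platform-facing “mini model card” might look like.

```python
# Hypothetical sketch of a mini model card with tiered summaries,
# so a platform could surface the right level of detail per reader.
# All field names and levels here are assumptions, not a real spec.
from dataclasses import dataclass, field


@dataclass
class MiniModelCard:
    model_name: str
    summary_eli5: str        # one-liner for the casual feed scroller
    summary_general: str     # short paragraph for a curious reader
    details_technical: dict = field(default_factory=dict)  # training data, metrics, etc.

    def view(self, level: str = "eli5") -> str:
        """Return the card content appropriate to the reader's chosen level."""
        if level == "eli5":
            return self.summary_eli5
        if level == "general":
            return self.summary_general
        # "technical": general summary plus key/value details for developers
        lines = [f"{k}: {v}" for k, v in self.details_technical.items()]
        return "\n".join([self.summary_general] + lines)


card = MiniModelCard(
    model_name="example-image-gen-v1",
    summary_eli5="This picture was made by an AI trained on public images.",
    summary_general="Generated by example-image-gen-v1, a text-to-image model.",
    details_technical={"training_data": "public web images", "license": "unknown"},
)
print(card.view("eli5"))
```

A feed could default to the `eli5` view as a small indicator on the content itself, with the fuller views one tap away for readers who want them.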
For me, this all still boils down to answering the fundamental question of: why do readers/audiences care? What is it that they do or might want to know about the tools used to create a piece of content? What about when the AI and human contributions to the content are highly blended?
These are not simple problems to tease apart in a meaningful way yet, and it’s likely that their answers will reveal themselves only over time, as we observe the impacts of the proliferation of generative AI content…