
Making ML Model Cards Accessible to Non-Experts

One thing I liked in Tom Barrett’s piece about including mini-model cards as part of a “provenance package” for AI-generated content was his proposal to use a tool like ChatGPT itself to make the sometimes complex technical information contained in a model card accessible to audiences with different levels of knowledge and interest. In other words, an ELI5 option for the info contained in the model card.
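As a rough sketch of what that could look like in practice (this is my illustration, not anything from Barrett’s piece), the snippet below takes the structured fields of a model card and builds a prompt asking a chat model to restate them for a chosen audience. The card fields, audience labels, and prompt wording are all made up for the example, and the actual call to ChatGPT or another model is left abstract.

```python
# Illustrative sketch: turn model card fields into an audience-specific prompt.
# The card contents and audience styles below are hypothetical examples.

EXAMPLE_MODEL_CARD = {
    "model_name": "image-gen-v2",  # hypothetical model
    "intended_use": "Generating illustrative images from text prompts.",
    "training_data": "Publicly available image-caption pairs.",
    "limitations": "May reproduce biases present in the training data.",
    "evaluation": "Human preference ratings against a baseline model.",
}

AUDIENCE_STYLES = {
    "eli5": "Explain it like I'm five, in two or three plain sentences.",
    "casual_reader": "A short, jargon-free summary for someone scrolling a feed.",
    "developer": "A concise technical summary, keeping data and metric details.",
}

def build_summary_prompt(card: dict, audience: str) -> str:
    """Build a prompt asking a chat model to restate the card for one audience."""
    fields = "\n".join(f"- {key}: {value}" for key, value in card.items())
    return (
        f"Here is a machine learning model card:\n{fields}\n\n"
        f"Rewrite this for the following audience. {AUDIENCE_STYLES[audience]}"
    )

if __name__ == "__main__":
    # The resulting prompt could be sent to ChatGPT or any other chat model;
    # printing it here just shows what the model would receive.
    print(build_summary_prompt(EXAMPLE_MODEL_CARD, "eli5"))
```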

I asked ChatGPT about the “toilet doomscroller problem,” and by the end it sort of landed on a similar proposal:

You’re right, it’s possible that for some end users of a platform, ML Model Cards may not be immediately relevant or accessible. For example, if a user is simply scrolling through content on a social media feed, they may not have the time or interest to read a detailed Model Card for each piece of content.

However, for other users or use cases, ML Model Cards can be an important tool for promoting transparency and enabling informed decision-making. For example, for users who are interested in a particular topic or application area, a Model Card can provide important information on the capabilities and limitations of the model, helping them to make more informed decisions about how to use or interpret the content.

Similarly, for developers and other technical users who are integrating ML models into their own applications, Model Cards can provide important information on how the model was trained, what data it was trained on, and what performance metrics were used. This information can help developers to understand how the model works and how it can be used effectively in their own applications.

To make ML Model Cards more accessible and relevant to a wide range of users, it may be necessary to develop more user-friendly interfaces or tools that allow users to quickly access and understand the key information in a Model Card. For example, platforms could provide simple visual indicators or summaries that highlight key information from the Model Card, or could provide users with the option to customize their view of the Model Card based on their level of technical expertise or interest.
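That last suggestion, letting users customize their view of a model card by expertise or interest, is easy enough to imagine as a simple tiered view. Here’s a minimal hypothetical sketch; the field names and tiers are assumptions for illustration, not part of any existing model card standard or platform.

```python
# Illustrative sketch: show different subsets of a model card's fields
# depending on the reader's declared interest level. Tiers and field
# names are hypothetical, not drawn from any real specification.

FIELDS_BY_TIER = {
    "glance": ["model_name", "intended_use"],                   # feed-level indicator
    "curious": ["model_name", "intended_use", "limitations"],   # interested reader
    "developer": [                                              # technical user
        "model_name", "intended_use", "training_data",
        "limitations", "evaluation",
    ],
}

def render_card_view(card: dict, tier: str = "glance") -> str:
    """Return only the card fields relevant to the chosen tier."""
    lines = [f"{field}: {card.get(field, 'n/a')}" for field in FIELDS_BY_TIER[tier]]
    return "\n".join(lines)

if __name__ == "__main__":
    card = {
        "model_name": "image-gen-v2",
        "intended_use": "Generating illustrative images from text prompts.",
        "training_data": "Publicly available image-caption pairs.",
        "limitations": "May reproduce biases from the training data.",
        "evaluation": "Human preference ratings against a baseline.",
    }
    print(render_card_view(card, tier="curious"))
```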

ChatGPT’s answer seems to suggest that the primary use case for model cards (presently) is actually not end users browsing a web platform where AI-generated content appears, but developers and technical users who are using the tools for a given purpose or, for example, integrating them into another service.

That’s of course a legitimate and important use case, but it does little to address the broader needs of web readers who may (or may not) want to know more about the content that appears in their feeds, or who may want to customize their settings to allow or disallow certain types of AI-generated content.

For me, this all still boils down to the fundamental question: why do readers/audiences care? What is it that they do or might want to know about the tools used to create a piece of content? And what about when the AI and human contributions to the content are highly blended?

These are not simple problems to tease apart in a meaningful way yet, and their full implications will likely only reveal themselves over time as we observe the impacts of the proliferation of generative AI content…


1 Comment

  1. Tim B.

    I had to press ChatGPT to get there, but it eventually agreed that including data from model cards has basically no impact on non-technical end users consuming information on social media:

    “Yes, for non-technical users who are not interested in the technical details of how AI models work, the immediate impact of ML Model Cards may be less direct. However, it’s important to note that promoting transparency and accountability in the use of AI can have a broader impact on society as a whole, even if the effects are not immediately visible to individual users.” (ChatGPT)

    When I pressed it on what it means by “less direct”:

    “When I say that the impact of ML Model Cards may be ‘less direct’ for non-technical users, what I mean is that the information provided in a Model Card may not be immediately relevant or meaningful to these users in their day-to-day experience of consuming content on the web.” (ChatGPT)

    So “less direct” seems to actually mean “not immediately relevant or meaningful.”

    Every time, the program is quick to point out that including model cards is good for transparency, accountability, industry, trust in AI & platforms, etc. All of that may be true, but it doesn’t solve the problem that matters most to me: actually impacting end users in an immediately relevant and meaningful way…

