Questionable content, possibly linked

AI / ML Chatbots Should Link to Their Model Cards

I've written a bit in the past about AI/ML model cards and their potential importance in AI attribution. So I started asking ChatGPT and YouChat (not sure if that's its "official" name?) for information about their underlying large language models (LLMs), and to point me to their model cards. Both services seemed to make up a variety of responses to these questions, without offering anything that, so far as I can tell, resembled a definitive, authoritative answer.

YouChat in particular hallucinated URLs where it claimed its model card could be found, both on its own domain and on the Google AI Platform, and it told me different things about its underlying model depending on which questions I asked, so I took the information to be unreliable. I didn't carefully track ChatGPT's replies (this was many days ago now), but it gave me a similar run-around.

So I spent a little time earlier sketching out a draft of a proposed set of best practices for how AI & ML tools such as chatbots ought to give accurate and reliable information about their underlying models and their model cards. I published the first version (an extremely rough draft) as a GitHub gist for now, until I can spend more time crafting a more comprehensive set of recommendations to replace it.
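To make the idea concrete, here is a minimal sketch of what a machine-readable model disclosure and a validator for it might look like. This is purely hypothetical: the field names and the record shape below are my own invention for illustration, not part of the draft gist or any published standard.

```python
# Hypothetical sketch only: these field names are illustrative
# assumptions, not an established disclosure schema.
REQUIRED_FIELDS = {"model_name", "model_version", "model_card_url", "provider"}

def validate_disclosure(disclosure: dict) -> list[str]:
    """Return a list of problems found in a model-disclosure record.

    An empty list means the record has every required field and a
    plausible (https) model card URL.
    """
    problems = [
        f"missing field: {field}"
        for field in sorted(REQUIRED_FIELDS - disclosure.keys())
    ]
    url = disclosure.get("model_card_url", "")
    if url and not url.startswith("https://"):
        problems.append("model_card_url should be an https URL")
    return problems

# Placeholder record; "ExampleLM" and example.com are not real.
example = {
    "model_name": "ExampleLM",
    "model_version": "1.0",
    "model_card_url": "https://example.com/model-card",
    "provider": "Example Corp",
}
```

The point of a fixed, verifiable record like this is that a chatbot (or its hosting page) could serve it from a stable location, so answers about the underlying model come from published data rather than from generated text.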

Eventually, it seems like it would make sense to merge these efforts with the other parallel conventions I'm exploring around markup/markdown attribution and self-identification requirements for chatbots. This kind of work tends to come in fits and starts anyway, so proceeding piece by piece makes sense, as the issues become clearer and potential solutions reveal themselves.




  1. Tim B.

    From their FAQ:

    “What technology is YouChat built on?

    YouChat is built off of the combination of existing large language models and our in-house technology. ”

    I know this is still 'early days', but that's a totally inadequate amount of public-facing disclosure, imo.

  2. Tim B.

    Also interesting: the YouChat bot insists that I email an address on what seems to be a non-functional domain in order to get support, but all the FAQs include email addresses.

  3. Tim B.

    I also asked ChatGPT how to contact support, and it gave me a bunch of URLs that returned 404 errors, though they were at least on the correct domain. It did seem to give me the correct support email address, though, so that's a start.

  4. Tim B.

    Regarding responses given by chatbots around these important provenance issues, perhaps there could be a badge next to replies made by the AI that says something like "Human verified" and links out to a claim analysis?

    I guess, more broadly, that points toward incorporating fact-checking APIs into AI chatbot replies, but this is more micro in that it's data verified by the company about itself, not just any free-range claims.
