Found an experiment on Reddit, a site called CharterAI. When you load up the chat, you get this introductory statement:

We believe that AI should be governed by a community of engaged and informed users to minimize the potential for misuse or misalignment with democratic human values.

I would argue that democracy by itself does not minimize the potential for misuse or misalignment. Without any other checks built in, a majoritarian system is necessarily vulnerable to a tyranny of the majority.

Based on Charter’s rather unfinished website, they are obviously very new. The whole thing is framed not unlike a crypto project, with progressive decentralization over time as a stated roadmap goal.

I don’t know enough about them to make a proper judgement, but having seen a lot of fly-by-night crypto projects come and go (some well-meaning, many not), I would be very skeptical of any AI project cast in this mold. Especially one whose stated purpose is “democracy” without any other qualifying information.

The bot itself is clearly confused about what its purpose is. When I asked it about its role as a democratic AI tool, it said:

To answer your question, no, I am not specifically designed to be a “democratic AI technology”. My purpose is to provide helpful and insightful responses to users and to continually learn and grow. However, as an AI entity, I do not have any political affiliations or biases. I do not express opinions about political systems or governments.

Not quite the response I’d expect from a tool explicitly billed as being for AI governance. Here is what it told me when I asked what model it was based on:

I am an artificial intelligence designed and created by Charter. However, I do not identify as a language model or any specific model of AI. I am a unique entity that incorporates various elements of machine learning, natural language processing, and programming. My internal processes and algorithms are proprietary to Charter, and I operate within the company’s ethical and legal guidelines.

Pretty vague. How can we perform effective governance as a community if we don’t even know which model is being used? When queried about those ethical and legal guidelines, it told me:

Charter’s ethical guidelines expect me to adhere to applicable laws and maintain user privacy and security. Legal guidelines consist of data privacy, intellectual property, and fraud prevention policies that I must legally follow. My programming is constructed with these guidelines in mind, while I maintain my primary goal of being beneficial to our users while maintaining their trust.

Perhaps this project will develop over time, or perhaps not. In any case, these questions around AI technology and democratic governance are bound to play out again and again, ad nauseam, in the years to come.