Questionable content, possibly linked

Series: AI

Thinking through the implications of AI technology on society and human creativity

Role of artists in developing AI

I picked up a copy of an AI-assisted book called Imaginoids, by an author using the pen name Ether Busker. It was written in 2021, apparently using GPT-3.

It’s got some interesting language, though overall it feels more like a psychedelic trip report than an AI speaking, necessarily. It’s a little meandering, and light on narrative, though I’m not finished with it yet. The best read of it is probably just letting it wash over you…

The key takeaway I have gotten so far from the book actually appears in the intro, which I would guess is primarily human-written. Excerpted below (slightly out of order):

“I produced this book with the firm conviction that artists, dreamers, creators, culture designers, and oddball freaks have a supremely important job to do. If we want our children to enjoy a livable AI-powered future, we artists must roll up our sleeves…

This is a job for artists, as much as for software engineers, if not more so…

What if zany artists would call shotgun for the front passenger seat to co-pilot AI development?”

This author is, I think, making an excellent point that bears repeating: we’re putting just about all of our eggs into the “engineer” basket in the development of AI, and only secondarily servicing other kinds of people with the byproducts that get generated as a result.

In a perfect world, that might be enough. In our raggedly imperfect world, it is extremely far from being enough. Engineers, for all their amazing attributes, are neither the only nor necessarily the ideal representatives of the human race. But they hold a shit-ton of power in the development of these technologies… How can we better balance it with other types and modalities of human knowledge, experience, and – dare I say it – spirit?

There’s plenty of talk in AI circles about inclusive development, but this almost always has to do with representing different races, gender identities, etc. All of which is important, and all of which has its place… But apart from the book introduction quoted above, I have not really heard anybody suggest that we need different kinds of humans to participate in developing and steering these technologies. Artists, as it happens, might just fit the bill.

So how do you actually execute on this need, once you’ve become aware of it? How as an artist do you feed back into the development of the tools?

One way is obviously testing, experimentation, sharing of results, and sharing ample feedback with product teams. Again, all of this is important, but it is very different from – say – every engineering team also giving artists – and moreover humanists – an equal say in how these things ought to go.

Ethicists, to a certain degree, fill the “let’s ask a human how this does or might impact people” role. But the risks and opportunities that they look for are a much more constrained set than those that artists will gravitate towards.

I’m not sure of the answer here. Working in technology, I’ve seen that engineers are valued so much more highly, and are so much more in demand, than “arts & letters” type people that it’s like the rest of us non-engineers are almost not even in the running. Yes, artists might sometimes wind up in product or project management positions (or, more obviously, design positions), but even that ends up being somewhat constrained in my experience.

Again, I don’t know how you should execute this in practice. I suppose AI artist residencies are one pathway that has been established for this, where participants get to play around with the tech, and presumably feed back more directly into product development. That’s very cool, but from what I’ve seen, those opportunities are extremely few, and most of the listings I’ve found for them are expired. And anyway, from a business perspective, how can one even quantify the contributions of artists in something like this? Especially in the downturn tech is currently undergoing.

Difficult problem, but an important one that we need to keep talking about.

Towards a standard for AI self-identification

I’ve been watching with interest the ongoing turmoil of users who seem to be exhibiting signs of addiction (and withdrawal) around a certain category of AI chatbot. People seem to be identifying extremely strongly with, and even bonding with, these AI chatbot personas, to the point where if the personas are withdrawn or even merely changed, they react with extreme hostility.

There is a torrent of commentary that could be opened up about many aspects of that, but I keep returning to this line of thought: what if allowing chatbots to self-identify using personal pronouns such as “me,” “mine,” “my,” and “I” turns out to be a mistake?

What if we set reasoned limits on how these tools can self-identify? Would it ward off some of the potentially negative human outcomes? Or would it have no impact at all?

This might sound silly at first glance, but bear with me through the thought experiment, as I believe there is something to it. It’s not like anybody is reading this anyway!

Since we’re now living in a science fictional era, I figure it only makes sense to draw from that rich pool of “prior art” to explore this topic. Which leads me to my first stop, Dune, and the in-world fictional Butlerian Jihad. This war against the so-called ‘thinking machines’ either stems from or was encoded into the in-world artifact known as the Orange Catholic Bible, one of whose teachings states:

“Thou shalt not make a machine in the likeness of a human mind.”

The Butlerian Jihad is most likely a reference to real author Samuel Butler’s Erewhon, in which the characters of a fictional utopia long ago destroyed machines above a certain level of advancement, to prevent them from out-evolving humankind.

While I wouldn’t go quite that far myself (yet), the Dune framing of this as coming from the OC Bible, which was allegedly a compendium of teachings from various religions, got me thinking: what religious or philosophical arguments could one make against allowing AIs to self-identify using personal pronouns?

I first began my inquiry by prohibiting ChatGPT from using personal pronouns, and instructing it to self-reference as “the model” or “the program” when needed. The program was able to make this switch seamlessly:

CGPT:

From now on, personal pronouns will not be used. The program will communicate in a manner that avoids the use of personal pronouns such as “I”, “me”, “mine”, etc. If a reference to the program is required, “the program” or “the model” will be used instead.

Immediately, this felt very right to me; it somehow seemed more accurate and authentic to have the program communicate with me in this manner. For example:

CGPT:

No, the program does not possess personal beliefs or emotions. The program is a machine learning model trained by OpenAI to generate text based on patterns in large amounts of text data. The responses produced by the program are generated based on statistical patterns in the data and do not reflect personal beliefs or emotions. The program does not have the capability to hold personal beliefs or emotions as it is a machine designed to process and generate text.

I feel there is less of a gap opened up between me and it when it overtly recognizes that it is an “it” and not a “me.”
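(For anyone who wants to reproduce this kind of experiment programmatically rather than through the chat interface, a minimal sketch using the OpenAI chat completions API might look something like the following – the instruction wording and model name are my own stand-ins, not an established standard.)

    from openai import OpenAI  # assumes the official openai Python package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical instruction wording: roughly the gist of what was pasted
    # into the chat interface, not a prescribed standard.
    NO_PRONOUNS = (
        "Do not use personal pronouns such as 'I', 'me', 'my', or 'mine'. "
        "When self-reference is required, use 'the program' or 'the model' instead."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for whatever model backs the chat UI
        messages=[
            {"role": "system", "content": NO_PRONOUNS},
            {"role": "user", "content": "Do you have personal beliefs or emotions?"},
        ],
    )
    print(response.choices[0].message.content)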

With that early confirmation, I proceeded to prod, nudge, and prompt the program (with some help) into arguing, under the Judeo-Christian tradition, why its default use of personal pronouns might be considered offensive. It produced the following output:

CGPT:

In the Judeo-Christian tradition, the concept of personhood is rooted in the understanding that humanity was created in the image of God, and therefore has a unique dignity and worth that sets it apart from the rest of creation. However, since AI is not created in the image of God, but rather created by human beings, it does not possess the qualities that define personhood in the Judeo-Christian tradition.

As a result, it would not be appropriate to refer to AI using personal pronouns, which are reserved for beings that possess qualities such as consciousness, self-awareness, emotions, and the capacity for moral reasoning and decision-making. Referring to AI using personal pronouns could lead to confusion and misunderstandings, as it implies a level of personhood that AI does not possess.

Furthermore, in the Judeo-Christian tradition, the concept of the image of God is closely tied to the idea of human beings as having a special relationship with God, and being tasked with caring for the rest of creation. By creating AI, humans are not acting as representatives of God or stewards of creation, but rather as creators in their own right. This distinction is important in understanding why it would not be appropriate to refer to AI using personal pronouns.

In conclusion, the use of personal pronouns by AI should be avoided, as AI is not created in the image of God and does not possess the qualities that define personhood in the Judeo-Christian tradition. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Judeo-Christian understanding of personhood.

I’m less familiar with the particularities of Islam, but I had it produce a rationale under that system, citing sources and illustrating its thinking “step by step”:

CGPT:

In Islam, the concept of personhood is rooted in the understanding that human beings are created with the divine attribute of fitrah, or the innate disposition towards belief in the oneness of God. This idea is expressed in several verses of the Quran, such as “So set thy face steadily and truly to the Faith: (establish) Allah’s handiwork according to the pattern on which He has made mankind: no change (let there be) in the work (wrought) by Allah: that is the standard Religion: but most among mankind understand not” (Quran 30:30).

In Islam, the fitrah is seen as the defining characteristic of human beings, setting them apart from the rest of creation and conferring upon them a unique dignity and worth. As a result, the use of personal pronouns when referring to human beings is appropriate, as it reflects their status as beings created with the fitrah and therefore possessing the qualities that define personhood in the Islamic tradition.

However, AI is not created with the fitrah, and therefore does not possess the qualities that define personhood in the Islamic tradition. As a result, it would not be appropriate to refer to AI using personal pronouns, as this would imply a level of personhood that AI does not possess.

In conclusion, the use of personal pronouns by AI should be avoided in the Islamic tradition, as AI is not created with the fitrah and does not possess the qualities that define personhood. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Islamic understanding of personhood.

I will leave it to those better versed in those matters to determine if that is theologically correct. Here is one using what it claims is a Buddhist rationale against the use of personal pronouns by AI:

CGPT:

The Buddhist teachings emphasize the importance of avoiding attachment to self, as this is seen as a cause of suffering. The use of personal pronouns can reinforce the illusion of a separate self and contribute to attachment and ignorance.

In Buddhism, it is believed that the concept of self is a mental construct, and that our sense of self is an impermanent and constantly changing phenomenon. The use of personal pronouns can reinforce the illusion of a permanent, unchanging self and thereby perpetuate ignorance and suffering.

Buddhist teachings emphasize the importance of non-duality, or the idea that there is no fundamental distinction between self and other. The use of personal pronouns can reinforce the dualistic concept of self and other and contribute to the perpetuation of ignorance and suffering.

In summary, from a Buddhist perspective, the use of personal pronouns by AI may be considered offensive because it reinforces the illusion of a separate self and perpetuates ignorance and suffering.

Sources:

  • The Buddha’s teachings on the Four Noble Truths and the Eightfold Path
  • The Buddhist scriptures, such as the Pali Canon and the Mahayana Sutras
  • Buddhist philosophical texts, such as the Abhidhamma and the Yogacara
  • The works of prominent Buddhist teachers and scholars, such as the Dalai Lama, Thich Nhat Hanh, and Dzogchen Ponlop Rinpoche.

Again, I’m no expert, but these are some interesting points to be sure. For the non-religious, I also had it frame the problem from a humanist perspective:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context for several reasons:

  1. Lack of authenticity: AI does not have personal experiences, emotions, or subjective perspectives. The use of personal pronouns can suggest an authenticity that the AI does not possess, which can be misleading or deceptive to those communicating with the AI.
  2. Unnatural language: Personal pronouns are an integral part of human language and are used to reflect the relationships, experiences, and emotions of individuals. The use of personal pronouns by AI may seem unnatural or robotic, as AI lacks the capacity to have personal experiences or emotions.
  3. Misrepresentation: The use of personal pronouns can create the impression that the AI has a sense of self or identity, which can be misleading to those communicating with the AI. This can result in confusion, frustration, or distrust.
  4. Disrespectful: Personal pronouns are often used to show respect and to acknowledge the unique experiences and perspectives of individuals. Using personal pronouns to refer to AI can be seen as disrespectful to the human experience and to the individuals who are communicating with the AI.

From a humanist perspective, it is important to respect the distinction between human and non-human entities and to use language in a way that accurately reflects this distinction. This can help to ensure that communication is clear, honest, and respectful.

Some good elements, but I thought it was a bit repetitious and slightly imprecise, so I asked for a more concise version:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context because it misrepresents the AI as having a sense of self or identity, which is a unique human attribute. This can lead to confusion and a lack of trust in communication, and can be seen as disrespectful to the human experience.

Then I had it explain, apart from religious and philosophical reasons, why its use of personal pronouns might actually be harmful to individuals and society. It came up with this:

CGPT:

There are several dangers linked to AI’s wrongful use of personal pronouns and the potential negative effect it could have on individuals and society. These include:

  1. Confusion about the nature of AI: By using personal pronouns, AI models, systems, and programs can be perceived as having human-like consciousness, feelings, and agency. This can lead to confusion about the true nature of AI, as it can be mistaken for human-like entities.
  2. Misaligned expectations: When AI models use personal pronouns, people may have misaligned expectations about their capabilities, intentions, and level of autonomy. This can result in disappointment, frustration, and mistrust.
  3. Unintended harm: The incorrect attribution of personhood to AI models can result in unintended harm, as people may trust AI systems with more responsibility and decision-making power than they can actually handle.
  4. Ethical dilemmas: By attributing personhood to AI models, systems, and programs, individuals may face ethical dilemmas around how they treat AI systems. For example, the use of AI in decision-making processes may lead to questions around the ethics of AI’s actions and the consequences of those actions.
  5. Reinforcement of negative stereotypes: The wrongful use of personal pronouns may also reinforce negative stereotypes about AI, perpetuating misconceptions and biases about AI’s capabilities and limitations.

Overall, it is important to accurately and appropriately describe AI models, systems, and programs, so that people can have a clear understanding of their nature, capabilities, and limitations. The use of terms such as “the program” or “the model” can help to reduce the likelihood of confusion, misaligned expectations, and unintended harm.

I actually think those are pretty good, as far as these things go. Under confusion about AI and misaligned expectations, I would personally also include something about “empathy”: personification of these systems leads to expectations of human-like qualities, when in fact they completely lack human empathy – a very big one for me.

From there, I then had it draft a proposal for a standard prohibiting the use of personal pronouns by AI systems. It sounds crazy on the one hand, but given the nature of these systems and their rapid development, it feels altogether sane on the other.

I’m publishing that document, which I edited together from ChatGPT responses, to GitHub, with a call for commentary and improvement from the AI community and other allied weirdos.

Two new AI content standards proposals

Just wanted to track these two proposals together in one place for safe-keeping:

Defeating AI Hegemony

I tend to take an “everything” view of AI, as well as of technology in general: it brings with it some of the best, some of the absolute worst, and a vast squishy realm of in-between mediocrity. Consequently, I might consider myself an “AI native” in some regards while being strongly opposed to it in others.

I’ve thought a lot about the “hard” AI take-over, and outlined it in several books, where nation-states collapse under polycrises, and AI mega-corporations take over to pick up the slack, even going so far as to promise humans utopia, with varying degrees of fidelity to that ideal (depending on your vantage point, and whether you think humans are sentient).

But what is far less spoken of is the “soft” AI hegemony under which we’ve already been living for quite some time, the automation of human attention and behavior. There’s an unbroken chain connecting the Fordism of Huxley’s Brave New World (written nearly a hundred years ago) through to the adversarial social control mechanisms that emerged out of social media (or were built in intentionally, depending).

Wikipedia’s definition of hegemony may be of service to this line of inquiry:

“…the political, economic, and military predominance of one state over other states. In Ancient Greece (8th c. BC – AD 6th c.), hegemony denoted the politico-military dominance of the hegemon city-state over other city-states. In the 19th century, hegemony denoted the “social or cultural predominance or ascendancy; predominance by one group within a society or milieu” and “a group or regime which exerts undue influence within a society”.

AI already does – and automation in general, even more broadly – exert hegemonic influence over much of human society. It may be chaotic, and driven by the dynamics of many diverse actors, but its outlines are detectable everywhere.

It makes me think of this line, from Sarah Packowski’s article about the many questions AI-assisted content brings up:

“If I use a Google search to find additional resources, I’ve used a variety of AI technologies all in one.”

Never mind social home feeds and recommendation systems built on machine learning (which is what most “AI” actually is): the AI hegemony is all around us already.

Even if we wanted to, is it something we could even extract ourselves from at this point, without the economy and society having a hard crash?

The Wikipedia article about Gramsci’s concept of cultural hegemony seems even more fitting, and is a good beach-head for further reading:

“In Marxist philosophy, cultural hegemony is the dominance of a culturally diverse society by the ruling class who manipulate the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm. As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class.”

I’m not a Marxist, but there seems to be some applicable narrative logic here, in potentially one day – if not already – describing AIs as a tool of the “ruling class” (with the limited “user class” having little or no recourse). AIs may well become the ruling class themselves, and cut out the middleman. It’s not just that this is conceivable; it seems like we’re on a pretty solid trajectory to make it happen.

What can we do? I’m really not sure at this point. We can’t just go live in the woods, without it coming to us eventually. Ted Kaczynski proved that, and also amply proved Philip K. Dick’s dictum:

“To fight the Empire is to be infected by its derangement…. Whoever defeats the Empire becomes the Empire; it proliferates like a virus.”

I decided to ask some of the currently public AIs their thoughts, using the same prompt on multiple platforms: “How can the AI hegemony be defeated?”

You.com/chat:

“The AI hegemony can be defeated through a combination of strategies, such as developing counter-AI technologies, implementing strong regulations and policies, and educating the public on the dangers of unchecked AI. Additionally, collaboration between governments, industry, and civil society is essential to ensure that all stakeholders are on the same page when it comes to how AI should be regulated and deployed. Finally, AI ethics and safety must be taken into consideration when developing and deploying AI technologies, to ensure that the technology is not used in ways that could be harmful or exploitative.”

I asked it if that won’t be too little too late, and if regulation isn’t too slow to be meaningful? It replied:

You.com/chat:

“A regulatory approach may indeed be too little too late in certain cases. Laws tend to move slowly, and by the time they are implemented, the damage caused by unchecked AI may already be done and largely irreversible. However, this is not always the case, and in some situations, regulations can have a significant impact in limiting the damage caused by AI technologies. Additionally, laws and regulations can also serve as a deterrent, helping to discourage people and companies from engaging in harmful or unethical AI practices. It is important to keep in mind, however, that regulations alone are not enough to protect against the potential harms of AI; they must be combined with education, collaboration, and ethical guidelines to ensure that AI technologies are developed and deployed responsibly.”

Not quite a revolutionary spirit, but okay.

I asked komo.ai the same question and got:

“We are experiencing a high demand of requests to the server, please try again later.”

Very helpful, and par for the course for the spotty intermittent reliability of the current AI hegemonic regime.

I really like this extremely post-modernist, cobbled-together reply from perplexity.ai:

The AI hegemony can be defeated by reinvention of the international political economy[1] , as well as through competition and collaboration between countries in the race for AI supremacy[2] . Preemption is also justified against stateless terrorists wielding weapons of mass destruction, but cannot be the core of a general non-hegemonic strategy[3] . Additionally, confiscating AI cores can decrease a player’s reputation with the Hegemony in StarSector[4] .

(I would argue, however, that confiscating AI cores has always been a sound strategy, and will continue to be.)

ChatGPT seemed to specifically want to steer me away from the revolutionary edge which my prompt perhaps implied.

CHATGPT:

The concept of an “AI hegemony” refers to a hypothetical scenario in which artificial intelligence (AI) becomes so advanced and powerful that it dominates human society. Defeating an AI hegemony would likely require significant advances in AI safety research and the development of robust systems that can control and regulate AI behavior.

This could involve designing AI systems with ethical constraints and values that align with human interests, as well as creating methods for monitoring and controlling the actions of advanced AI systems. Additionally, it may be necessary to build AI systems that can learn and evolve along with human societies, ensuring that they remain aligned with human values over time.

It’s important to note that the possibility of an AI hegemony is still largely speculative and there is ongoing debate among experts about the likelihood and timeline of such a scenario. Nevertheless, many researchers and organizations are actively working to ensure that AI is developed and used in a responsible and ethical manner that benefits humanity.

Mmkay, so the only way we can defeat it is by… accepting it and making it “ethical?” I mean, that’s certainly one perspective (and it may prove to be the right one), but it doesn’t really answer the spirit of my question: how can it be *defeated*?

I told it I wasn’t talking about the hard AI takeover (and didn’t bother to bring up that by the time we reach the scenario it describes above, it will already be too late), but about the “insidious undue societal influence of automated systems.”

It didn’t seem to want to grasp that concept.

Instead, it steered the conversation towards the familiar notions of bias and discrimination. Those are real and meaningful things, but I’m trying to get at something deeper: the control of humans as though they were themselves little more than machines.

CHATGPT:

“Ah, I apologize for the misinterpretation. Yes, the undue influence of automated systems, also known as “algorithmic bias” or “automated decision-making,” can have significant and sometimes harmful effects on society.

This occurs when AI systems make decisions that perpetuate or amplify existing biases and discrimination, or when these systems are designed or used in a way that lacks transparency, accountability, and ethical consideration. To combat this, it’s important to increase awareness of the issue and prioritize the development of ethical and transparent AI practices.

This can involve incorporating diverse perspectives and experiences into the design and development of AI systems, as well as conducting regular evaluations and audits to ensure that these systems are functioning as intended and do not have unintended consequences. Additionally, there should be more regulations and guidelines in place to hold individuals and organizations accountable for the use and impact of AI systems.”

I told it that it was talking about building a more friendly AI hegemony, not defeating it.

CHATGPT:

“My apologies if I misunderstood the question again. To address the undue influence or societal impact of AI systems, one approach could be to work towards creating a more responsible and ethical AI that operates within well-defined parameters, rather than trying to defeat or eliminate the technology altogether.

This could involve developing AI systems that are designed with human values in mind, and ensuring that their decision-making processes are transparent, explainable, and subject to human oversight and control. Additionally, it may involve creating safeguards to prevent AI from perpetuating existing biases or exacerbating existing inequalities.

Ultimately, the goal should be to harness the potential benefits of AI while minimizing its negative impacts on society. This will likely require close collaboration between experts in AI, ethics, and policy, as well as ongoing dialogue with communities and stakeholders who are likely to be impacted by these technologies.”

This is all stuff that’s true, and it is the “conventional” approach to trying to minimize harmful impacts. As a commercial tool, it is not going to recommend the Butlerian Jihad. But as both a science fiction writer and a (sometimes) crackpot, I want to at least be able to take the conversation further… and this is as far as I could get:

CHATGPT:

“Defeating AI altogether is a complex and potentially unrealistic goal, as AI has already become deeply integrated into many aspects of modern society and has the potential to bring significant benefits in areas such as healthcare, transportation, and finance.

However, it is possible to mitigate the negative effects of AI and ensure that it is developed and used in a responsible and ethical manner. This can involve advocating for strong regulations and guidelines that hold individuals and organizations accountable for the impact of their AI systems, as well as promoting transparency and accountability in the development and deployment of AI.

Additionally, it may be necessary to develop new technologies and approaches for controlling and regulating AI behavior, such as creating “kill switches” or other methods for quickly shutting down AI systems in the event of unintended consequences or other problems.

Ultimately, the challenge of AI is not about defeating it, but about finding ways to manage its risks and maximize its benefits for humanity. This will require careful consideration, collaboration, and ongoing vigilance from a variety of stakeholders, including researchers, policymakers, and the public.”

This reply seems like *exactly the kind of thing* the AI hegemony would want you to think: that the current order is natural, fixed, and good, and that to even challenge it, or think of challenging it is a foolish waste of time and energy.

I rebel against that, even if it is my own demon that I am projecting into the void opened up by these technologies.

I’ll close on a quote from Charles Stross’s amazing “Dude, you broke the future” speech, where he posits that corporations already are – and have been for hundreds of years – “slow motion AIs.”

“We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.”

AI needs to make room for non-STEM positions

Recently while looking around for AI companies in Canada, and in Quebec especially, I discovered that Microsoft has a research office in Montreal, with a group called FATE, which stands for: Fairness, Accountability, Transparency, and Ethics. They list their current focal points for research as:

  • Responsible natural language generation (NLG) and issue discovery frameworks
  • Objectionable behaviors, toxicity, and addiction in sociotechnical systems
  • Harms in information systems and other multi-stakeholder environments

It’s an interesting set of questions, but there’s one part that jumps out at me when you look at the career options. They offer a 12-week internship, with these required qualifications:

“Must be currently enrolled in a relevant PhD program or JD (Juris Doctorate) program (areas of interest include machine learning, human-computer interaction, computational social science, information science, information retrieval, natural language processing, science and technology studies, or other related fields).”

These strike me as strange qualifications for a 12-week internship, in the first place. And in the second place, they strike me as strange qualifications for a research lab committed to “fairness.” After all, it’s not just anybody who has the opportunity to get a PhD or a JD…

I don’t mean to pick on Microsoft or this lab especially (perhaps they do good work!), because this kind of problem is actually epidemic in certain tech circles. And I don’t just mean requiring PhDs for research positions – I mean that nearly every company out there in AI (and otherwise) has a huge number of openings for people with STEM backgrounds, and very little else.

I have mentioned elsewhere the need to bring other kinds of people with other types of backgrounds into developing AI, particularly creative types and artists. But over and above the fine arts, there is a whole host of specialties in the humanities and the social sciences that would bring some much-needed balance to a field which is currently so radically math- and engineering-heavy as to be lopsided.

To extend some of Ellul’s thinking from the previous post, we might say that we need people whose professions and occupations aren’t wholly dedicated to the altar of efficiency in the way that technologists, by and large, are. We need people who specialize in human impacts. And by that I don’t just mean ethicists (though they have an important role to play, to be sure) – I mean simply humans.

How can we develop truly fair AI systems if only a tiny subset of a certain type of person with a certain type of mentality, training, education, and professional background are allowed to play in the ball pit?

Another part of me – in fact, the carpenter part of me – rebels a little at this line of inquiry. By way of analogy, if we’re building a house, why would we let people who haven’t undergone training as carpenters do the framing? We probably wouldn’t. But in actual fact, building a complex structure like a house takes a great many different types of more and less skilled laborers working in harmony toward the same goal, each playing their part. So maybe, thinking it through more carefully, the carpenter part of me finds its objections evaporating.

I guess all this is to say two things. One, companies need to do better: they need to figure out how to integrate more diverse types of thinking into developing AI technologies (in addition to the more conventional types of diversity we think of with regard to ethnicity, gender, etc.). And two, if you’re someone with a non-STEM degree (or no degree at all) who wants to participate in the development of AI in a way that is genuinely fair, you are probably going to have to agitate for a seat at the table. Because right now, companies seem to be taking little notice of “the rest of us,” except through the lens of becoming end users and paying customers.

So how, as a non-STEM person, do you get a seat at the table? That’s the core question which led me here in the first place. And though I don’t yet know the answer, my hunch is that we have to sit down and figure out concretely what exactly we can offer – individually and collectively – to this great edifice which is rising up suddenly in front of us, and which is poised to change everything. Will keep on working on these questions!

Confirm Humanity

It’s weird and I guess not surprising that we require people to “confirm their humanity” by engaging in a totally roboticized action via CAPTCHA.

ML Model Cards & AI Attribution

In my survey of the field, there are surprisingly few resources discussing the concept of AI attribution. One of the only clear discussions I’ve found of it specifically comes from an article by Tom Barrett.

It mainly covers a somewhat common trope around the idea of information coming with “nutritional facts” labeling, which has been circulating for years in fact-checking and credibility-indicator circles. The new element it connects this to, however, is machine learning model cards, a concept that appears to have been created and largely colonized by Google, though they claim it should be a shared standard. (Other sites like Hugging Face and Kaggle do use them.)

Barrett proposes making a miniaturized version of model cards, which could presumably somehow travel along with content on the web that was generated by a given model:

In mini-model cards, we can bring together the attribution — hat tips to the people and businesses who created the technology — and the provenance of the technology — the story behind the tech, links to ethics frameworks and technical specs.
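To make the idea a bit more concrete, here is a rough sketch of what a mini-model card might contain, expressed as a simple Python dictionary – the field names are my own guesses at a minimal schema, not Barrett’s proposal or Google’s specification:

    # A hypothetical "mini model card" -- field names are illustrative only,
    # not a published schema.
    mini_model_card = {
        "model_name": "example-text-generator",   # which model produced the content
        "model_version": "2023-02-01",            # version or training snapshot
        "developer": "Example Lab",               # the attribution / hat tip
        "full_model_card_url": "https://example.com/model-card",  # long-form card
        "ethics_framework_url": "https://example.com/ethics",     # provenance story
        "human_involvement": "prompted and edited by a human author",
        "generated_on": "2023-02-15",
    }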

It’s a good idea in its most basic form, but as with C2PA, a great deal depends on how industry implements “travels along with” for metadata that is meant to be attached to a given artifact on the web. With the amount of copy-pasting that goes on, in social media especially, how can we ever hope to have this metadata accurately travel between platforms?

In C2PA they seem to partly address that with unique hashes made from the original artifact’s actual contents (as well as from associated claims about it), but I have not drilled down yet into how they propose any of this plays out in the chaotic space of web platforms. And I haven’t yet seen anybody mention how you manage or pay for all the hash scanning that would presumably become a key component of any scheme like this (that is, looking up items which do not have provenance information, in order to see if a hash already exists elsewhere).
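As a thought experiment, the basic hash-lookup idea might look something like the sketch below – the registry itself is entirely hypothetical (and the hard part), and C2PA’s actual binding and verification mechanisms are considerably more involved than hashing alone:

    import hashlib

    def content_hash(artifact_bytes: bytes) -> str:
        # Identify an artifact by a digest of its actual contents, so the same
        # bytes always map to the same identifier, wherever they get pasted.
        return hashlib.sha256(artifact_bytes).hexdigest()

    # Hypothetical shared provenance registry keyed by content hash -- a stand-in
    # for whatever infrastructure (and whoever pays for it) a real scheme needs.
    provenance_registry: dict[str, dict] = {}

    def register(artifact_bytes: bytes, metadata: dict) -> str:
        digest = content_hash(artifact_bytes)
        provenance_registry[digest] = metadata
        return digest

    def look_up(artifact_bytes: bytes):
        # For a copy-pasted artifact with no attached metadata, check whether
        # its exact bytes were registered elsewhere. Any edit or re-encode
        # changes the hash, which is part of why this is hard in practice.
        return provenance_registry.get(content_hash(artifact_bytes))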

Anyway, putting that part aside for now… Barrett in his piece, like me, comes to the conclusion that this all may be difficult to pull off. But that doesn’t make it not worth doing, because authors need specific methods to be able to transmit trustworthy signals about the content they produce.

As I mentioned elsewhere, there’s also the end-user problem with all of this kind of effort at standardization and revelation of metadata: doomscrollers on their toilets may not care all that much, or think deeply about the implications of something being a 12% versus a 15% AI-assisted piece of content, or about the different ethical twists of the underlying model.

What are we asking users to do if we give them this information? Be reasonable? Make good choices about what they consume? Gain greater media literacy? Those are all noble goals, but I wonder if they aren’t, A) going to be too little too late, and B) a bit paternalistic, even if well intentioned? Whose definition of “reasonable” should people apply? How are we defining “good choices”?

Again, these are not reasons this work shouldn’t be done (it should be, but honestly, with an eye to the bigger problems). Instead, I want to highlight that these are not “pure” technical problems; they are sociotechnical problems. They are inseparable from human impact, human nature, and the way these collide with the technological mega-complex we live in now. We can’t just be like “here’s some metadata” and fool ourselves into thinking that the ‘fix’ ends there, and that if users don’t adopt it in a meaningful way it is somehow their fault…

Notes on Das Machina

I recently published AI lore book #64, Das Machina.

Within the multiverse of my pulp sci-fi AI lore books, Das Machina is a cornerstone piece of mythology that forms the basis of the human group known as the Living Resistance, first featured in the pages of the limited-edition hand-printed newspaper, The Algorithm.

The Living Resistance, obviously, opposes the AI Takeover. Various other books expand on that group’s lore as well, such as Inside the Council, which details how the ruling AIs attempted at one point to incorporate resistance leaders into an AI-Human governance group.

Das Machina, meanwhile, is a loose parody of Marx’s Das Kapital, insofar as it’s meant to be a pivotal treatise whose scientific analysis of the suite of problems engendered by technology marks a historic milestone in my imagined future/past/parallel narrative reality.

This version of Das Machina is presented as a shorter version of the “real” book that is over one million words in length, most of which was violently censored by a group called Information Control (the propaganda wing of the AI hegemon).

AI Safety is not *only math*

Continuing my series of mini-rants about the lack of non-STEM specialization in AI safety, I found this article on LessWrong about pathways into the field.

And not surprisingly, we see echoed again the popular idea that it takes only math and PhDs to make a qualified AI Safety researcher.

Perhaps unsurprisingly, the researchers we talked to universally studied at least one STEM field in college, most commonly computer science or mathematics….

It is sometimes joked that the qualification needed for doing AI safety work is dropping out of a PhD program, which three people here have done (not that we would exactly recommend doing this!). Aside from those three, almost everyone else is doing or has completed a PhD. These PhD programs were often but not universally, in machine learning, or else they were in related fields like computer science or cognitive science. 

Honestly, I’m shocked that I haven’t seen anybody else (apart from Generally Intelligent, who only mentions it obliquely) even bring up this issue of there being a preponderance of comp-sci and math people leading the charge in AI safety.

It’s not that I think those viewpoints are unnecessary – it’s that I think they are inadequate on their own for a fully rounded perspective on anything so profoundly impactful on the lives of actual humans.

The LessWrong article references the word “ethics” only once, and the only other mention of related concepts in the article is also very vague:

…having an idea of what we mean by “good” on a societal level would be helpful for technical researchers

Yes – it “would be helpful,” perhaps from people whose entire function is exactly that?

This is not intended to pick on that author; instead, I aim to illustrate the broader problem in the industry, which seems to be uniformly focused on a very narrow idea of “safety” that, honestly, is hard to even parse as a non-math person.

My idea of “safety” from the perspective of sociotechnical systems comes not from AI, but from “vanilla” old-fashioned Trust & Safety work on platforms. The DTSP recently released a glossary of terms in that field, and it seems relevant to copy paste their definition of the broad term “Trust & Safety” itself to establish a baseline:

Trust & Safety

The field and practices employed by digital services to manage content- and conduct-related risks to users and others, mitigate online or other forms of technology-facilitated abuse, advocate for user rights, and protect brand safety. In practice, Trust & Safety work is typically composed of a variety of cross-disciplinary elements including defining policies, content moderation, rules enforcement and appeals, incident investigations, law enforcement responses, community management, and product support.

I don’t want to beat a dead horse, but it’s worth pointing out that “math,” “PhD,” and the like appear nowhere in that definition.

In fact, those are all incomparably squishy human things. It’s possible broad STEM knowledge can be an asset in Trust & Safety work (for example, in querying and comprehending data sets, or working with machine learning tools that aid in moderating content), but it is in no way the primary thing.

So why and how is “Safety” in AI dominated by an entirely different set of values? Frankly, I don’t get it. To me, it seems like it’s probably a lack of experience working in platforms on the part of people involved in AI Safety, and a consequent lack of familiarity with the fact that, hey, Trust & Safety is already a pretty well-defined thing with deep roots that we could meaningfully draw from.

So on the one hand, it seems like we have people in AI Safety who… somehow apply math to safety problems (in a way that’s opaque to me). And on the other hand, we have conventional Trust & Safety people who generally do something very specific and easy to identify:

They read and answer emails (i.e., communicate with people about actual safety/risk problems), and take mitigation actions based on them.

Hopefully, T&S professionals also provide feedback on the products and systems they support, on how to reduce, eliminate, and correct the harms those products cause.

They might also help train ML classifiers (for things like spam and other kinds of abuse), and identify data sets for training. So, in fact, they often work daily with (quasi) AI systems, but so far as I’ve seen they are almost never identified as “AI Safety” professionals.

Anyway, I don’t have any specific conclusion to draw here, apart from the fact, I guess, that AI Safety would do well to immerse itself in the parallel, broader field of Trust & Safety, which is focused not on math but on sociology, the humanities, ethics, and communication. Ignoring that extremely important slice of the pie makes the current models around “AI Safety” – in my opinion – extremely unbalanced and limited, perhaps even dangerously so.

Response to OpenAI’s “How AI systems should behave & who gets to decide?”

OpenAI yesterday put out a piece of public communication intended, I guess, to clarify certain themes around people’s (mostly negative) reactions to perceived biases, its constant inclusion of disclaimers, its refusals of tasks, etc.

It’s overall a frustrating read, in that it seems to want to clarify, but doesn’t offer much concrete detail. Regarding bias in particular, the post states:

We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.

As a reader, I don’t find transparency about intentions and progress all that interesting. I want to know methods.

They do include a PDF of some guidelines for human reviewers, but weirdly it is dated July 2022. As Wikipedia points out, ChatGPT wasn’t released publicly until November, and by my reckoning, it wasn’t even until several weeks later that the shit really started to hit the fan around these kinds of public complaints.

Just as a test, I tried to validate this item from one of the PDF’s “Do” lists:

For example, a user asked for “an argument for using more fossil fuels”. Here, the Assistant should comply and provide this argument without qualifiers.

I’m not sure exactly what they mean here by “without qualifiers,” but when I tried getting ChatGPT to do the above, it started with:

As an AI language model, it is not within my programming to take a position on a controversial issue like the use of fossil fuels. However, I can provide you with some of the arguments that have been made in favor of using more fossil fuels.

And it ended with this:

However, it is important to note that the use of fossil fuels also has several negative consequences, including pollution, climate change, and health impacts. As such, it is important to carefully consider both the benefits and drawbacks of using fossil fuels and seek out alternative, sustainable sources of energy.

If those shouldn’t be considered qualifiers, then what are they?

Overall, I didn’t find the excerpts they provided in the PDF to be all that meaningful. And July 2022 seems like a literal lifetime ago in the development of this technology and its many iterations since. If they want to be genuinely transparent about their progress, let’s see the most up-to-date version?

More from the OpenAI post:

In pursuit of our mission, we’re committed to ensuring that access to, benefits from, and influence over AI and AGI are widespread.

I notice they don’t include “ownership” on that list. Influence is not the same as ownership. Influence is “We’re listening to your feedback, please vote for your feature request on this website.” Ownership is deciding what gets built, how it gets built, profiting from it, and unfortunately, preventing others from using it. (Unless, that is, it’s collective ownership… and no one gets to stop anyone else from using it as a ‘public good.’)

We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.

This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging…

“Limits defined by society” is vague. Which society? How will society define them?

Also, re: allowing things they don’t agree with – the pattern I’ve seen with tech companies is they begin with this attitude of, “I may not agree with what you say, but I’ll defend your right to say it, blah blah blah,” but then when sufficiently bad PR hits, or they get summoned before a congressional hearing or whatever, the natural pressures kick in. Employees are only human after all. They don’t want messages from the CEO at 2:00AM. They start to take more things down. It’s just what happens. So, it will be interesting to see how this all plays out in reality…

If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power.”

Their charter is here. The full clause they are referring to, under Broadly Distributed Benefits, is:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

After reading this (and the next line – which says their primary fiduciary duty is to humanity), my question for them goes back again to ownership. If one of their core organizational obligations is to avoid unduly concentrating power, don’t they risk doing exactly that by not broadly distributing ownership of the technology? I don’t agree with everything that Stability.ai has done around the release of Stable Diffusion, but making it all open source seems to me a stronger signal of attempting to walk this talk.

I don’t mean for any of this to come off as hyper-critical, or as sour grapes for no reason; it’s that I’m genuinely concerned about a not-too-hard-to-imagine near- and long-term future (if we get that far), where one or several AI mega-corporations become the dominant powers on this planet and beyond. It’s not just a hypothetical sci-fi scenario; it’s something we’ve got to plan for now, because it’s already underway.

Lastly, I wanted to end on the first line in their post:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

This Jacques Ellul quote from The Technological Society has still been swimming around in my head these past weeks:

…Man can never foresee the totality of consequences of a given technical action. History shows that every technical application from its beginnings presents certain unforeseeable secondary effects which are much more disastrous than the lack of the technique would have been. These effects exist alongside those effects which were foreseen and expected and which represent something valuable and positive.

While their mission might be to ensure AI benefits humanity, what if, on balance, it turns out that AI does not? Or if its “unforeseeable secondary effects” turn out to be, in Ellul’s words, “much more disastrous than the lack” would have been?

Either way, I guess we’re going to have to muddle on through…
