
Examining the AI Elections Accord

A couple of days ago, there was a splash in the news about a number of tech companies signing a new “AI Elections Accord” in Munich, which the BBC reports on here. According to the official press release, more than four billion people will head to the polls in forty countries this year, making it extremely likely that generative AI will have a significant, and potentially damaging, impact on democratic processes globally. The press release further states:

As of today, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.

Microsoft has its own quite lengthy article about these commitments available here, attributed to Brad Smith. From that:

Its goal is straightforward but critical – to combat video, audio, and images that fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders. It is not a partisan initiative or designed to discourage free expression. It aims instead to ensure that voters retain the right to choose who governs them, free of this new type of AI-based manipulation.

One thing I see conspicuously absent from all of these proclamations is any evident recognition that *these AI companies created this problem.* Full stop. It’s obvious that they did it knowingly, releasing these technologies despite not having fully adequate safeguards in place. And now we’re supposed to congratulate them for signing an agreement to – after the fact – put the genie back into the bottle with some very, very weak measures? No applause from me, sorry.

I especially don’t accept this statement from Twitter (from here), possibly the most garbage company in the world today:

Linda Yaccarino, CEO of X said, “In democratic processes around the world, every citizen and company has a responsibility to safeguard free and fair elections, that’s why we must understand the risks AI content could have on the process. X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency.”

Absolutely laughable.

Microsoft’s standalone statement, while more measured and detailed than Twitter’s blurb above, is full of references to the concept of “bad actors,” a term I personally find dated and inadequately descriptive, especially in the world of generative AI. Usually, what “bad actors” really means is something like: people doing things we’d rather they didn’t, for publicity or legal reasons, but which are fully possible due to the design of our technology.

I’m a firm believer in Stafford Beer’s cybernetics axiom, The Purpose of a System Is What It Does. If your system creates nude images without asking, then the purpose of your system is the creation of involuntary nudes. If your system creates election misinformation and disinformation, then the purpose of your system is destroying elections. It *might* have other purposes as well, but we can’t simply draw a line around the uses we disfavor but technically allow, and declare those ones “bad” and these ones “good.” If you don’t want those capabilities in your system, you need to go back to the drawing board and do the hard work of removing those capabilities from your BAD DESIGN, instead of accusing those who use these systems as designed of being “bad actors.”

I certainly hope these companies’ commitment reaches that deep, but the evidence I’ve seen on the ground, as an artist making use of these tools and as a Trust & Safety professional, is that this is rarely the case. It was the same with prior calls and accords around reducing the spread of violent extremist content online, or reducing disinformation. So much of it is just words, just publicity, and just buttressing things these orgs have already touted as solutions, like C2PA/Content Credentials, which are ridiculously easy to defeat.
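To illustrate just how easy: a C2PA manifest travels as metadata alongside the image (JPEG APP11/JUMBF segments), not inside the pixels themselves. Here’s a minimal Python sketch, assuming a hypothetical signed file named signed.jpg, of how completely the provenance evaporates under any re-encode; a screenshot or a messaging app’s compression pass does the same thing with nobody even trying:

```python
# Minimal sketch: defeating Content Credentials without even trying.
# Assumes a hypothetical C2PA-signed JPEG at "signed.jpg"; requires Pillow.
from PIL import Image

img = Image.open("signed.jpg")

# Copy only the pixel data into a brand-new image. Every metadata segment,
# including the C2PA manifest, is left behind.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("laundered.jpg", quality=95)

# A verifier (e.g. c2patool) now reports no manifest at all. The copy isn't
# flagged as tampered with; it simply presents as an ordinary unsigned image.
```

Note what’s *not* happening here: nobody breaks the cryptography, and the signature on the original file stays perfectly valid. The stripped copy just looks like any other unsigned image, and nothing downstream raises an alarm.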

More importantly, the problems presented by generative AI are just the tip of a metaphoric iceberg that is melting and destined to break free into the vast sea of polluted democratic processes globally. I see us plunging further and further into a runaway-train scenario: no matter how hard we apply the brakes, we’re destined for a collision, as the masses that have been set in motion continue on their course to their ultimate conclusion. I still think we should do our best to soften the collision, but I’m honestly no longer hopeful it can be stopped. While we should continue the fight on the front lines, it’s also past time to fortify our fallback positions and brace for impact. ’Cause this shit isn’t going away, no matter the thoughts and prayers (and accords) we throw at it.


2 Comments

  1. Tim B.

    yikes, 2 out of 4 of the responses to the official AI Election Accords website on r/singularity call elections a “waste of time”

    https://old.reddit.com/r/singularity/comments/1asjg1q/a_tech_accord_to_combat_deceptive_use_of_ai_in/

    imo we’re well and truly fucked

  2. Tim B.

i’m also so sick of seeing people trot out “media literacy” as a solution to all these problems. it’s always wheeled out in these meetings, but never seems to amount to a hill of beans IRL:

    https://www.aielectionsaccord.com/uploads/2024/02/A-Tech-Accord-to-Combat-Deceptive-Use-of-AI-in-2024-Elections.FINAL_.pdf

    > “6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content.”

    Good luck in a world where social media simply reinforces whatever people already believe anyway.

