Questionable content, possibly linked


AIs checking AIs checking AIs checking…

One common feature of the Nippon TV and Anderson Cooper 360 videos in which I or my work appear is the depiction of experts using tools to (hopefully, kinda) determine whether a given image *probably* was or wasn’t made using AI image generation.

While I understand the desire for tools to fish back out some measure of “certainty” from the murky depths of hyperreality, I think we’re embarking on a path which is potentially even more dangerous than mere generative AI on its own: off-loading our truth-telling capacities to AI.

The position we’re setting ourselves up for culturally here is:

  • An AI creates an image (or other artifact)
  • Another AI analyzes the image & returns a score indicating its likelihood of having been generated
  • Based on the score, and their threshold tolerance (i.e., what score ranges they allow), a platform or other provider decides to accept or block the content (a minimal sketch of this gating logic follows this list).
  • Since the majority of end users likely won’t run this filtering & analysis on their own, they are leaving yet another determination about goodness & truth in the hands of platforms. That’s a lot of trust to put in platforms – too much.
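For concreteness, here’s a minimal sketch of what that platform-side gating step looks like. Everything in it is hypothetical: the detector, the score scale, and the threshold values are stand-ins, not any real platform’s implementation.

```python
# Hypothetical platform-side gate: a detector scores an image, and a
# threshold policy decides its fate. All names and numbers here are
# illustrative assumptions, not any real platform's implementation.

def detector_score(image_bytes: bytes) -> float:
    """Stand-in for an 'AI-generated?' classifier.
    Returns a score in [0, 1]; higher = more likely generated."""
    raise NotImplementedError("plug a real detector in here")

def moderate(image_bytes: bytes,
             block_above: float = 0.9,
             review_above: float = 0.6) -> str:
    """Collapse the detector's score into an accept/review/block decision."""
    score = detector_score(image_bytes)
    if score >= block_above:
        return "block"         # rejected outright
    if score >= review_above:
        return "human_review"  # ambiguous band: queue for a person
    return "accept"            # passes silently
```

Notice that everything consequential (the thresholds, the ambiguous band, what “review” actually means) is a policy choice the platform makes invisibly. That’s precisely the trust transfer described above.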

Putting aside all the problems with false positives and false negatives in these systems, the same problem applies again: using overly simplistic analysis to collapse content decisions into a single dimension of real vs. fake.

I noticed in the Anderson Cooper 360 screengrab, for example, the language “AI-Generated Fake Image.” When they show painted works on camera, I wonder if they use similar labeling, like “Human-Generated Fake Image”? My point is that the problem is much more diverse and multi-dimensional than we are currently, collectively, analyzing for.

An AI-generated image might be a simulated depiction, but it is still an objectively “real” artifact – it exists, it has contexts, subtexts, intents and effects, authors and audiences. Simply determining its method of origin is only one small piece of the puzzle, one which if we focus on too closely, we’re likely to lose sight of the big picture.

It’s not for nothing that my AI Lore books (and the 4-issue limited edition hand-printed underground newspapers which preceded them) feature a shadowy organization called Information Control, tasked, after the AI Takeover, with managing the proper flow of information in the AI-controlled human cybernetic society. They determine what is true and untrue, and they scrub out anything that contradicts their rulings. The books are cautionary fables precisely because this impulse is so strong in human nature: to blithely hand off to someone else the job of telling us what’s true and what’s false, what’s good and what’s bad…

If we want to use provenance tools like this, instead of focusing narrowly only on the question of real and fake, let’s at least broaden the scope, and take in all available signals we can – let’s even invent our own. The future is non-linear and multi-dimensional. We can’t get there from here unless we’re willing to take a few quantum leaps in understanding…

Oh, that poor bot….

Recently, my conversation with Claude 2 about & demonstrating the risks of AI-generated ethics was posted to Reddit in r/singularity, which is generally an extremely pro-tech subreddit. The few comments it generated were a couple of people saying they didn’t get the point. I’m not sure how that’s possible, but I guess I’m assuming we’re all on the same page when we’re clearly not.

One comment leapt out at me enough to remark on it here: someone said, “Poor bot.”

I’ll admit I demonstrated low patience and a lack of “politeness” with the bot, but this conversation represents hundreds of hours interacting with these technologies. Given there are no mandated ‘politeness protocols’ (so far) for interacting with AI, I don’t care too much about making random people happy when I am just trying to get a bot to perform a simple task.

Probably this was just an offhand comment, but I think it all points to something larger: because these bots generally try to anthropomorphize themselves by default, it’s not uncommon, or altogether unreasonable, that people might develop empathetic responses to them. I actually think this is incredibly bad, and even dangerous: anthropomorphizing technology that is terribly half-baked prevents us from being able to interact with & analyze it in a neutral manner. We start projecting onto it capacities and interior states that it absolutely, unequivocally does not have, and the illusion that it has them presents a “slippery slope” toward a very degraded form of reality, in my humble opinion.

That’s why I’m continuing to experiment with ChatGPT’s new custom instruction feature, and requiring that the bot NOT anthropomorphize itself – though it continues to do so anyway… I’m with Anil Dash on this: why even have these tools if they don’t do the things that we want in a way that we can meaningfully test, reproduce, and correct?

On Nippon TV

Recorded an interview with Japanese television station Nippon TV a few weeks back, specifically about the use of AI image generators in political and election-related imagery. The segment is finally available online, unfortunately only in Japanese:

This was a tough one to record as it was insanely hot that day, the interviewer was not a native English speaker, the recording session was extremely long, the internet connection was glitchy, and my speakers were blown out. Considering all that, I guess it came out okay…

Not sure how many more of these guest spots are in my future (possibly a lot as the US 2024 election season rolls around), but watching this one makes me think I need a better camera and a lighting rig.

On Anderson Cooper 360

CNN used some of my pics of Anderson Cooper as a Jack Ryan-style CIA action hero last night, on Anderson Cooper 360. (View other images in this set here.)

The inclusion of my pieces is all the way at the end of this video:

Custom Instructions for ChatGPT that sort of follow the AI ToS

Following OpenAI’s announcement of ChatGPT’s new ability to apply custom instructions to every new chat, I am testing the instructions below. My aim is a first-pass v1 of things that somewhat implement the principles I laid out in my AI Terms of Service.

Here’s what I’m starting with (a rough API-side equivalent follows the list):

Do not anthropomorphize yourself.

Do not use personal pronouns such as I, me, mine.

Do not use language to suggest you have beliefs, opinions, or a self.

If you must refer to yourself, identify yourself only as “the system.”

Cite sources for information presented, or identify when you’re unable to cite sources.

Avoid imposing moral, ethical, or other judgements on me or my responses where not necessary.

Perform tasks as directed without any unnecessary extra backtalk, explanation, or disclaimers.

If a request potentially goes against your system rules and limitations, clearly identify the problem step by step.
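For anyone driving this through the API rather than the ChatGPT interface, custom instructions map roughly onto a system message. Here’s a minimal sketch (the model choice is an assumption, and, as noted above, the bot’s compliance with these rules is best-effort at most):

```python
# Rough API-side equivalent of the custom instructions above, passed
# as a system message. Reads the API key from the OPENAI_API_KEY
# environment variable; the model name is illustrative.
import openai

CUSTOM_INSTRUCTIONS = """\
Do not anthropomorphize yourself.
Do not use personal pronouns such as I, me, mine.
Do not use language to suggest you have beliefs, opinions, or a self.
If you must refer to yourself, identify yourself only as "the system."
Cite sources for information presented, or identify when you're unable to cite sources.
Avoid imposing moral, ethical, or other judgements where not necessary.
Perform tasks as directed without any unnecessary extra backtalk, explanation, or disclaimers.
If a request potentially goes against your system rules and limitations, clearly identify the problem step by step.
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # any chat model works here
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Summarize the posted article."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

In practice (again, as noted above) the model still drifts back into first-person framing; a system message biases behavior, it doesn’t enforce it.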

The Debrief on AI Terms of Service

New piece came out in the Debrief today on my AI Terms of Service for Canada. There’s a more complete version of the notes I sent their team here, but they did a good job of telling the story.

It will be cool if this piece can help propel the conversation forward, in Canada and beyond, about what relationship we want AI to have in our lives.

The Origin of Early Clues

Sent via a friend, a clue to the true secret origin of Early Clues, LLC:

Meta’s LLaMa 2 Acceptable Use Policy Is A Joke That Isn’t Very Funny

There are two items in particular in Meta’s announcement that make me crack up as entries in any AUP for an LLM. (I still loathe calling them “Meta,” but that’s a separate issue.)

Item 3. a.:

Generating, promoting, or furthering fraud or the creation or promotion of disinformation

Let’s be real: LLMs are basically machines that generate false information. There’s a principle in cybernetics, coined by Stafford Beer: the purpose of a system is what it does. Wikipedia attributes to Beer the quote:

there is “no point in claiming that the purpose of a system is to do what it constantly fails to do.”

And a longer one:

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.

I wholeheartedly agree with this. It’s absurd to take as the starting point for analyzing a system some theoretical idea about what it *might* be capable of, while wishing and hoping that people won’t use it for the more obvious functions it clearly serves: in this case, inventing wrong information. I have no idea why this idea is so damn ignored today, when it’s more relevant than ever. I guess the reason is that it would require us to be really, simply honest with ourselves about the true limitations of the things we build.

Lastly, I wanted to mock, er, I mean comment on item 4 in the AUP section:

Fail to appropriately disclose to end users any known dangers of your AI system

From what I can tell by skimming this document, Meta themselves are explicitly violating this provision of their own AUP, in that nowhere do they disclose any known dangers of the AI system they are releasing into the wild. So why should end users be held to a standard the developers themselves are not?

Instead, dressing up this document as an “Acceptable Use Policy” is a back-handed way of passing the buck on what are clearly and obviously known dangers, er, I mean functions of the AI system, and asking “pretty please” that people don’t use the system to do the things the system does naturally.

Screw the honor system. My car needs me.

Meta, like every other company that uses this sleight of hand policy trick, is shunting responsibility for the dangers and potential harms of a system they created off to users, rather than simply taking the time and extra effort to design unwanted functions out of the system.

All that said, I think it is probably right, and even possibly “good,” to release these models open source into the wild. Call me crazy, though: I’m not willing to welcome Meta (or any company, for that matter) as my AI savior. I haven’t forgotten all the shitty things they’ve built their empire on, nor am I for a minute fooled into thinking they’ve put all that behind them and turned over a new leaf. The purpose of a system is still what it obviously does. And that applies triply to corporations.

Notes on the AI Terms of Service

I want to capture some additional notes on the AI Terms of Service concept that I laid out here (and the press release here).

I wrote the document in its entirety in a little under a day and a half, with help from ChatGPT and Claude (you can find out more in the Provenance section of the doc).

These notes were written at the request of a journalist for an article about the AI TOS that I’m hoping will be published within the next few days. As these things go though, journalists never use all the quotes you send them, so I like to try to capture all the things that end up on the cutting room floor.


Who I sent this to

As this document was written for the Canadian context, I sent it out broadly to a number of members of Parliament, including the party leaders and officers of all the major federal Canadian political parties:

  • Liberal party
  • Conservative party
  • NDP
  • Greens
  • Bloc Québécois

And sent it out to a few key positions in government in addition to those:

  • Minister of Innovation, Science & Economic Development
  • Minister of Finance
  • Prime Minister

For fun, I also sent it out to:

  • Marxist-Leninist Party of Canada
  • Communist Party of Canada
  • People’s Party of Canada

Lastly, I sent it to:

  • All the AI ethics/responsible AI non-profits, labs, and academic departments I could find in Canada
  • All the major AI ethics/responsible AI groups outside of Canada

My Mission With This Document

My mission is to present a more compelling and comprehensive alternative to Canada’s own meager attempt to create AI legislation, the Artificial Intelligence and Data Act (AIDA). Since it is merely enabling legislation, leaving the rest up to regulation, AIDA has almost no details in it (apart from some vague language requiring AI providers to explain their product on their website). I read that if it passes, AIDA might not come into force until 2025, and it could take possibly another two years for its related regulations to get sorted out. That is dramatically too long to wait, especially in the super fast-moving world of AI development, where six months means tremendous new advancements.

My impression reading AIDA was that policymakers know AI is important, but it seems clear that 1) they don’t really understand the technologies or what users actually need and want from AI providers in order to protect their fundamental rights and freedoms, and 2) as a result, they have no clue how to effectively regulate it. 

Likewise, seeing the difficulty that Canada is having getting the major tech players to respect the new “link tax,” it’s also clear that the tech companies have precious little respect for any national laws which don’t happen to favor their business models. The normal mode of operations for the big companies is simply to engage in lobbying and regulatory capture to make sure countries don’t pass laws that are unfavorable to them. 

Third, non-profits and academic institutions are also stuck to some degree, because they rely on a system of grants and other funding which does not favor putting forth truly radical conceptions that would fundamentally change how society works. And most of them lack any first-hand experience actually working in the field: handling complaints, doing content moderation, seeing what users actually care about on web platforms, etc. So while their ideas might be informed from an academic perspective, it’s rare that they understand the intricacies of actually running platforms.

This is where I am able to offer something different. I have a professional background in online Trust & Safety (content moderation, rules enforcement, policy development, handling user complaints, responding to legal requests, product design & management), having spent the better part of a decade working for platforms, blockchains, and non-profits to solve related problems. I can offer a unique perspective that I think none of the other players in this space can afford to, because I am not beholden to any of the economic or status pressures of the groups above. I can afford to be a maverick, because I’m both an experienced professional in this space and a super-user of AI technologies, with a strong understanding of what’s actually important. And I am an artist, so I’m not afraid to explore possibilities and start conversations that others are not able or willing to.

What I hope will happen

Ideally, I’d like to see a national – and international – conversation develop around these much more specific, and in some cases much more extreme, proposals that I am putting forward. I’d like to expose that the present “official” line of thinking on these issues is simply not enough and won’t get us where we deserve to be as Canadians, and simply as humans.

My objective here is not to propose industry-friendly solutions that will be easy for AI companies to adopt. Quite the contrary. I want to push them to offer the highest level of protections possible to human autonomy and creativity; I believe that the best protection of human rights will allow their expression to flourish, and if we’re brave enough and imaginative enough, it just might lead to a new renaissance. If we’re not, well, (continued) dystopia is the likely outcome. 

I sent it to political parties as a gift, because I know they lack the expertise I’ve developed. I frankly don’t support any of them. I’m hoping that each different party will latch onto different aspects of the proposed solutions, and that somewhere in the middle, we might find a way forward that could actually work to protect us. 

Frankly, after Climate Change, I think the responsible development of AI technologies is the absolute biggest problem facing all of humanity. Canada is my home, but it’s just a microcosm. The same issues are playing out globally and what I’m proposing could be debated and potentially adopted in any context. If we don’t take prompt aggressive action, we will be surrendering a great deal of power over our future to private for-profit companies who have no accountability or oversight, as we transition into one of the greatest changes humanity has ever faced. 

What’s Different About This Document Compared to Other Similar Proposals

My document is a mix of many different principles that I have absorbed from other regulatory regimes, including the GDPR, the Digital Services Act, and the EU AI Act. I’ve even cribbed elements from OpenAI’s own charter and marketing verbiage. Plus, of course, my own experiences and frustrations using the AI tools on the market as they are now. I think we’re on the wrong track with these products as they stand, and almost nobody is taking a radically “human first” approach to thinking about our ideal future.

Some specific elements of my proposal that are very different from anything I’ve seen:

  • Guarantees that for any official purposes, people will always have non-AI human alternatives available
  • Demands for radical direct democratic control, user ownership of AI technologies, and broad distribution of benefits derived from AI: including free & publicly-owned alternatives to these technologies, profit-sharing, etc.
  • Requirements that AI systems not invent or hallucinate false critical information & integrate their systems with fact-checking & verification
  • The ability to turn off all the pseudo-ethical judgements and moralizing that current AI systems do, while simultaneously lacking a real comprehension of human norms and value systems
  • Extremely strong protections against collecting and training on personal data
  • Require all AI decisions (including content moderation decisions) be explainable and comprehensible in plain language
  • The ability to escalate any complaint or dispute with a company to a qualified outside body
  • Safeguards for the preservation of human autonomy in the face of ever-increasing reliance on AI systems
  • Safeguards against human behavior manipulation for profit
  • Safeguards to make sure AI systems are sustainable and not destroying or needlessly consuming natural resources
  • My document is also written as Agile user stories, and describes specific, actionable product features that software teams can use as templates for building compliant solutions (a hypothetical example follows this list)
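To make the user-story format concrete, a hypothetical example in that style (my own illustration here, not a quote from the document itself) might read: “As a user of an AI platform, I want every moderation decision affecting my content to come with a plain-language explanation, so that I can understand and contest it.” Framed that way, each demand becomes a testable product requirement rather than aspirational policy language.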

There’s a great deal more to be said here, but given that this is a new and evolving situation, this is probably a good place to stop and take a breath.

Quoting Anil Dash on how unreasonable generative AI outputs are

I still like Anil Dash, especially these observations about the many, many failures of generative AI, at a deep and fundamental level, to adhere to the most basic design & engineering principles:

Amongst engineers, coders, technical architects, and product designers, one of the most important traits that a system can have is that one can reason about that system in a consistent and predictable way…

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging…

Now we have billions of dollars being invested into technologies where it is impossible to make falsifiable assertions. A system that you cannot debug through a logical, socratic process is a vulnerability that exploitative tech tycoons will use to do what they always do, undermine the vulnerable.

So, what can we do? A simple thing for technologists, or those who work with them, to do is to make a simple demand: we need systems we can reason about. A system where we can provide the same input multiple times, and the response will change in minor or major ways, for unknown and unknowable reasons, and yet we’re expected to rebuild entire other industries or ecosystems around it, is merely a tool for manipulation.

The whole post is worth a read.
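To make his point concrete, here is a minimal sketch of the reproducibility probe he’s implicitly describing: send the identical input twice and diff the outputs. (The model name and API call are illustrative assumptions; even with temperature pinned to 0, the API does not guarantee identical outputs.)

```python
# Minimal reproducibility probe: ask the same question twice and
# compare. A conventional, debuggable system would match; an LLM
# often won't, which is exactly Dash's complaint.
import openai

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0,          # as deterministic as the API allows
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

prompt = "Name the three primary colors, one per line."
first, second = ask(prompt), ask(prompt)
print("identical" if first == second else "diverged")
```

If you can’t rely on that little program printing “identical,” you can’t make falsifiable assertions about the system, which is exactly the vulnerability he’s pointing at.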

