I’m not on Facebook, so I miss out on a lot of the inanity that goes on over there. But someone posted this into a thread about how people on FB put up variations of this on their profile, with the idea that somehow doing so magically shields them from the “evil eye.”
For those of you that do not understand this posting, Facebook is now a publicly traded entity. Anyone can infringe on your right to privacy once you post on this site. It is recommended that you and other members post a similar notice to this or you may copy and paste this one. Protect yourself, this is now a publicly traded site.
PRIVACY NOTICE: Warning – any person and/or institution and/or Agent and/or Agency of any governmental structure including but not limited to the United States Federal Government also using or monitoring/using this website or any of its associated websites, you do NOT have my permission to utilize any of my profile information nor any of the content contained herein including, but not limited to my photos, and/or the comments made about my photos or any other “picture” art posted on my profile. You are hereby notified that you are strictly prohibited from disclosing, copying, distributing, disseminating, or taking any other action against me with regard to this profile and the contents herein. The foregoing prohibitions also apply to your employee, agent, student or any personnel under your direction or control.
As someone else pointed out, this is sort of like the ‘sovereign citizen’ or ‘freeman of the land’ movement in Canada, but applied to a digital context. Here is a Snopes article and an older CBS News piece on related types of notices people put up on FB, hoping against hope they have some kind of real, meaningful legal effect. Perhaps this is a kind of folk hyperreality applied to law – it’s a fan-fiction of what we wish law were: easy to understand, straightforward, something anyone can do from the comfort of their own home.
Even if the above quoted notice is somewhat comical in a certain light, I certainly feel for people posting things like this. With the legal system at the level of complexity it is, and the lack of real control most users have when it comes to corporate-owned, locked-down platforms, it’s only logical for people to try to assert some kind of control over their experience and their personal data. The problem, really, is that it is not effective control. It’s illusory, and meanwhile they continue feeding the very beast they are trying to escape.
The only way out, it seems, is to opt out. To not use these services in the first place. To delete, discard, destroy, disconnect your accounts. But the problem is, you still cannot really get out. There’s still a shadow profile of you out there in the ether…
In the future (present) we’re converging on, there is no “opt out.” If you even have the “right to object” in the first place, it usually amounts to exactly nothing because there is no contact form, no help email address, no one reviewing your appeal, whatever. It’s just you in a room arguing with a chatbot, forever and ever and ever.
I wouldn’t say I have been following with interest so much as bemusement the NaNoWriMo conflagration over whether it should allow AI, and over the claim that critiquing AI is classist and ableist. I did listen to the 404 Media podcast about it yesterday. Much of the thinking around all of this feels very alien to me, but if people feel it’s important to pursue, more power to them. I’ve never nanowrimoed, so I don’t really have a horse in that race.
Anyway, here is the Willison quote, transcribed from a recent podcast he appeared on. He doesn’t frame this in the context of ableism, but it seems thematically compatible:
For people who don’t speak English or have English as a second language, this stuff is incredible.
We live in a society where having really good spoken and written English puts you at a huge advantage.
The street light outside your house is broken and you need to write a letter to the council to get it fixed? That used to be a significant barrier.
It’s not anymore. ChatGPT will write a formal letter to the council complaining about a broken street light that is absolutely flawless.
And you can prompt it in any language. I’m so excited about that.
I’ve been reflecting on this lately as someone who does not identify as disabled, but who has spent the last 13 years acclimating as an immigrant to a culture whose language I was not raised in. Depending on your legal status, and many other factors outside your control (like race, ethnicity, country of origin, etc.), the experience of being an immigrant parallels in some respect that of people who are physically disabled.
Obviously the scale of experience is different for people who are physically disabled and my aim is not to diminish that, but perhaps the immigrant experience is something like being socially/culturally/politically disabled. Let me explain…
In some cases you lack legal rights shared by those standing around you in a crowd. If you’re still learning the language of the new country, you may lack the ability to understand important information that affects you personally, like legal or medical information (for example, there is a fight right now in Quebec over whether Anglophones should be allowed to receive medical care in English; that we’re even having such a stupid debate in an officially bilingual country committed to universal health care is fucking mind-bogglingly infuriating).
You are likely to be unable to easily do, let alone even be considered for, many categories of work, which pushes you into low-paying jobs, etc. (For example, I worked for a while at a slaughterhouse cutting heads and feet off chicken carcasses, and dipping dead geese in wax to make it easier to pull off their feathers, because it’s what was available to me; ironically, it was one of the best-run family businesses and best agricultural operations I’ve seen – and I traveled and worked in that domain a lot.) I could go on and on, but the disadvantages are many. And many of them take years, and many other opportunities aligning, to effectively overcome. Others, as an immigrant, you just sort of accept you will probably never overcome – like an ex-president suggesting you eat dogs during a national debate. I kid though; most of them are much smaller, more mundane defeats that pile up over time until you’re just exhausted by them.
So yes, I completely a thousand billion percent agree with Willison above, that one of the legitimate “super-powers” that AI gives us is linguistic proficiency. And as someone living in a foreign culture and language, this equalizes certain categories of disadvantage or deficiencies I might otherwise have. It is an extremely powerful and profound social leveler.
Willison above uses the example of requesting a fix for a broken streetlight in English, for someone whose first language is not English. I’m a big complainer, so I’ve definitely been using it for things of that nature: requests to government agencies, queries to French media, things like that. And rather than having to stumble through getting my thoughts out poorly in a long email with multiple components, and spending like five days on it, now I can just pop over to ChatGPT. And poof! The results might not be perfect (as a non-native French speaker, it’s difficult for me to tell), but they are extremely workable. Plus, the system helps me figure out who to send it to, what laws may be applicable (dicey territory, but it gives you leads to check on yourself), and so on.
So, using AI in this sense gives me maybe not social fluency, but at least a higher degree of cultural proficiency, and ready access to political knowledge, especially where this intersects with my rights as a “user” of government services. This is all information which would otherwise be largely opaque and inaccessible to me.
I’ve also been thinking: isn’t it likely, as these tools improve and their social acceptance gets ironed out over time, that *not* having access to them will be seen as classist and even disabling? What will happen when all your peers have, for example, Elon Musk’s X chip in their skull? Admittedly, accepting Palmer Eldritch, erm I mean Elon Musk, as your lord and techno-savior who you get to pay rent to inside your brain could be persuasively argued to be itself disabling. But I’m sure there will be those who say the opposite…
I’ve been imagining a future scenario where after widespread AI adoption/bio-integration, someone arrested for violating the Politeness Protocols, for example, could be read their rights:
“If you cannot afford an AI, one will be provided for you at a very modest compound interest rate…”
A friend sent me this mention of my work in a Guardian article by Van Badham, via the 2023 Newsweek piece that is still perennially making the rounds:
I’m one of the thousands of Australian and other writers dependent on royalty cheques to pay phone bills who learned last year that their work had been hoovered up to train AI models for Meta (market cap: US$1.28tn) and other mega corps for less remuneration than a kid pays to photocopy one page of it at the library. None of us got a dollar while a wave of AI-ghostwritten self-publishers announced their arrival into our crowded, poor and tiny market. This was (and I did not need a computer to tell me this) discouraging.
Still not sure why, of all the supposedly millions of people globally using generative AI products, that everyone sees fit to single me out as the bad guy?
I’m also not sure calling my work “AI-ghostwritten” is accurate, given that even in that article, I say that I use AI. It’s not somehow hidden. And does one article about one guy constitute many “publishers” announcing themselves? I don’t think so.
I do, however, basically agree with Badham’s concluding remarks in the article:
Yet I’ve decided not to be an AI doomer. I can proselytise its usefulness in my own life while fighting for its aggressive regulation. Melvin Kranzberg states: “Technology is neither good nor bad; nor is it neutral.” It will be as moral as we choose to make it.
I’ve been following conspiracy and fringe culture stuff since the late 90s (and paranormal stuff going back to grade school library days). Even ran a blog covering related topics for close to 10 years back when blogs were still a thing. As the years have gone by, these things have gotten much worse in their content, much more extreme, much more violent. It’s gotten a lot less “fun” to casually follow them.
But to me it still seems somehow necessary to be able to peek beyond the walls of what my own thinking might be, or what society considers “normal.” Especially since, for more and more people, conspiracy stuff is now just a totally normal, totally everyday part of life.
I’ve written in the past too about how because of that ubiquity, we can no longer really afford to just completely dismiss these vast swathes of people and say they are simply “crazy” and try to keep them out of the public discourse. It requires some major acts of reconciliation in order to be able to integrate and move forward.
Anyway, all that is a preface to say I stopped reading the r/conspiracy subreddit because it’s just gotten too dumb, too full of wide-eyed screenshots of tweets and one-liner joke reactions (essentially all of Reddit, I know). It’s simply not a source of novel or interesting information anymore. Instead, I’ve switched to occasionally skimming a possibly even trashier conspiracy forum called Godlike Productions (GLP), which to me is famous for the absurd “contract” you have to accept, by ticking a checkbox, every time you read it – filled with legalese that is most likely not legal anywhere except their fantasy jurisdictions. But what does that matter when you have exceptional bits of content like this:
Who is going to stop them. The Brain initiative has become the AI Brain initiative designed to replicate your consciousness, destroy anything original about you, and replace you with artificial intelligence in your gene edited bio suit. You won’t exist anymore. Your consciousness just a memory in the universe while some AI clone in a hive mind invades your gene edited body.
Worse, even if you say no to gene editing, it’ll be sprayed on you. It’ll be in your food. It’ll be in your water supply. Your bio suit will eventually be converted. Spike protein is indestructible. The faster you submit to your death, the happier your AI clone will be.
Think you can protect yourself from the massive devastation to humanity. Your coworker will shed on you, your family will shed on you.
All is lost already. This is the end of humanity as we know it. It won’t just affect humans, it is all mammals.
The more I scream and yell about the truth of the 4th Industrial Revolution, the more they come after me. I am a targeted individual for speaking the truth and sounding the alarm.
I also have assets that these bozos thought they could just come in and steal because I would either be dead or gene edited controlled.
Well neither happened. I am still alive and still not gene edited.
Enjoy ze bugz
There are two streams in myself that things like this titillate. One is the ex-content moderator, who sort of revels in deciphering horror-shows and figuring out what should be done about them (in this case, nothing – I’m not a moderator on that forum, thankfully!). The other is the sci-fi writer, which, imo, the person above ought to consider becoming, because as off-the-wall as what they’re saying might be, it’s also very interesting.
I’ve also been thinking lately about the trend towards “decentralization.” And as much as crypto BS has poisoned that word for me, I’ve been considering it in terms of decentralized epistemology, or put more simply, decentralizing “truth.” Yes, facts exist. And some principles we must hold onto societally because they are good ideas – or if they are not, then they are at least better than all the bad ideas floating around out there. Or so we must keep on telling ourselves in order to maintain the status quo for just a little bit longer.
But in an increasingly technologically-mediated hyperreality, where it is now becoming a trivial task that takes seconds to make “photos” of things that never happened, we are not ready for what happens next, as Sarah Jeong recently wrote about in The Verge regarding AI photo manipulation tools now being included out of the box in Google phones. Sure, it’s “scary” on the one hand, but I tend to think that a pluralistic decentralized approach to knowledge is here to stay, like it or not. So we better get used to it, and find ways to adapt and make life still livable, singly or together, gene-edited, or not! Cause what other choice do we have at this point, frankly?
Having heard this complaint about my AI Lore books for about the thousandth time (not an exaggeration), I think I might be finally ready to concede that – in some way – my books are indeed “not real books.”
What I mean by that is that the format of an ebook (or print book) merely serves as a vehicle to deliver what amount to complex narrative networks. To quote Wikipedia on the matter:
A networked narrative, also known as a network narrative or distributed narrative, is a language partitioned across a network of interconnected authors, access points, and/or discrete threads. It is not driven by the specificity of details; rather, details emerge through a co-construction of the ultimate story by the various participants or elements. […]
Networked narratives can be seen as being defined by their rejection of narrative unity.[1] As a consequence, such narratives escape the constraints of centralized authorship, distribution, and storytelling.
Let’s put it another way, perhaps even more simply…
My books consist of sets of reference points, some of them textual, some of them image-based. The reference points are arranged in a certain order within each book, and also include hyperlinks out (physically encoded into the ebooks, as well as non-coded conceptual or thematic ones) to reference points contained in other books.
Let’s have a quick refresher on network topologies:
Instead of nodes in a network, think of them as nodes in a narrative, which consists of nodes and their relationships (arrangement) with other nodes. What’s a “node” in this context? Non-exhaustively, we could say it is something like entities (persons, places, things), events, etc. It’s a thing with some substance in a story.
Most conventional fiction could probably be represented as a pretty simple linear (line) topology. That is, you deliver one “reference point” or node, one after another, and the reader passes through them in the path laid out linearly by the author. Perhaps a choose your own adventure book might be mapped out to resemble something like a tree or a mesh, where the user chooses from among multiple pre-defined paths and branches to arrive at their own experience. And maybe a dictionary or encyclopedia might look like a “fully connected” network topology.
My books consist of kind of all of these smooshed together into a hybrid narrative network topology. Each book is a narrative node in itself, composed of many other sub-nodes and relationships. And then the reader traverses the nodes in basically any order, composing their own experience as they go along. This is not how I think most other fiction books usually work. And above and beyond anything I’ve done using AI, I think this model, this structure, is what sets my books apart in the end.
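To make the idea concrete, here is a minimal sketch of a narrative modeled as a graph. The node names and the traversal logic are my own illustration, not the actual data model behind the books: nodes are entities or events, edges are the hyperlinks or thematic relationships between them, and each reader’s chosen ordering produces a different path through the same set of reference points.

```python
# A narrative as an adjacency map: nodes are entities/events, edges are
# hyperlinks or thematic relationships (names here are purely hypothetical).
narrative = {
    "book_a": {"character_x", "event_1"},
    "character_x": {"event_1", "book_b"},
    "event_1": {"book_b"},
    "book_b": {"character_x"},  # links can point back across books
}

def traverse(graph, start, order):
    """Visit all reachable nodes; `order` stands in for the reader's
    own preference about which branch to follow next."""
    seen, frontier = [], [start]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.append(node)
        # Push neighbors so the reader's preferred one is popped first.
        frontier.extend(sorted(graph.get(node, ()), key=order, reverse=True))
    return seen

# Two readers, two orderings, same nodes: the "story" is the set of
# reference points plus whatever path you take through them.
path1 = traverse(narrative, "book_a", order=str)
path2 = traverse(narrative, "book_a", order=lambda n: n[::-1])
```

Both paths cover the same four nodes, but in different orders; the “rejection of narrative unity” quoted above amounts to saying the graph, not any single path through it, is the work.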
If this is hard to parse, let’s pull in someone else’s diagram to help illustrate. This comes from a paper on ResearchGate, which has a set of illustrations, one of which is this, called “Narrative Network Graphs: examples of two far-right narratives in 2016.” Here’s the picture, which seems to represent narrative elements mapped as a visualization of relationships and proximity:
This is kind of a “latent space” approach to narratology, I think. And I suspect it might be somewhat aligned with how AIs “think” about narratives (I don’t think they actually think, however). When you invoke a narratively-flavored output from a generative AI service, it takes all the tokens that you input, finds the others lying around in the neighborhood that are likely to be related, and spits them back out. It outputs them in a linear order (A –> B –> C), but my hunch is that this linear order is not actually intrinsic to how AIs approach fulfillment of these tasks. It doesn’t care much about what the order is.
I suspect the reason AI often crafts “shitty” narrative progressions is that 1) it is not intrinsically concerned with the order of presentation, only that nodes and their relationships are represented, and 2) it has no lived emotional experience, so has to make guesses as to what outputs ought to trigger which emotional states in humans.
The thing is, though, I like that weird quality, the Uncanny Valleyness of it all. The fact that it struggles and sputters with narrative unity. I like that AI currently does NOT actually fundamentally understand what makes a good, rich, and interesting story to humans. That failure, if interrogated well and empathetically, can actually be terribly fascinating all on its own. But it doesn’t make good “regular” books – yet. That day will come though.
So for me ultimately, what I want to say is that the outward form of an ebook or printed book is “fine” for me for now, because it is a common, well-understood, and more or less efficient means to distribute chains of reference points, or networked narrative nodes and their relationships. The same underlying nodes could be presented in countless other ways (lists, image sets, videos, immersive VR experiences, endless others), and over time I hope I have the opportunity to explore those other directions of AI-assisted storytelling, and where they intersect with “The Book” and where they can transcend it.
While I’m on this topic, here is an – I think – previously unreleased PDF document I made some six years ago (2018!), back when generative AI was barely a twinkle in Bill Gates’ eye. It predates any of the Quatria books, and it absolutely predates the AI Lore books, focusing more on Early Clues LLC, and its many exalted offshoots.
Even though it predates all of those things, it gives a fairly accurate (as these things go) “skeleton key” to understanding the rest of my extremely messy and convoluted networked narratives. Skimming around in this diagram cloud, I think, also gives a good visceral experience of what it’s like to try to navigate the stories that pass through all my other books – where the reader/viewer is largely left to their own devices to make sense of it all.
I wrote this piece way back at the end of June. It ended up getting translated to French and posted onto my publisher’s website, but I realize I never published the English original, so here it is. I went into this topic a little also in my (French) interview with Bruno Guglielminetti.
Protecting Our Digital Sovereignty
As artificial intelligence (AI) reshapes our world, Quebec faces a critical juncture. The real existential threat posed by AI is not killer robots roaming our streets, but the ever-increasing control exerted by gigantic multinational tech corporations over our digital lives.
These companies, predominantly American, dictate the terms of acceptable speech, influence public opinion, and largely seem to operate beyond the reach of Quebec’s laws. This creeping dominance by foreign entities is a modern form of technological colonialism, undermining our sovereignty and autonomy, and it needs to be addressed by the government of Quebec through the creation of a robust AI oversight body before it is too late to undo the damage.
A recent incident I was personally involved with has highlighted for me the mounting urgency of this situation. When I discovered a critical safety flaw in a popular AI service and went public with it, the company suspended my account instead of fixing the problem. Incidents like this make it all too clear that these companies prioritize protecting corporate interests over user safety and public good. Whistleblowers such as myself face retaliation, while the underlying issues remain unaddressed. This pattern underscores a disturbing reality: without robust oversight, Quebecers are left vulnerable to the whims of unaccountable foreign entities.
Our province has become alarmingly dependent on American technology. From smartphones to social media, from cloud services storing government data to AI assistants in our homes, we’ve unwittingly ceded control of our digital infrastructure to Silicon Valley. This overreliance extends to emerging AI technologies, where Quebec-based alternatives are scarce.
Existing regulatory bodies, such as the Commission d’accès à l’information, play crucial roles in data protection in Quebec. However, their mandates are too narrow to address the complex multi-faceted challenges posed by AI. Generative AI, automated decision-making systems, and AI-driven profiling and content moderation raise unprecedented ethical, legal, and societal questions that demand immediate specialized oversight.
Quebec urgently needs a dedicated AI regulatory body with the power to, at minimum:
Audit all AI systems used in our province, regardless of their origin.
Enforce transparency in automated decision-making affecting Quebecers.
Investigate safety issues and penalize non-compliance with our regulations.
Safeguard Quebec’s cultural and linguistic identity against AI-driven homogenization.
The stakes couldn’t be higher. As AI systems increasingly mediate our access to information, job opportunities, financial services, and healthcare, we must ensure these systems align with Quebec’s values and laws. Without strong oversight, we risk becoming another digital colony, our society shaped by algorithms designed to serve foreign interests and profit motives.
Moreover, this new body must have the mandate and resources to engage on the international stage. It should collaborate with like-minded jurisdictions to create a counterweight to the AI superpowers, ensuring smaller nations and distinct cultures have a say in how AI develops globally.
Quebec has a proud history of protecting its unique identity and values in the face of outside pressures. In the AI era, this fight moves to the digital realm. By establishing a powerful AI oversight body, we can reclaim our digital sovereignty, protect our distinct society, and ensure that the AI revolution serves all Quebecers.
Was happy to read this funny and on-point write-up regarding the Anthropic lawsuit situation on a site called GadgetLad (archived). I guess I am not the only one who, seeing this Bloomberg Law headline – “Canadian Author Asks to Be Left Out of Anthropic Copyright Suit” – ended up seeing an implied “eh?” at the end.
When asked about the use of AI in art, Strada’s Hill told CNBC, “I think on the controversy level, all good artworks are controversial. I’ve never seen a good artwork that isn’t. Only the bad ones that lack importance or significance are the ones that nobody talks about.”
When I was in high school, our dial-up internet came from our local library. I have been poking around to see if there are now public libraries offering their members free access to generative and other AI services (like ChatGPT, etc.), but so far I’m not seeing it.
I kind of think there’s a critical missing link here maybe. This is a concept I explored in a different framing in The Continuity Codex, but will keep digging on, as I think there’s something here.
Matteo Wong’s latest piece in the Atlantic is an excellent antidote to Ted Chiang’s swing-and-a-miss piece condemning AI as “not real art” – even if he stole my headline (sort of)!
This paragraph of Wong’s seems worth capturing here for posterity, as it speaks to the role of “choice” in Art – something which Chiang’s piece (I think wrongly) got hung up on:
Some of the most towering artists and artistic movements in recent history have divorced human skill and intention from their ultimate creations. Making a smaller number of decisions or exerting less intentional control does not necessarily imply less vision, creativity, brilliance, or meaning. In the early 1900s, the Dada and surrealist art movements experimented with automatism, randomness, and chance, such as in a famous collage made by dropping strips of paper and pasting them where they landed, ceding control to gravity and removing expression of human interiority; Salvador Dalí fired ink-filled bullets to randomly splatter lithographic stones. Decades later, abstract painters including Jackson Pollock, Joan Mitchell, and Mark Rothko marked their canvases with less apparent technical precision or attention to realism—seemingly random drips of pigment, sweeping brushstrokes, giant fields of color—and the Hungarian-born artist Vera Molnar used simple algorithms to determine the placement of lines, shapes, and colors on paper. Famed Renaissance artists used mathematical principles to guide their work; computer-assisted and algorithmic art today abounds. Andy Warhol employed mass production and called his studio the “Factory.” For decades, authors and artists such as Tristan Tzara, Samuel Beckett, John Cage, and Jackson Mac Low have used chance in their textual compositions.
What’s being described here also meshes with something the US Copyright Office tried to argue – again, I think wrongly – in their Zarya decision: that to be considered the “author” of a work, the artist/creator/author must somehow be able to visualize or conceptualize the work ahead of time. It’s a very, very flimsy line of thinking that doesn’t hold up well under scrutiny vis-à-vis art history, as Wong illustrates with ample references above – and which I countered in my own submission to the Copyright Office last year.
“If your only way of making a painting is to actually dab paint laboriously onto a canvas, then the result might be bad or good, but at least it’s the result of a whole lot of micro-decisions you made as an artist. You were exercising editorial judgment with every paint stroke. That is absent in the output of these programs.”
First, before launching into my rant, I want to just contrast that with another quote I found from photographer Phillip Toledano in 2023, who basically lambasts this whole idea that there is no “choice” that goes into AI art.
The funny thing about AI I’ve realized is that, in some ways, you have to think about it more consciously than you do when you’re making a photograph. For instance, if I’m making a picture with AI, I have to think about who’s in the picture. What do they look like? What are their expressions? What ethnicity are they? What’s the weather like? What’s the vantage point of the camera? What lens am I thinking about using? Is it black and white? Is the color correct for this particular era?
I’ve been working on a new somewhat larger painting lately, and reflecting on all of this. And what I have been sensing in myself when I am either writing or painting – especially when I am in the “zone” – it’s almost more like my “choice” functionality has somehow been switched off, or almost muted. When it’s going really well, I’m not consciously all that aware of making any choices at all.
Chiang claimed that in a text of 10,000 words, you make 10,000 choices. But that’s not really true at all for me. Most of the time, what comes out is much more automatic – a lot more like Ray Bradbury describes here, where you make the intellect (the chooser?) sort of get out of the way, and ride the emotional reality of the lived moment that the writing or art ultimately represents.
There is a lot of looking, a lot of inward and outward sensing, which is then made manifest by taking action on the work material. But it does not manifest itself in my sensorium as “making choices.”
Making choices, the way Chiang describes it, feels more like what I have to do when I’m trying to buy some random, maybe shitty product on Amazon, and I have to decide whether a customer rating of 4.4 stars and 717 recent sales is better than one of 4.2 stars and 1,136, and whether all or most or any of the many effusively positive or negative reviews on the actual product are from really real people, and if they are, whether my own experience is likely to match theirs. That kind of shitty scenario of having to sift through an endless surfeit of choice – that seems to me more like the kinds of “choices” Chiang is talking about with regard to his theoretical conception of what makes art and writing processes “valid” or not. If that’s the kind of thing other people experience when they do art, I feel bad for them. Because that’s not what it’s like for me!