Tim Boucher

Questionable content, possibly linked

Mozilla AI Safety Open Letter

Since I’m into signing open letters now, I just signed this one from Mozilla about the importance of encouraging open AI development.

“The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.”

I marked my job title down as “Full-Time Complainer.”

Notes on The Continuity Codex

The Continuity Codex is volume number 118 in the AI Lore Books series.

This book is, simply put, an AI-assisted roman à clef about what happens when “you know who” comes back into power; in this book, he’s called Hyperion Storm.

More specifically though, this book is the actualization of the sidequest tale first teased in this blog post, and this one, and this one. Namely, what if all public libraries on earth band together to train an AI on the entirety of their mutual collections, making the resulting Codex both fully open, and fully publicly-owned? And what if it could fit on a thumb drive?

For sure, that would drive someone like Hyperion Storm utterly crazy… erm, crazier, anyway… and it just so happens that the global release of this Library AI coincides with Storm’s re-election nightmare. And he subsequently enacts a campaign of brutal bombings and suppression against the League of Earth Libraries, the group responsible for the Codex. Shades of the destruction of the Library of Alexandria permeate this volume…

The tale gets pretty “quantum” and surrealist/smooshy at times, and dissolves narrative unity across a few hundred miles and light years. But don’t let that faze you. (There are definitely some VOMISA-style elements inserted here and there.)

It is intended to pair with a book about 101 volumes back or so, The First Days of Panic, which has a similar narrative slide/drift effect, and depicts roughly the same set of events, though perhaps at a slightly different point in the matrix of their eventual unfolding. If you read this one, definitely read Panic also.

I wrote this book out of frustration and anger because I’m basically positive Storm will “win” again (barring exfiltration to some other plane of existence), and that all the little piddling attempts otherwise along the way won’t change the shitty sweeping course of history’s rising tide something something stupid. “Prove me wrong kids, prove me wrong!”

Here’s a taste of the art:

I’m a library junkie, which is the other reason I made this book. I haven’t had easy local access to an English library in close to a decade though, and that fact eternally bums me out. Of all the things on earth, public libraries in my eyes are one of the absolute best, and everything they represent is worth fighting for.

This book consists of a fair amount of purely human writing, especially at the beginning, followed by a lot of AI thread-spinning across Claude 2, and also the newer open source model offerings available on TextSynth, such as Llama 2, Mistral, Falcon, and GPT-J. I used to use that site a lot in the early volumes, and it’s fun to go back to that style of chunking completions instead of prompt-based chat, because it gives you much weirder and driftier results than you can get out of supposedly more advanced systems like ChatGPT or Claude. Depends what you’re looking for, I guess. I like that with many of those systems, the generated texts often lead you over narrative cliffs without explanation. That’s much better to my weird eyes than the trite artificial corporate wrap-ups baked into Claude & ChatGPT.
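For the curious, the completion-chunking workflow I mean is basically a loop: feed the tail of the draft to a base completion model, take a short chunk back, append it, repeat. Here’s a toy Python sketch of that loop; the `complete()` function is just a stand-in of my own invention, not any real service’s API (a real version would call whatever completion endpoint you’re pointed at):

```python
def complete(prompt, max_tokens=40):
    """Stand-in for a base completion model: returns the next chunk of text.

    A real version would send `prompt` to a completion endpoint and return
    whatever the model drifts into next.
    """
    # Toy "model": returns a canned continuation, purely for illustration.
    return " and the library ship drifted on,"

def spin_thread(seed, chunks=3, window=500):
    """Chain short completions, feeding the tail of the text back in each round."""
    text = seed
    for _ in range(chunks):
        tail = text[-window:]      # only the recent context goes back in
        text += complete(tail)     # append the next chunk, however weird
    return text

story = spin_thread("The Codex fit on a thumb drive,")
print(story)
```

The drift comes from the window: the model only ever sees the recent tail, so earlier context quietly falls away, which is exactly what produces those unexplained narrative cliffs.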

This book uses Midjourney again with a little Dalle3 (and some old school Stable Diffusion, and newer school SDXL), but I think I’m off Midjourney again for a while until they can get their shit straight. The UX has gotten even worse, if that’s possible. And it’s getting expensive to fund my monthly AI habit across all these services. At the same time, I don’t trust OpenAI not to destroy the UX and utility of Dalle3 even further than they already have. Though, I’m not gonna lie, I’m okay that the WorldCoin guy is out. That shit is a travesty. It annoyed me so much, I made a book about it.

Notes on Deliriant

Deliriant is the 117th book in the AI Lore Books series, by Lost Books, a Canadian AI publisher.

The book is composed of around a dozen flash fiction vignettes telling the story of a generation starship called Deliriant, along with a related fictional encyclopedia entry to round out the offering. The ship is hurtling from an unknown past to an unknowable future, and its occupants are a collective of artists tasked with keeping alive their culture, and the bio-technological ark that is the ship.

Occupants of the ship are all fitted with neural lace and a system called the Weave which connects them psychically with one another and with the Deliriant itself, and its super-intelligent AI called Core. The artists of the ship work tirelessly on personal and collective works using the Weave, and a tool called the Mirror, which is able to draw from the collective unconscious memories of all prior inhabitants of the ship, called the Vault.

This world-building ultimately arose out of an idea I had years ago, after watching a great deal of Star Trek, and have been nursing on the back burner ever since: what if we could have starships without captains and without hierarchies? What if the Enterprise were a worker-owned collective?

Image-wise, the book mixes Dalle3 and Midjourney v5.2. I’d let my MJ sub lapse for some months, until I heard about the new Style Tuner, which I thought was worth trying out. Like so many things in MidjourneyLandia, I found the Tuner UI/UX to be pretty much terrible, and very difficult to tune in any kind of direction that made sense to me. I won’t do a full product tear-down here, because who cares. It was an interesting experiment, but I’m not sure it was worth the $30 USD I blew on it for the month. (I’m also sad that they made Dalle3 much suckier after the initial burst of awesome when it came out…)

The brainstorming sessions for the book took place in ChatGPT v4, but then I dumped the contents of that into Claude, which has a bigger context window and, I think, still better fiction-writing capacity. I know how to work with it to get adequate results for these kinds of quick scattershot world-building books. It’s not earth-shattering fiction by any means, but taken as a whole, I think these make for pretty fun and readable ebook products. This book also thematically references another volume, told in a similar style, The Song Drive, as the generation ship Deliriant is equipped with one (a space engine that runs on musical energy inherent in the universe).

Here is the art preview for the book, via Gumroad:

Dystopian: Definition

I’m going through the old notebook from the past year and found this definition of what makes something dystopian, which still seems apt:

“An essential underlying recognition of wrongness, multiplied by the familiarity of accumulated errors, oversights, and unkindnesses endured, passed on, and compounded over time.”

The Shimmering Flame

Been meaning for months to link & quote this excellent article by Amanda Gefter in Nautilus about plant cognition. Here’s one of many good parts:

“Wild cognition,” as Barrett puts it, is more akin to a candle flame than to a computer. “We are ongoing processes resisting the second law of thermodynamics,” she says. We are candles desperately working to re-light ourselves, while entropy does its damnedest to blow us out. Machines are made—one and done—but living things make themselves, and they have to remake themselves so long as they want to keep living.

And the stuff about 4E cognition is a really amazing find for me as well:

From a 4E perspective, minds come before brains. Brains come into the picture when you have multicellular, mobile organisms—not to represent the world or give rise to consciousness, but to forge connections between sensory and motor systems so that the organism can act as a singular whole and move through its environment in ways that keep its flame lit.

“The brain fundamentally is a life regulation organ,” Thompson says. “In that sense, it’s like the heart or the kidney. When you have animal life, it’s crucially dependent for the regulation of the body, its maintenance, and all its behavioral capacities. The brain is facilitating what the organism does. Words like cognition, memory, attention, or consciousness—those words for me are properly applied to the whole organism. It’s the whole organism that’s conscious, not the brain that’s conscious. It’s the whole organism that attends or remembers. The brain makes animal cognition possible, it facilitates and enables it, but it’s not the location of it.”

Here’s a good quick supplementary video about 4E/5E/6E:

And the article quotes a 1991 article by Rodney Brooks:

“Explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.”

This article is great in that it elegantly puts a pin in and deflates a lot of the nonsensical schoolyard arguments about AI being conscious and alive. I won’t try to rehash all of it here…

“The mistake was to think that cognition was in the head,” Calvo says. “It belongs to the relationship between the organism and its environment.”

This all to me is huge, and I need more time to digest and properly process all of this, but it just feels so intuitively right. And also it’s one more argument strongly in favor of doing everything one possibly can to enhance biodiversity, so as to strengthen the resiliency of “mind” affixed to place. Genius loci.

Just wanted to capture one last thought, relative to this idea that life is a candle flame trying to keep itself from blowing out. Namely, Gandalf on the bridge at Khazad-dûm in LOTR, speaking to the Balrog:

“I am a servant of the Secret Fire, wielder of the flame of Anor. You cannot pass. The dark fire will not avail you, flame of Udûn. Go back to the Shadow! You cannot pass.”

Exciting Announcement

(As promised)

I am pleased to announce I have just returned from [very|extremely important] [event|conference|mountain summit] where I spoke with [world leaders|robotoid replicants|fellow thinkfluencers|god-himself] on the importance of [AI|human|worker] [safety|complaining|ignoring issues & moving forward at all costs] and it went very very very [well|awesomely|horribly terribly badly] and we are [excited|lukewarm|completely put off] by the possibility of opening up future [summits|timeshare demos|jail time] to “regular people” from “all walks of life.”

My representative from the LEL assures me that the Continuity Codex project is all systems go. And that I will be summoned bodily to the next Plenary Session where the College of Bards will begin preparations for the Opening Invocation of the Inauguration of the Universal Library. I am excited by this amazing opportunity which we all hope will become another step in the infinite Strophariad across all worlds.

League of Earth Libraries

Following on the heels of my last post about why not engage *all* public libraries to build a public AI, I took the idea and some other bits that have been floating in my headspace and had old CG whip up a quick fake news article supporting the concept:


“LEL Unveils ‘Aletheia’: The Dawn of a New Era in Knowledge Sharing”

In a historic move for the democratization of information, the League of Earth Libraries (LEL) has launched ‘Aletheia’, an advanced open-access AI designed to provide the public with unfettered access to the sum of human knowledge. Named after the Greek goddess of truth, Aletheia represents a significant milestone in the global effort to ensure that education and information are free and available to all.

Aletheia, a powerful artificial intelligence, has been engineered to curate, organize, and disseminate an expansive range of data and scholarly work. From ancient texts to cutting-edge scientific research, Aletheia grants real-time access to a vast digital repository without the traditional barriers of paywalls or subscriptions.

The League of Earth Libraries, a coalition formed by academic institutions, public libraries, and non-profit organizations worldwide, has worked tirelessly to establish a platform that is both user-friendly and expansive in scope. With a mission to “unlock the potential of human collaboration,” the LEL has emphasized that Aletheia is more than just a tool for study and research; it is a catalyst for innovation and a foundation for building a more informed society.

Critics of the proprietary nature of knowledge distribution have lauded the release of Aletheia. The AI’s deployment aligns with increasing calls for transparency and equity in the realm of information technology. By prioritizing the public good over private profit, Aletheia is set to reshape the landscape of learning and information exchange.

As Aletheia becomes integrated into educational systems, research facilities, and homes around the world, the LEL hopes to bridge the information divide and empower individuals with the freedom to learn, explore, and create. With this unprecedented access to knowledge, the possibilities for global progress and understanding are boundless.


Don’t really like the name Aletheia, but letting it stand in this v1. We also landed on the name Continuum Codex, which I rather like for its sci-fi qualities.

The thing I’ve realized about “actual reality” is that outside of things which you can physically get your hands on, manipulate, and modify, the rest of it changes awfully slowly. Often imperceptibly so. So I’ve been thinking, apart from these things that are specifically outside the ‘sphere of the moral purpose’ (or manual purpose, as implied above), why not just… give up? Let go of “actual reality” and all its stupid foibles and inability to change. Why not just put the pedal to the floor on the fictional narratives, and let those become the high-speed open playground that I really need, instead of moaning about how xyz aren’t doing it right? Fuck reality. We’ll make our own. Let everyone else try and keep up.

Who should run public AI? Why not Public Libraries?

In thinking more about this question of who should run a “public option” AI, it seems obvious that the state is likely not going to be the best actor (depending on the state), though they would surely have some role in it. What’s the model then to follow? Trust corporations to benevolently something something? Yeah, good luck with that.

Given their experience in public administration, though, and their commitment to the storage of and universal access to knowledge, what about public libraries?

The first dial-up internet I had as a teenager came from our local library. Why can’t we do the same thing with AI?

There’s no default that says this all has to be run by corporations and national security interests. That might be the arc we’re stuck in at the moment, but I don’t believe that’s aligned with the long arc of time, which is probably the best scale for us to measure AI against in the end…

It might be a hare-brained idea, but it might just be that we need the outsider angles to get through the coming impasse…

Partnership on AI’s Advice for Releasing Open Frontier AI Models: Don’t

I was somewhat interested in PAI’s recent release of deployment guidelines for new foundation models, despite an awareness that this organization doesn’t seem to consider everyone equal partners.

So I went over to check out the section about generating custom guidelines to fit your scenario. I didn’t select anything crazy, just frontier model as type and open access. And the advice it gave me is basically a slap on the wrist that says more or less, in fancier terms, “don’t do that.”

We recommend providers initially err towards staged rollouts and restricted access to establish confidence in risk management for these systems before considering open availability.

These models may possess unprecedented capabilities and modalities not yet sufficiently tested in use, carrying uncertainties around risks of misuse and societal impacts. Over time, as practices and norms mature, open access may become viable if adequate safeguards are demonstrated.

I’m not convinced this is good guidance that will make sense in all situations. I’m not sure people constantly freaking out about AI models and trying to forcibly apply entirely squishy and undefined “safety” concepts onto everything is going to result in the kinds of technological progress that improves human lives. I guess that is probably a seemingly incongruous opinion for someone to hold with a Trust & Safety background, but here we are. I’m just not seeing the kinds of results rolling out of AI Safety as a field that I intuitively feel are right and useful. I’m seeing instead mostly a lot of hand-wringing and constipation that results in products which seem somehow magically to get shittier over time instead of better. And it’s frustrating af as an end user.

I guess at this point, I’m feeling more and more in the camp of “let ‘er rip!”: let the communities that are served by these models, and their users, determine their own roadmaps about safety and affiliated concepts, as they are likely to be more open-ended, flexible, and most likely more innovative than bottling up all those decisions in one organization.

Thunks for Everything

Futurism recently covered a “tech guy” on Twitter saying something to the effect that books would eventually be replaced by AI-conglomerate fine-tuned entity-dealies called “thunks” that could auto-generate books.

“Thunks” is not the greatest name ever for this phenomenon. I might connect this more to the idea of the underlying hypercanvas, the landscape of choices recorded together as a creative set by an artist.

Futurism:

Rather than publish books, Wang predicted, humanity will soon begin to publish inventions dubbed “thunks,” which he describes as “nuggets of thought that can interact with the ‘reader’ in a dynamic and multimedia way.”

“There can still be a classic linear ‘passive read mode,'” the developer added, “but that can be autogenerated based on the recipient’s level of existing context and knowledge.”

Based on my test results just now, uploading only three of my 116 AI Lore books, the technology so far is not actually up to the task Wang outlines above in his use case.

I wrote this in my Newsweek piece earlier this year:

I envision also a future where AI-assisted storytelling becomes the norm, and readers transform into co-creators, as AI enables authors and readers to generate highly specific content rapidly on demand.

I’ve been on this train of “thunks” for quite some time with my books: that they could themselves be used to train AIs on this specific worldbuilding & legendarium, and the AI could be queried for specifics, referencing correctly across many volumes, correlating correspondences, pointing out inconsistencies. Then, conceivably, interacting with characters and moments within the storyworlds through multimodal interactions.

It doesn’t sound so crazy to me, having been where I’ve been the last couple years working at the edges of technology and storytelling using AI.

I don’t find Futurism’s takes to be all that insightful, and this time is no exception. As the author writes, panning this concept:

In other words? It seems that according to Wang, gone are the days of humanity expressing its creativity, engaging in the quest for understanding, and fulfilling our need to catalog our existence through the thousands-year-old tradition of literature. Instead, all we’ll do is think a little thunk, and allow AI to generate a multimedia choose-your-own-adventure experience.

The hypercanvas is a contiguous extension of and continuation of “literature” in that literature has always been embedded in storytelling across all modalities. “Humanity expressing its creativity” will always and should always expand to include all possible tools and approaches. Because that’s what we are as a culture. Seekers, explorers, tool users, trying to make the outer like the inner. Exercising our imagination and will in the cold light of day.

Continue the tradition in whatever way makes the most sense to you as an artist. If that involves a “thunk,” and you’ve got something genuinely new, interesting, true, and truly “you,” why is that something worth panning right out of the gate? I don’t think it is, personally.
