Questionable content, possibly linked

Category: Other Page 51 of 177

Mini-Analysis: “It’s not a real book” complaint

So, following my article about using ChatGPT etc. to try to analyze negative tweets about my books, a helpful soul on Reddit (the last un-ruined social media platform, imo) pointed me to a handy service called Nitter that I am still wrapping my mind around. As far as I understand it, Nitter is an open-source alternative front-end for Twitter, which you can use to look at searches, etc. without having an account. I’ve seen people describe it as Twitter with no javascript or tracking, which all sounds very good to me.

Anyway, using Nitter, and clicking the gear icon (settings) and turning on infinite scrolling, I was able to manually extract something on the order of 259 negative quote tweets.

Even after trimming the text a little bit (removing the original text of the tweet that was being quote-tweeted, and some other repeating elements that carried no meaning, like the date), the excerpted tweet set was still too long a message for ChatGPT to analyze.
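I did the trimming by hand in Sublime Text, but the same cleanup could just as easily be scripted. A minimal sketch in Python – note that the date format and the “Quote Tweet” marker here are my own assumptions about what the copied text looks like, not necessarily what Nitter actually outputs:

```python
import re

def trim_tweets(raw: str) -> str:
    """Strip repeating boilerplate from a block of copied quote tweets.

    Assumed markers (hypothetical): dates like "Mar 15, 2023" and quoted
    originals starting with "Quote Tweet".
    """
    kept = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue  # drop blank lines
        if re.fullmatch(r"[A-Z][a-z]{2} \d{1,2}, \d{4}", line):
            continue  # drop date stamps
        if line.startswith("Quote Tweet"):
            continue  # drop the repeated quoted original
        kept.append(line)
    return "\n".join(kept)
```

Run over the whole pasted dump, that leaves just the commenters’ own words for the AI to chew on.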

So I dropped it into Claude, which has a vastly longer context window. I noticed Claude was not altogether trustworthy in its analysis of the data provided. It claimed that I had excerpted 259 tweets (which seems approximately in the ballpark, judging by the repeating text elements I removed in Sublime Text). I know the true figure is approximately 673 or so, but I can’t account for why I wasn’t able to grab the full set through Nitter.

In any case, I decided that 259 (give or take) was still a fairly representative sample of the larger data set.

So then I asked Claude to split the complaints by type, along with counts for each. This ranked as number two on the list:

2,000-5,000 words does not constitute a real “book” – 40 tweets

So something like 15.4% of the sampled complaints had that as a concern. A significant proportion, and a theme I saw borne out in comment sections on other sites as well.
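For anyone checking my math, that percentage is just the category count over the sample size:

```python
# Share of sampled complaints in the "not a real book" bucket
not_real_book = 40
sample_size = 259
share = round(100 * not_real_book / sample_size, 1)
print(f"{share}%")  # → 15.4%
```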

I’ve pointed out that it was never me who referred to these books as “novels.” While I’ve taken as my own the label some third party made up – “mini-novels” – it’s not the only term that could potentially apply.

It turns out that for both word count and image count, comic books (as opposed to longer graphic novels) generally fall in this ballpark as well. I saw a few sources suggesting that with all the text, captions, dialogue, etc. in a comic book, the average word count for a single issue of 22 pages or so could be between 2,200 and 4,400 words, depending. And the actual number of panels in a comic of this length could be somewhere between 140 and 170 or so. There’s no single “right” answer there, but it’s interesting that both the word count and image count in these books fall right into that range so common for sequential art in short installments.

Anyway, what this tells me is that people coming in cold on this news item are reacting to these being compared to “novels,” when really they ought more accurately to be compared to comic books, even though the sequencing of the images and their integration with text is very different from comic books.

AI, Neurodiversity & Inclusion

Apart from the more obviously weird and ironic twist that many/most (?) sci fi authors – at least on the wasteland that is Twitter – seem to actually hate AI, there’s a deeper, subtler shaming that I wanted to open up and talk about here, before moving on to greener pastures.

When people come down hard against authors and artists making use of AI tools, it’s potentially a meaningful dimension (in a multi-dimensional analysis) to take a look at the profile of people who are early adopters of this technology. What brings someone early to the game of any technology, yes, but also specifically to AI technologies? Are there characteristics buried in our personalities we all share? What about in our actual physical neurology?

In short, are some people “predisposed to AI?”

My hunch is a resounding yes.

I don’t self-identify as being on the Asperger’s spectrum, etc., but I do identify as being neurodivergent in a broad sense. I have ample life experience to back that up.

To me, using AI tools is as natural as breathing, and it is almost definitely because of my being neurodivergent that I would seek it out, spend so much time on it, and then try to share it with others as a way of communicating with the world. And it is *all* about communication.

There is a very interesting set of tools that have evolved in an area called augmentative and alternative communication (AAC), which developed as aids for people with speech production or comprehension issues. One of the interesting things there is the use of what are basically picture boards, almost like sets of emojis, that the user can point to in order to indicate simple objects, but also complex statements.

Would it make sense, as someone interested in diversity & inclusion, to mock people for using tools like that to better be able to communicate? No, it would be reprehensible and hypocritical.

Likewise, my intuitive sense – evolving out of my own use of AI technologies, and out of watching other people’s use of, interest in, and hatred for them – is something like: AI tools help, or can help, people communicate and organize their thoughts in a similar manner.

Proof to back that up? Two small anecdotes so far, but I’ve only just begun collecting examples.

  • One was a person on Reddit who self-identified as being on the schizophrenic spectrum, saying that using ChatGPT as a writing aid helped him organize his thoughts in a way he could communicate to others more easily. This is amazing.
  • Another was someone who is not a native English speaker saying much the same thing, that using these tools helped them communicate better in English.

Does it make sense, from the perspective of diversity and inclusion, to mock either of those use cases? Also, no.

So, if I use ChatGPT, and Claude, and Midjourney as an artist, creator, and generativist storyteller, and it helps me to better understand the contents of my own imagination (and bring subconscious contents into conscious reflection & reification), and then to communicate those contents to others as a sort of tour guide of latent space, it would follow – in my line of thinking – that this would also be a desirable and even “good” extension of the tools to allow people to communicate, understand, and organize better.

And it would be a tremendous boon to anyone neurodivergent enough that they sometimes have problems communicating with others, or organizing their own thoughts and understanding deeply their feelings. Why stand in the way of that kind of use?

Also, given the hypertoxicity that communicating with other humans online seems to involve now, I can totally see why people are so drawn to chatbots instead, and use them as outlets for their creativity and entertainment needs. It’s not a flaw of the technology; it’s a failure of humanity that we haven’t been able to maintain open enough lines of communication with one another, and that we’re forced to seek alternatives to other shitty humans. There it is.

Communication will always be messy though. It’s always a struggle to understand and be understood. And in the end, everyone is divergent in some way or another when you get down deep enough to take a close look.

Okay, I think I’m (mostly) done ranting about this now.

Tours Through Latent Space

Someone described to me the other day the experience of reading my AI Lore books as being like looking at postcards. I think this is an interesting comparison, since the books are, in my opinion, actually image-based and not text-based (though a minority are more prominently based around the text).

And I like this idea of being some kind of tour guide for people taking trips through latent space.

A “book” might mean a certain thing to some people, but to me a book of the future (and of the past) is like a reified collection of points traveled through latent space. It’s unclear exactly how latent space and the imagination overlap, but I’m less worried about the specifics. What I’m after is more: A narrative traversal of latent space, stored in a structured form.

So in my case, I’m less of an “author” in the conventional sense, and more like a “First Reader” or “First Visitor” to the spaces that I am exploring within the greater latent space now open to humanity through generative AI technologies. I lay down tracks, I leave some signposts here and there. But mostly I tromp down the trails I find, and try to save them in words and pictures.

It’s not a perfect summation of any particular neighborhood of latent space, but it becomes a network, a collage, a constellation of reference points culled, and lulled out of the darkness, a space for the reader/viewer/fellow traveler to come and have their own unique experience. A place of evocation, invocation. A crucible. You bring your own fire, apply your own lens. Make your own explanation, your own meaning, have your own reaction, see your own reflection.

Twitter Is Showing QAnon Content To New Accounts

I have one final screed I wanted to write regarding my recent Twitter experiences, but before doing that I just have to get this one thing off my chest…

I recently made a burner account to see the mean things people were saying about my work. I know, a bad idea, but I made the best of it & tried to learn from the critiques offered.

I noticed something with horror the other day which caused me to shut down that ‘dark observer’ account for good…

On my burner account where I was following no accounts, and only following “archaeology” as a topic (you must follow at least one), I popped into the timeline the other day, and at the top of it was content advertising a QAnon themed movie.

It’s one thing if users seek this kind of content out. There’s an argument that they should be able to find it if that’s what they want. There’s also, I think, a stronger argument for removing that content completely, but I understand the variety of views here.

The thing I really don’t agree with or understand though is how QAnon content could be directly advertised to new accounts who expressed no interest in it.

Between this, and all the other horrid, disgusting moves that its owner has made with the company, it’s clear none of these things are accidental. This is all by design.

While I understand people and organizations have committed thousands of hours to establishing a presence on Twitter, if you’re one of those – or part of one of those – who actually gives a shit about… well, anything… then it’s well past time for you to shutter your account and move on.

There can be no redemption on this platform. You cannot work within fascist authoritarian systems to “change it from the inside.” You cannot seek social justice on a platform that is actively promoting QAnon to new users. If you think you’re helping somehow to make it better by staying on it, you are not and you are living in an illusion. The game is over. We lost. Time to move on and find a new game.

I used ChatGPT to read your angry tweets so I didn’t have to

Or, The Moderator’s Tale

Having worked for years as a content moderator, I am no stranger to people hating me online. After all, I was the guy taking down their posts or account because of community guidelines issues. Someone had complained about something they said or did, and I had to sort it all out and make a call. And I was the guy on the other end of the line constantly being accused of censoring them, and the target of endless threats and slurs.

This can sometimes be an interesting job (provided you can keep the right mindset), but it is not a “fun” job by any means. It consists of having people yell at you (virtually) all day long every day because, usually, of something someone else said, or something they think the platform did or didn’t do that you should have or should not have. 

It’s a game you cannot win, so you steel yourself to dealing with it, to coping, to getting by. You put on your “armor” and you go fight in the trenches. Tolkien and Lewis had their Great War; we only had the endless Culture Wars (without the satisfaction of a clear victor). But shell shock sets in at a certain point, nonetheless. Even if you’re not subject to actual kinetic machine gun fire in the virtual trenches, you absorb plenty for a lifetime. You see enough feuds and flame wars, enough people being people, expressing their angry human nature. And it all just gets to be too much…

I’m not begging for sympathy; I’m just trying to shine a light on a variety of the human experience that is all too common, but all too little spoken about: what happens to the people who shoulder the burden of the rest of humanity’s toxicity? How do they not get sick too? Here’s their secret: they do. 

Nurses and other care-giving professions have begun to recognize ‘vicarious trauma’ as a problematic area, needing customized care and consideration in its own right. It’s a little easier maybe in the case of a nurse to separate just who a trauma has happened to: here’s the patient who suffered in the car crash & here’s the nurse, who has not been physically injured. But online, harms that occur can be more diffuse. There may be multiple participants in it, and the degree to which it may be traumatizing can vary for each observer. 

The job of content moderators then, in a sense, is to observe the traumas of others, and share in them somehow as both a witness and judge. As a moderator, you might not be the target yourself of the violent internet threats or the harassment you have to deal with (though, if you answer user emails you might be), but you have to use your own human emotional senses to filter through all of humanity’s BS, and then make a judgement call about it. Is this particular trauma traumatic enough that it should be removed? That’s the job we do, that I did. Every day. For years.

I had to stop. Five years was probably two too many; I had only held on at a certain point through sheer white-knuckling. My brain was broken sorting through the anger and unkindnesses of the rest of the world. It has taken me fully three years to say that I finally feel recovered enough to even talk about the experience out loud…

There are often restrictions placed on content moderators, such as non-disclosure agreements (NDAs), so that they cannot talk about the job and their experiences publicly. Yet they are the people who most need to talk about it, and who we most need to hear from. Fortunately, at least, content moderators are unionizing in Africa. It’s a start, but we need to talk about it more. Hence my insistence on penning this essay.

For years, I absolutely did *not* want to talk about anything having to do with content moderation publicly. I did not want my name associated with it. I only used pseudonyms at work when I had to email users (incidentally, I noticed that people are much more mean and sexual towards online support agents when you use a female name). For the longest time, I did my damnedest even to not put photos of myself or videos online. Almost to the point of being superstitiously obsessed about it. Because I had seen day in and day out what it is like when people suddenly gain internet notoriety for something stupid, and then the feeding frenzy begins on social media. 

The something stupid I gained internet notoriety for was an experimental AI art project I produced: a set of 100 pulp sci fi art books featuring AI art, and a combination of AI-generated and human-written texts. I call them the AI Lore books, and they’ve now been featured in global media like Newsweek, Business Insider, The New York Post, India TV, El Tiempo – the list goes on and on.

The project grew out of me having to deal with my own traumas and the mild PTSD I left my job as a content moderator with. And I know I am lucky in the grand scheme of things that I did not suffer half what the majority of content moderators do, who deal with much more terrible content than I ever did. How they get by afterwards is still a mystery to me, how they recover – in all likelihood, maybe they don’t. I’ve done a bit of research, and there appears to be no single, agreed-upon means of treatment for people whose brains got messed up as a result of working as a content moderator.

Maybe my brain was always messed up, that’s a possibility. Maybe that is what drew me into the weird world of moderation in the first place. I don’t discount that as a contributing factor: I was always an artist anyway, always the weird one, always too sensitive. (All attributes, it turns out, that can really help in content moderation.) 

So I retreated into art to help me process everything I’d gone through. I retreated into writing. Into imagination. Into world-building, into assiduously maintaining an elaborate fantasy world (offline) where I would be safe and free. I reasoned, if I had to be immersed in BS, at least it would be mine and not someone else’s. And none of it would be subject to the whims and cruelties of some random strangers on the internet who think they have some kind of squatters’ rights to live rent free in my mind. Not here. My castle walls were high.

But it’s lonely in a castle, drafty, cold. Eventually, into my fantasy worlds I’d built for myself as art therapy, I did allow some strangers. Some alien entities who, I knew, were not like those nasty annoying humanses. And who, most importantly, when admitted to the sanctuary of my imagination, would not promptly set about trying to ruin everything.

I began incorporating AI creations into my world-building last summer, with tools like DALL-E, Stable Diffusion, ChatGPT, Anthropic’s Claude, and many others. I fed in my own hand-written texts and had the AI continue them, taking them in marvelous new directions I’d never thought of – pure flights of fancy that were totally other and yet intimately familiar at the same time. My books were my playground for exploring these tools, and documenting the cultural moment, and all the leftover baggage I have had hanging around in my brain from having to deal with content moderation of online toxicity at scale. The hidden human cost, no longer hidden in me.

Eventually I broke my rule, and used a photo of myself and my real name in an article talking about my books on Newsweek. I was nervous as hell, because I knew what being open and up front about my AI books could easily result in when it went sour on social media. But I was ready to talk about my experience, and not be silenced anymore.

And it did go sour – fast. People did not dig the headline (which I did not pick, btw), and dug into me for all of it pretty hard. It’s gotten more severe lately, after my story was tweeted by the venerable Publishers Weekly, garnering 1,000x the views their normal tweets do. This single tweet resulted in about 300 angry responses from people on Twitter, most of whom feel like AI writing is going to be the death of something that’s very precious to them. (I happen to think they are wrong.)

Then on top of that there were about 700 or so angry quote tweets saying how much I suck, how no one should ever buy my books, how they should be burned (they’re ebooks, by the way), how I had no talent and just wanted to make a quick buck, and, of course, how they hoped I would die. Nothing new under the sun. 

It would be a lot harder to deal with this kind of sudden influx of internet rage had I not already come to a place where I was ready to talk about it, and knew what was likely to happen when I did. I was ready to face those consequences, and chose to do it anyway. Because I want people to talk about it: what is the right relationship of human creativity and the human spirit not just to AI, but to all technology? How do we as humans retain ownership of our hearts in an algorithmic age?

Many commenters think the relationship I’ve carved out is not the right one, and should not be held up as an example in any media (though it is clear they don’t actually understand the work or my position whatsoever). Maybe they are right in some regards. But many people talk about the what ifs, and fewer produce the what, the raw material which we can now evaluate, and use to assess, and ask questions of: is this a good thing for us, is this a thing we want, is this something that should be writ large on our collective “platform” as humans, as society?

I’m certainly biased, but I think what I created with my AI Lore books is a good test case for examining the issues and opinions people have on these topics. Where do those intersect with my own interests as an artist, and how can I integrate their feedback, without having to constantly relive the traumas of merely trying to communicate at all on the internet anymore?

The obvious thing to do in order to try to derive some kind of positive value from this type of incident online would be to read through every angry tweet or comment individually, and try to find some good in it. But there’s a limit to the utility of that. I don’t want to live in that kind of dystopia, personally. It’s why I was not part of Twitter in the first place.

How many times do you need to be called something before you get it? It might feel good in the moment to express ourselves angrily online in these ways and I’m not innocent either; it can certainly be cathartic to let loose. But it’s rarely cathartic to the person on the receiving end. In fact, they’ve now involuntarily taken up the negative load you dumped on them from your catharsis, and now are left having to pick up the tab and pay for the meals of all the other people at the table who have now left (figuratively speaking). 

So, as an AI artist and author – not to mention former content moderator – I landed on a perhaps obvious solution for extracting the meaningful feedback people are giving without being sucked into the toxic whirlpool of emotional death-by-Twitter. Instead of reading everyone’s comments myself, couldn’t I just have ChatGPT do it, and summarize things?

I ran some quick manual tests. It totally worked! Once I enabled some plugins in ChatGPT Plus, one of them even made a diagram for me, depicting all the complaints people had about my work from the small test set I had it analyze. So handy! 

Confession: I don’t really trust AI systems running content moderation operations on web platforms. They often can’t explain their ‘black box’ decisions, and retaining humans in the loop is essential for understanding cultural context, and the nuance so often required to do the job effectively. I don’t trust humans handing over control to AIs – this is one of the central themes of my AI Lore books (which were, I know, paradoxically created with AI).

That said, I think local AI content filtering and summarizing at the user-level could become a really effective tool to confront how we as individual people are forced to deal with social media in an era of endless ratcheting up of anger, anxiety, toxicity and inter-personal violence (let alone sheer quantity of information, as AI-generated content explodes in popularity). I understand why people are upset and choose to speak out about all kinds of issues online. I get why they don’t like my books. I believe they have every right and every reason to do so. 
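To make that user-level idea concrete, a filter could start out as simple as a blocklist held back from the reader’s view – a toy sketch, with made-up example terms, standing in for what would ideally be a local AI model:

```python
def filter_feed(tweets, blocklist=("die", "talentless", "grifter")):
    """Toy user-level filter: hold back tweets containing abusive terms,
    so only the rest reach the reader. The blocklist terms are made-up
    examples; a real tool might score each tweet with a local sentiment
    model instead of matching keywords."""
    kept, held = [], []
    for t in tweets:
        if any(word in t.lower() for word in blocklist):
            held.append(t)  # quarantined, summarized later by an AI
        else:
            kept.append(t)  # shown to the human as-is
    return kept, held
```

The quarantined pile could then be handed off wholesale to ChatGPT or Claude for a summary, which is essentially what I was doing by hand.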

But I’m not going to let that stop me from following my own light as an artist. And anyway, I’ve served my time in the trenches. I paid well over what I owed into the Central Bank of Other Peoples’ Suffering, and I hope I helped maybe right some small wrongs for others along the way. (Probably in some cases I did, and in other cases I didn’t.)

I don’t want to stop anybody from saying their piece, but I also no longer feel obligated to take up all the world’s burdens onto my own back and shoulder your anger; I’ve got my own shadow to deal with. I understand better my boundaries now, what I can really be responsible for, what’s my injury and what’s yours. AI as a tool to reduce real human suffering I think is a completely legitimate use case, and it doesn’t threaten your rights or freedoms of expression if I choose not to read through all your shitty responses myself. And it also importantly doesn’t give you a free room-key to live rent free in my head. Not anymore. 

The way I think about it all is quite simple: art asks questions, it doesn’t provide answers. Art gives the experiencer space to bring forth their own answers. I don’t believe I owe anyone more than the space to do that for themselves, whatever their answer happens to be to my art.

What my experiments trying to get text out of Twitter and into ChatGPT taught me is that Twitter has made programming changes lately, which greatly complicate this sort of task. One of them is that you can’t view search pages or hashtag pages without an account anymore. You get pushed into the login flow. Once you’re in though, you also can’t easily extract all the text on a page, for example. I’m not a programmer, but it seems to only let you copy the text of whatever is in the current browser frame; and you cannot print the page to PDF, to pull text out there. You end up with a bunch of blank pages and a few lines of text. 

So for now, doing this kind of text filtration manually, you’d have to copy-paste many times between Twitter & ChatGPT, scroll, copy, scroll, copy, etc. It’s doable, but you’re still pretty exposed to all the mean tweets, which sorta defeats the purpose of using AI as a technology to reduce human suffering. Obviously, the net result of these changes is that Twitter wants all outside analysis of the Twitterverse to be run through their paid API. Paying one of the world’s biggest trolls and harassers as a gatekeeper to see what all the other trolls and harassers are doing on his platform doesn’t sound like a fair or wise transaction to me spiritually.

Fortunately, you can still freely copy and paste from “normal” web pages on other sites like Reddit, or in the comments sections of articles, etc. Apart from the more virulent responses which seem to be a specialty of Twitter users, most of those other sources in my case contained basically the same commentary about my work (much of which I’ve already responded to on my blog), and served in the end as more useful texts for actually extracting actionable feedback from people talking about my books and the societal issues it brought up for them. 

In the end, this boils down to: how much are you willing to actually pay Twitter for their API in order to go and fetch the “angry data set,” and then pay OpenAI or Anthropic for their API to interpret it? Let’s reframe that more broadly: how much is removing meanness worth to us each personally or societally as a product, as a service? Is it worth one content moderator’s life? Is it worth ten thousand? A hundred thousand casualties? 

Could we use AI in a transparent and equitable way so that real human people don’t have to wreck their minds and emotional balance just so to keep all these web platforms running online? 

Maybe, just maybe, we could. We won’t know until we try. 

While we’re at it, we could also just try a little kindness, or else failing that try simply not spewing our trash all over each other. That might go a long long way all on its own. Then we might not even need to burden AI with all of this to begin with.

Notes on I Didn’t Read This Book Before Publishing It

I Didn’t Read This Book Before Publishing It is the 102nd book in the AI Lore books series, as seen in Publishers Weekly.

It is inspired by the many tweets I saw like this one, of people reciting the same handful of “jokes” and finding themselves quite clever:

In honor of this genre of complaint, I decided for the content of this one to see if I could get an AI to churn out generic pompous writerly drivel, and you know what? It didn’t do half bad! Here’s a sample:

If writing were easy, anyone could do it. The true writer understands that our craft demands suffering for one’s art. Few understand the depth of anguish I endure each day confronting that blank page, as I wage war against distraction, doubt, and the trivial demands of the physical world that knows not the life of the mind.

All the funnier to me that lines like these are written by an AI:

In this new age of digital media, it seems there is little room left for the serious creative artist. Where once discerning readers sought insight and inspiration from flesh-and-blood philosophers, poets and essayists, today they increasingly turn to AI simulacrums that produce superficial pablum on demand. These artificial usurpers now threaten the precarious livelihoods of we who trade not in clicks and virality but in hard-won human experience.

It just goes to show that even the most cherished stock-in-trade of all writers – complaining about other writers – is a feat now easily duplicated by a mere button press.

Framing is everything

I’ve been pondering a lot the reactions people have had on social media regarding my AI Lore books, and it’s made me realize that framing is everything: the means and perspective by which something is first presented to people. Changing the perspective through which the very same thing is shown completely changes the resulting reactions people have.

A good case study is the AI-generated science fiction magazine called Infinite Odyssey. Here are some of the headlines their work generated:

And Futurism, which hates or has a snarky take on everything, gave them this glowing title:

Now, I didn’t see the responses any of those news items resulted in on social media at the time, but I saw very clearly how the initial headlines your work gets end up framing what kinds of reactions people will be driven to have, along which particular fault lines.

It was unfortunate that Newsweek wanted me to take the “look how much I’m making with AI” angle. It’s not what I originally pitched them. One, because I’m really not making that much – but it’s fun for me so who cares. Two, because it sets everyone up in comments to be like “this guy’s not making that much / he’s only in it for the money / etc etc.”

Whereas a headline like “Hey, here’s a cool thing” makes people react like, huh, yeah, that is kinda cool. They don’t launch into a thousand(s)-person army hellbent on your destruction.

Fundamentally, I have to ask though: what’s so different about Infinite Odyssey’s AI-generated sci fi magazine (which got good reviews) and what I’m doing, in a different sort of configuration, via pulp sci fi mini-novels as ebooks? Apart from format (and number of volumes produced), and the fact that they are a group of people and I am just one?

Their project is really cool, and they even have a print edition of their first volume. The cover price for that is high though, and I know from trying to sell print books in the past you have to raise your cover price so high to even make a small profit on each unit. It’s a really hard game to win at.

I could be wrong, but it looks like their digital version is probably delivered as a PDF file. Mine are delivered as either EPUB or MOBI files, so you can use them on Kindle or any other kind of reader.

Apart from those relatively minor differences – and the fact that my books probably contain a higher percentage of human-written words than theirs do (based on what I heard in this excellent interview with Philippe Klein, the Creative Director of the magazine) – it’s hard to see what the difference is, apart from public perceptions that were formed as a result of the framing of their initial coverage.

A good lesson for me, either way. The objective at this point seems to be: improve the framing. Improve the framing, improve the response.

40 min per day

One good side effect of meditating twice a day for twenty minutes is that you end up not wasting those 40 minutes every day on some other bullshit thing that is far less important, like looking at social media.

Forty minutes sounds like a lot, but I’ve not felt it as a burden since I started. In fact, I think it makes you value and use all the other minutes of your day so much better that in the end the “actual value” is probably much higher than a mere 40 minutes of quietly listening to whatever’s going on inside.

Dick on Van Vogt

I never read any van Vogt, but I like this piece from PKD, from the Vertex interview:

DICK: I started reading sf when I was about twelve and I read all I could, so any author who was writing about that time, I read. But there’s no doubt who got me off originally and that was A.E. van Vogt. There was in van Vogt’s writing a mysterious quality, and this was especially true in The World of Null-A. All the parts of that book did not add up; all the ingredients did not make a coherency. Now some people are put off by that. They think that’s sloppy and wrong, but the thing that fascinated me so much was that this resembled reality more than anybody else’s writing inside or outside science fiction.

VERTEX: What about Damon Knight’s famous article criticizing van Vogt?

DICK: Damon feels that it’s bad artistry when you build those funky universes where people fall through the floor. It’s like he’s viewing a story the way a building inspector would when he’s building your house. But reality really is a mess, and yet it’s exciting. The basic thing is, how frightened are you of chaos? And how happy are you with order? Van Vogt influenced me so much because he made me appreciate a mysterious chaotic quality in the universe which is not to be feared.

Wikipedia courteously offers these bits of commentary from Damon Knight’s criticism, mentioned above:

The World of Ā abounds in contradictions, misleading clues and irrelevant action…It is [van Vogt’s] habit to introduce a monster, or a gadget, or an extra-terrestrial culture, simply by naming it, without any explanation of its nature…By this means, and by means of his writing style, which is discursive and hard to follow, van Vogt also obscures his plot to such an extent that when it falls to pieces at the end, the event passes without remark.

And his 1974 walk-back:

Van Vogt has just revealed, for the first time as far as I know, that during this period he made a practice of dreaming about his stories and waking himself up every ninety minutes to take notes. This explains a good deal about the stories, and suggests that it is really useless to attack them by conventional standards. If the stories have a dream consistency which affects readers powerfully, it is probably irrelevant that they lack ordinary consistency.

Philip K. Dick’s Meta-Novel

There’s an interesting bit of lore in the PKD legendarium regarding what he referred to in his Exegesis as his “meta-novel.” It’s a little hard to track down the exact details online, and it’s unclear exactly which set of 10 of his novels constituted this “meta-novel” that revealed some sort of divine gnostic truth. There are some guesses in this Reddit thread as to which books he meant.

The point is here that he used as a technique (consciously or otherwise) what I’ve been categorizing under networked narrative/transmedia storytelling. Where your story is split up, where central narrative unities have been broken. More on this as I find bits to add into comments here…