Well, I took that device and, using a Wen scroll saw, a Dremel tool, and some experimentation, fabricated a set of handles to drive the syringe actuators, so I could more smoothly control the drawing motion. I’m calling them ohara-style controllers because I think the will-scarlet >> scarlett-ohara semantic bridge made some sort of intuitive sense, with shades of thematic connection maybe to… flying and O’Hare International Airport? I don’t know – I only work here. I’m just making this shit up on the fly, and trying to keep it all straight in my head.
The ohara controllers, anyway, came out great; you can see a video sample of the motion and some other images over on this Imgur gallery. Here’s a closeup of the controllers – hopefully Butlerian Jihad-safe:
Mysterious Plasmoids is the 124th installment in the AI Lore Books series. It is the first book I’ve done in the “Mysterious…” series in quite some time, and is another ‘ripped from the headlines’ hot take on what is happening with the drone/orb situation that is supposedly happening globally (I’ve not seen any anomalies first hand, myself). It also continues in another thematic subset of my books that relate to various aspects of UAP/UFO phenomena. This one heavily references other books in that cluster, if non-linearly.
There’s a vivid dream description of mine which fellow blogger Ran Prieur documented way back in 2005 here. In it I dreamt of a hyper-nationalist/fascist future US where police sirens played the song “America, the Beautiful,” and aliens had invaded the United States… Excerpt below:
New York City had been divided into northern and southern zones, via a gigantic wall and forcefield. The southern half still had people living and working in it. But the northern half was completely off-limits. The official story was that aliens (space, not illegal) had taken over the northern half of the city, and the rest of the United States northward.
We knew, however, that this official story was largely a fabrication. But that was all we knew. We had to roam about the lower half of the city, trying to find a passage to the north. And we had to do so without arousing any suspicion, which was an extremely difficult task. No one in the city would answer questions or help us.
And the police presence was total. You had to keep moving at all times. Any group of people who were stopping to talk or otherwise congregate was quickly spotted and broken up by patrolling police. […]
The police also had flying discs which they sent out after you. They were autonomous electronic devices which hovered and would track you as you ran. Once they were within range, they would fire an electric bolt at you to incapacitate you until officers arrived. The discs were called “temperplexes,” and they were all apparently controlled by larger motherships which flew higher and basically looked like UFOs.
I actually continued that dream and spun out more variations using AI and published it in an earlier volume called The First Days of Panic. That book, however, takes it visually in a much more fascist police-drone direction (which, hell, I wouldn’t rule out just yet), whereas this book explores more the notion of plasmoids as heretofore unrecognized forms of life, which have interacted with us in myriad ways throughout history and prehistory: something more like John Keel’s ultraterrestrials. Are we living in the timeline now of that dream? Maybe?
Whatever the true nature of the “real” drones/orbs/plasmoids/UFO/UAP stuff that is or isn’t going on in our skies may be, it is, I think, a little beside the point; the point is the search itself. The point is the looking, and trying to understand all possibilities, and fitting the best bets that seem to match evidence from reality itself.
Or, you know, in this case, hyperreality. Images in this one were mostly made with Ideogram and Recraft, with some dabbling in Grok’s image gen, and screengrabs from Sora videos, plus some remove-tool work in Adobe Lightroom. Text is majority ChatGPT with many human edits and improvements, told in alternating chapters between “first person” accounts and quasi-scholarly essays. Art preview below:
I might experiment in a subsequent volume with trying to embed animated gifs or even short videos from Sora if I can get the technology working adequately to share them. Ebooks don’t seem well-suited to that kind of thing, due to file sizes, though. So we will see what’s actually still feasible.
I’ve been following with great interest the current “flap” around drones/orbs/UAPs being sighted in both the night sky and broad daylight around the United States and the world. A lot more to say there, obviously, but one experiment I did was trying to get OpenAI’s generative AI video tool Sora to create “convincing” fakes in this genre which would cleanly match what we’re seeing from users on the street.
I uploaded about a dozen or so video results to Imgur, which you can see at the link as they can’t be embedded easily here.
These videos end up being interesting in their own right, and even – dare I say – “artistic” at times. But what I found is they look pretty much nothing like the videos we’re seeing uploaded regularly to platforms like Reddit & TikTok, etc. Doesn’t necessarily mean nobody is using gen AI to enhance (or even generate) some of the effects we’re seeing in these videos, but if they’re doing it, I have yet to see one that could be easily explained away as “Oh, that’s just AI.”
I believe billions of people are in active combat with their devices every day, swiping away notifications, dodging around intrusive apps, agreeing to privacy policies that they don’t understand, desperately trying to find where an option they used to use has been moved to because a product manager has decided that it needed to be somewhere else. I realize it’s tough to conceptualize because it’s so ubiquitous, but how much do you fight with your computer or smartphone every day? How many times does something break? How many times have you downloaded an app and found it didn’t really do the thing you wanted it to? How many times have you wanted to do something simple and found that it’s actually really annoying?
How much of your life is dodging digital debris, avoiding scams, ads, apps that demand permissions, and endless menu options that bury the simple things that you’re actually trying to do?
I’m leaving Slack, so I have been leaning heavily on ChatGPT to help me set up possible open source alternatives like Rocket Chat :thumbsdown: and Matrix :thumbshalfwayup:. I’m not much of a Terminal wizard, but from this experience of having ChatGPT guide me through using the command line, I’ve learned a lot. One of the things I’ve learned is that you basically always get stuck – eventually – down one or several blind alleys when explicitly following its instructions. And then it just runs you down them again and again (though it still did better than Gemini the one time I tried it for an intractable Matrix Synapse server settings issue).
Anyway, that’s why I’m quoting this Dave Winer bit here, cause I’m apparently not the only one:
As a programming partner, ChatGPT is encyclopedic but is not good at strategy. It will drive you down blind alleys. It’s also really irritating that it rewrites your code to conform to its standards. And it has a terrible memory. Forgets things you told it specifically not to forget. It does not keep promises.
Also, because it does not suffer from human impatience, it has no problem telling you to repeat the same 5-6 checks again and again, no matter how many times you say it didn’t work and yes, you already triple-checked that config file in nano. Frustration, in a way, is actually valuable. It tells you when things really aren’t working and makes you question whether it’s actually valuable as a human to continue down a given path. But you can’t rely on the program itself to bring that kind of guidance to you – you have to rely on your human faculty of annoyance. Which, the more I think about it, might somehow be connected to intuition: knowing when to fold and try something else.
Still though, I would not have ever learned so much so quickly about the command line without ChatGPT backing me up. And, of course, if the programs I am trying to run were not so finicky and buggy. I finally got Matrix up and running (accessing via Element), but never did sort out the correct subdomain issue I messed around with solving forever and ever. And who the hell knows if my m.room.retention settings are going to be honored. At least it will be encrypted if it’s not deleted in a purge job eventually (though hard deletion is always best policy for stale data, imo)…
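For what it’s worth, the Synapse message-retention settings I was wrestling with live in homeserver.yaml. A minimal sketch of the documented shape (the lifetimes below are placeholders, not my actual values, and the feature is off by default):

```yaml
# homeserver.yaml -- server-side message retention in Synapse
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d     # rooms may not delete events sooner than this
    max_lifetime: 30d    # events older than this become eligible for purging
  purge_jobs:
    # How often Synapse actually runs the hard-deletion sweeps.
    - shortest_max_lifetime: 1d
      longest_max_lifetime: 30d
      interval: 12h
```

Note that per-room m.room.retention state events can override the server default, which is part of why it’s hard to be sure your settings will actually be honored in any given room.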
Having to type out “Willow SCARA Robot” a number of times for this post made my brain finally collapse those words together into a nice name for this machine, “Will Scarlet” (will-scarlet). It seems to fit somehow as a moniker for this particular configuration of bits and bobs that becomes, through effort, a machine for creating “art.” A sort of nascent physicalized proto-AI without anything “artificial” really about it. It’s all ‘real’ – and all really controlled by me, using two syringes attached to tubing, one held in each hand.
Here are some process photos of conducting will-scarlet to make a piece of “art” using acrylic paint markers on canvas. But more important than creating “Art” or any semblance of it was simply trying to mechanically, and even scientifically, observe the capabilities and limitations of this particular device. That was the goal of this initial exploratory session, and its documentation. Some highlights are below, but I’ll leave most of it on the Imgur linked above. There are also some raw unedited short videos you can watch there too (which always refuse to embed in my blog posts).
Here is will-scarlet drawing the types of arcs and bows that his range of motion will allow him. I had really no idea what to expect on building this first prototype, so just winged my placement of different components, their relative lengths, etc.
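The geometry of those arcs can be sketched with simple forward kinematics, assuming the device behaves roughly like a two-segment planar arm (the segment lengths and angles below are invented for illustration, not measured from will-scarlet):

```python
import math

def pen_position(l1, l2, theta1, theta2):
    """Pen (end-effector) position for a two-segment planar arm.

    l1, l2: segment lengths; theta1: shoulder angle from the x-axis;
    theta2: elbow angle relative to the first segment (radians).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# The reachable region is an annulus: the pen can get no farther than
# l1 + l2 from the shoulder, and no closer than abs(l1 - l2).
outer_reach = 30 + 20        # arm fully extended
inner_reach = abs(30 - 20)   # arm fully folded back on itself
```

Sweeping one angle while holding the other roughly fixed is what traces the kinds of arcs and bows visible in the photos.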
And my components are all as primitive as possible. The arm segments and their pivots are all from willow branches that I grow here locally for basketry. So this robot is kind of an industrial byproduct of the basketmaking trade, in a way.
I’ve written about this in prior posts, but part of my exploration here is to make “biobots” which are as “natural” as possible – whatever that means – or that intentionally incorporate organic (and eventually living) elements. But beyond that, or alongside it on the third rail, I also want to figure out what kinds of “thinking machines” could survive a Butlerian jihad, a la the Dune universe. Or a robotically augmented art studio that could survive some kind of EMP/SHTF scenario. Not so much because I’m worried about those things beyond a kind of thematic motif, but because it’s a ghost world/Uncanny Valley of art that doesn’t seem too well-explored. I found plenty of people coming in from the opposite direction, turning to Arduino code and electronics and motors to produce art-making robots. But I didn’t find anything really like what I’ve been doing: strip out the motors, pull out the code and electronics to the greatest extent possible, and put the human back at the helm. A more fluidly organic, humanist kind of cyborg augmentation. Something that doesn’t rely on someone else’s cloud, API, and infrastructure: art robots for the here and now.
This is as far as I was able to get in one session. I don’t find it super-beautiful yet as a finished piece (even if that’s not expressly the point, I’d like to get it there), but was a good experiment. The plan is to continue layering on top. Hopefully after I can rig up some better controllers, as having one in each hand is rather awkward to smoothly control.
In fact, in the image above, you can see two sections of black marker in the topmost layer. At right is me using it with one syringe in each hand to draw: angular, constrained by having to work each plunger up and down using only the hand already holding the syringe body. Then my kid came home, and we each controlled one syringe kind of … independently together. Those line tracks are on the left side, and appear much more fluid. So that’s very interesting.
In the current setup, since the range of motion is so constrained (make a note of this), I end up having to flip the canvas all around on the backing board in order to reach new areas. Not the worst experience in the world, as constraints like this tend to force you to discover new creative ways to get around them… But still, it will be nice to do a bigger better version of will-scarlet – possibly/obviously to be named robin-hood – where I’ve mastered a bit more the placement and sizing of components, allowing for a free range of motion, and a better manual method for controlling it.
ChatGPT suggests turning this into a four-bar linkage, which isn’t a bad idea, but will look less like a “cool wooden arm.” I’ll probably try that anyway. I also want to do a version with the arm in this configuration – minus the motors, ofc.
Also, going back to range of motion, I noticed that as the machine worked for a while, its range changed, and in one case became rather limited. Some of the tubing had popped off, and we’d lost fluid (water) in the hydraulics. It really reduced how the arm segment most affected could move. Once I got that sorted and refilled, it improved again.
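That lost stroke follows directly from how syringe hydraulics work: an incompressible fluid transfers the same volume at both ends, so plunger travel (and force) scale with the piston areas. A rough sketch of the math, with syringe diameters invented for illustration rather than measured from the actual build:

```python
def driven_travel(driver_diameter, driven_diameter, driver_travel):
    """How far the driven plunger moves when the driver plunger is pushed.

    The same fluid volume passes through both syringes, so travel scales
    with the inverse ratio of piston areas (i.e. diameter squared).
    """
    area_ratio = (driver_diameter / driven_diameter) ** 2
    return driver_travel * area_ratio

# Pushing a 10 mm syringe 20 mm moves a 20 mm syringe only 5 mm,
# but with roughly four times the force. Any fluid lost to a popped
# tube subtracts from the driven stroke directly, shrinking the
# arm's range until the line is refilled.
print(driven_travel(10, 20, 20))  # -> 5.0
```

This is also why mismatching syringe sizes could be a deliberate design lever in a later version: bigger driver syringes for speed, smaller ones for fine control.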
Plus there was an issue where, toward the end of one of its poles of available movement, the paint marker would lift off the paper. Looking closer at the machine, it was obvious this was caused by the irregularity of the willow branches of which the thing is composed. I’m not using machined metal tubing and smooth-turning bearings, etc. in all this. I’m using pieces of my garden. So, since that’s an intentional choice, I see those irregularities as a good thing, as “proof of life” rather than as evidence of some failure that needs to be eradicated by ever more regular objects. This is what embodiment looks like. This is how we can tell it’s really “Art” with a capital A – not strictly the painted canvases that the machine outputs, but the entire socio-technical assemblage from which it is born, and through which it depicts, in a very clear but also roundabout way, some essential thing about my life in this time and place.
As unexplained aerial phenomena (UAP) form a major part of the inter-locking narratives of the AI Lore books, I thought I’d post a round-up of the books that are the most relevant for the events unfolding in New Jersey, and evidently around the globe IRL. In chronological order according to publication date (maybe I’m forgetting some – highly possible):
A reader wrote in recently with the following question, which they gave me permission to post a reply to here, as I thought other readers might have similar questions. I also think it could be funny to have an “Ask An AI Guy” advice column. If anyone else has AI-related questions they want to ask and have me answer publicly, drop me an email at the contact form here.
The question:
I came across your article in Newsweek about using AI to write books. I found it incredibly interesting how open you were about using it as your creative partner in a sense. I’m currently enjoying using chatgpt to help form my ideas into slightly more coherent plot points. And it gives amazing feedback on my writing, possibly better than what a technical editor could give. But I can’t help feeling like a fraud. There’s a nagging feeling like, shouldn’t I be able to do this on my own? Before AI, didn’t writers have nothing but themselves to brainstorm, write, and rewrite? Have you faced this issue? Do you have any thoughts on this? Thanks so much for any insights!
I answered part of this question, I think, in the Register interview here, but will recap that part briefly: working with AI has made me a better writer. Simple as that. It’s made me more objective about what’s a decent piece of writing, whether a given piece answers the particular need it’s intended to fill, and whether my argumentation takes the reader logically from A to B to satisfy that need.
“Before AI, didn’t writers have nothing but themselves…”
First, I would say that’s not true on its own: writers have always had other writers, editors, and other readers. I think the idea that writing is this heroic totally solitary activity is a bit incomplete, as it has always had a very social side.
Second, whatever happened ‘before AI’, we’re no longer living in that mythical before-time, just like we don’t live in a time before computers, etc. While doing things without it is entirely valuable and worth mastering for mastery’s sake, we now live *with* AI. One way to live in a time with AI is certainly to totally reject it. Another is to partially reject it for certain things, and use it for others. I think it’s just about finding what that fit is for you, the writer, and for your audience. For example, I wouldn’t want AI to take over the “fun” parts of writing – but your mileage may vary as to what is actually fun for you, and what can benefit from outside eyes – even if those are digital eyes.
Despite hundreds (thousands, really) of people calling me a fraud for using AI to tell new kinds of stories, it’s a feeling that I have never once myself actually shared. Sorry, I just don’t feel like a fraud. I don’t feel sorry, or cowed or threatened even when thousands of people tell me I’m wrong and bad. Maybe that makes me an asshole, but I don’t think so. It means really I’m able to listen to the intuitive voice and the creative light that guides invisibly my work, and much of my life, and unswervingly devote myself to that, even if others can’t see it or don’t quite get it. That light might tell some people to restrict how they use certain technologies, but mine tells me to find out by doing, to dive in and see, and talk about it with others. Following that light has not served me wrong so far – and if anything, it has made my life better and my work richer. If I stopped following that, that’s where I think I would get into trouble and start feeling like a fraud, because I will have given up on what’s actually actively true for me.
I guess my point is: if the journey is authentic, then so will be the end-product.