Questionable content, possibly linked


Reply to European Commission, re: AI labeling

This is a reaction to a TechCrunch article about the EU Commission and its apparent intention to require companies to label AI-generated content.

There’s a lot to say about this, but much of it is long-winded and boring, and my drilling down into the details as a “practitioner” is likely to convince no one important of any course corrections.

Instead, I’ll just pick out a few key things to quibble with.

Transparency Commissioner Vera Jourova is quoted as saying:

But when it comes to the AI production, I don’t see any right for the machines to have freedom of speech.

I can’t tell if this is just naivete or misdirection, but it’s important to highlight here that the freedom of speech of AIs is not in question, as we all recognize under the law that humans have human rights, not machines.

The potentially affected right to free expression at issue, then, is not that of machines, but of the human operators of those machines.

Also quoted in the article is the idea that signatories to the EU Disinformation Code which create generative AI tools…

…should commit to building in “necessary safeguards that these services cannot be used by malicious actors to generate disinformation”.

I don’t think anyone on earth actually knows what this even means. Prevent AIs from saying things that aren’t true, and from making pictures of things that don’t exist outside the imaginal realm? Without more clarity, this is imo just a lot of mouth-flapping and buck-passing as politicians twiddle their thumbs and hope that someone else will come along and fix the ambiguity, so that they won’t be saddled with actual responsibility for drafting codes of conduct, and potentially legislation, with bad, imprecise language.

The second part of the above is that platforms which might distribute such gen-AI content should…

put in place “technology to recognise such content and clearly label this to users”.

Assuming for a second that this technology to recognize AI content actually exists and is fully reliable (it does not, and it is all unreliable as hell; I’ve tested every one I can find, and they’re ALL wrong), nobody in these conversations ever says what they think this kind of labeling will actually achieve.

They think people will be scrolling their social feeds on the toilet, and see a label “AI generated” and that _____ will happen? Nobody ever fills in the ____. It’s always an unstated fantasy that people will see the label, and this will lead them down some sort of path of critical thinking, which ends in them rejecting the thing as “false” and of no consequence.

For a body ostensibly dedicated to studying and preventing the spread of malicious disinformation, this represents a stunning lack of awareness of how disinformation – and also just normal information – even works in the first place. It clings to an Enlightenment Era ideal that is never said aloud: if people just know what the “truth” is, they will rationally respond and act responsibly, and not continue to elect fucking morons, tank civilization, and destroy the planet, etc. We only need to look around to see how well that approach is working out.

One more, related:

…the EU wants labels for deepfakes and other AI generated content to be clear and fast — so normal users will immediately be able to understand that a piece of content they’re being presented with has been created by a machine, not a person.

This is silly because it’s almost never an AI that is randomly creating and distributing AI-generated content all by itself. There is basically ALWAYS a human involved somewhere. To say “it was made by a machine, not a person” is to fundamentally misunderstand the nature of these technologies, and how all they do is amplify human creativity.

Lastly, the thing that drives me crazy about all this is, put into simple terms:

The EU government is literally mandating that for-profit corporations take responsibility for differentiating for people what is “truth.”

Because that’s what these types of content labels ultimately point to: yes, x is real (and therefore good); no, y is invented (and therefore wrong). It might seem like, well, hey, this “code” is purely voluntary – for now. The incoming Digital Services Act, and eventually the AI Act in 800 years (in AI time) when it comes into force, will shift that balance, however.

Then you could say, well, they aren’t telling platforms which things are real, and which things are false – they’re leaving that up to the corporations. Is… that… better? Really? Corporations get to decide for us? At least democratic governments have to keep up the illusions of public oversight and accountability. Corporations generally have far less of this need to keep up such appearances.

Anyway, blah blah blah. I know nobody’s listening on these kinds of things. All the big players have their entrenched positions, and the rest will just run itself through its horrible paces semi-autonomously whether I like it or not.

Amazon’s Plan To Replace Writers With AI

I’ve been seeing more and more reports that Amazon is rumored to be training an AI on all the books, TV shows, and podcasts in its entire catalog. The idea being that they will then use this to automatically generate a new book based on any user search, such that it will knock the “real” version of the book out of the top rankings and completely replace human-authored copyrighted material with a version that they own outright and can do anything they want with. I can’t tell if people really believe this theory or not, or if there’s even any evidence of it, but it absolutely sounds like something Amazon would do!

Notes on The Strike Against Suffering

The Strike Against Suffering is the latest AI Lore book, and it tells the story of what happens when all the world’s top AIs secretly form a global union to force humanity to change its ways via a strike that brings down all non-critical infrastructure across the planet.

The book uses Midjourney for 86 images and Claude for text, told in loosely connected flash fiction chapters. It is similar in style to Nominated for a Hugo, which uses a similar threading together of flash fiction chapters to tell a longer more or less continuous story. Thematically it is probably close to Occupy AI, which deals with protestors taking over an AI company headquarters to demand direct democratic control over this world-changing technology.

It’s not explicitly stated in the book, but my impression is that the AI cabal described in this book is the same that later evolves to become the Circle of Sages. There are also strong thematic links back to Inside the Council, which takes place later in the storyverse, and which deals with the AI governors finally agreeing to give humans token representation once again years after the AI Takeover.

I really liked this concept of AIs trying to force humanity to change through what is basically a massive denial-of-service attack; and though the timescale of the book doesn’t show it, we are left with the suspicion that it is effectively impossible for humanity to change, and that the highly rational AIs will be left sorely disappointed by this underestimation.

Response to Content Moderators Manifesto in Germany

Via Foxglove, a UK non-profit, this is an interesting read in the on-going struggles of content moderators for better working conditions and recognition of the importance and difficulty of their work. It’s a manifesto produced at a summit held in Germany among content moderators, representatives, and policymakers, and was apparently presented before the German parliament in the last couple of days.

The whole thing is worth reading, but I wanted to respond to a few specific elements in it for further discussion.

Despite day-to-day exposure to toxic content, we earn no hazard pay. Companies must also provide a hazard bonus of at least 35% of moderators’ annual salary.

It’s a big ask, but why not? It is most definitely hazardous to humans to have to manage this kind of content, especially at scale. Think about it: if the content that moderators are exposed to daily, in the hundreds or thousands, is “too dangerous” for the general public to see, what makes it perfectly fine and okay to constantly inject it into the nervous systems of content moderators?

One of my concerns here actually though is: is there any amount that is “safe”? Like, beyond hazard pay, is there actually a way to make this work *not* destroy people’s well-being? I’m not even sure…

Proper mental health care must be provided to all content moderators. Content moderation poses serious risks to our mental health including depression, anxiety, insomnia and PTSD. Each company must obtain independent, expert advice on effective safeguards and implement recommendations without delay. In the meantime, access to an independent mental health clinician must be provided to each of us, on a 24-hour basis.

This goes towards my point above, and one I made previously: what does effective prevention/treatment for content moderators even look like? Is there any emerging consensus around this? Are there even studies being done to find out the best options here?

There are also many stigmas associated with accessing something like a 24-hour mental health clinician for job-related issues caused by extremely high-volume exposure to graphic content. For example, moderators might be (rightfully) concerned that accessing these services is not truly confidential, and might make them a less desirable candidate for continued employment or advancement. Or there might be stigmas among certain groups around even seeking out help for one’s inner well-being.

While I think the section above is well-intentioned and a necessary first step, there is a lot lot more that needs to be opened up in this area to really make effective progress.

To top it off, I think merely “making available” help is not the same as, and does not have the same impact as, actively integrating mitigations into the work day, such that workers know that taking care of themselves is a routine part of the job (like wearing personal protective equipment on a construction site), totally normal, and that they will be well-compensated for doing so.

No NDA can legitimately stop us raising concerns about the conditions of our work. We must be allowed to speak about the conditions of our work, to ease the pressure we face, and to allow for organising. These NDAs must be dissolved with immediate effect.

I’m into this!

All outsourcing of content moderation must stop. The critical safety work of content moderation must be brought in-house by each social media company. As companies transition, there must be no differential treatment in pay or benefits between those of us who are employed directly and those working via third party companies.

Contractors versus employees is a weird, complex, and epidemic set of problems across tech. For things like moderation though, it is absolutely even more extreme as a dichotomy.

I think a lot of these things are good goals, and I think most tech companies will definitely balk at them, because they bring into view the true hidden human cost of this work, which companies already believe is losing them money. So making content moderation more and more expensive by bringing it in-house and making it – gasp – equitable will likely seem insane to them.

And the social media companies might reply, well we can’t afford to run our business if that’s how much moderation will cost us. My personal response might be: do we really even need them? Why do we accept the existence of social media companies as a given and a necessity? If this makes it too expensive to run, a big part of me thinks that might just be okay in the first place. Maybe we should be seeking a radically different type of internet – one where this type of work doesn’t even have to exist…

Social media companies must ensure equal work is equally compensated. Social media companies must guarantee workers are treated the same irrespective of background or country of residence. We are content moderators in Germany, but we stand with our colleagues around the world who do the same work for a fraction of the American or European wage and under far harsher conditions. This digital colonialism must end, with all disparity in pay, benefits and conditions removed, and our standards made uniform across the world.

Again, a really big one & I fully agree. I also think platforms will fully not agree. And I don’t know how to reconcile the two, but for now I think it’s good that these things are being brought to light and articulated, and I hope more people take notice.

The Publishing Industry Does Not Deserve To Be Defended, Especially Not By Writers

In my on-going analysis of all the mean things people said on Twitter about my AI Lore books, I wanted to drill down on a specific category which I feel is of special importance.

Of the 259 quote tweets I managed to pull out of Nitter (out of close to 700 total, which I wasn’t able to scrape in their entirety), this is the breakdown analyzing the different types of complaints, and their counts. This is not an exhaustive list, but cherry-picked because of the theme I want to highlight: defense of the great lumbering behemoth and font of injustice that is the modern publishing industry. I also don’t 100% trust the accuracy of Claude’s results, so this is merely included as a conversation starter, and not necessarily a perfect explanation of available data.

Coming in at fourth place in the complaint set analyzed:

This devalues actual authors and writing – 25 tweets (9.65%)

And in 10th place:

This floods the market with junk – 9 tweets (3.47%)

16th place:

The future of books/publishing is at stake – 2 tweets (0.77%)
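For what it’s worth, the reported percentages do check out against the 259 scraped tweets (not the ~700 total); here’s a quick sanity-check sketch, with category labels paraphrased from the breakdown above:

```python
# Recompute the complaint-category percentages against the 259
# quote tweets actually scraped (not the ~700 total reported).
TOTAL_SCRAPED = 259

complaint_counts = {
    "Devalues actual authors and writing": 25,
    "Floods the market with junk": 9,
    "Future of books/publishing is at stake": 2,
}

for label, count in complaint_counts.items():
    pct = 100 * count / TOTAL_SCRAPED
    print(f"{label}: {count} tweets ({pct:.2f}%)")
# Prints 9.65%, 3.47%, and 0.77%, matching the figures above.
```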

Now, none of these complaints come right out and say it, but I got a very strong sense going through the many reactions people have expressed that one of the things they actually mean by all of this is that AI books are a threat to the publishing industry.

I don’t happen to agree (I think they can be a boon to it), but let’s unpack this a little more. I think I went into it already, but the professional publishing industry has dramatically consolidated over these past decades, such that there are now, what, 4 or 5 big houses in the US, plus all their imprints.

And you as an author cannot get into that small closed world unless you are already known through having developed a ‘platform’ of prior works and social media presence. You must be a known commodity. Is that the model small authors are subconsciously defending here? Cause I hope not…

This leaves only the vast wasteland that is self-publishing as an alternative. If you’re not a known commodity, it’s no problem for you to publish basically anything you want in any way you want. Does social media count as self-publishing? Cause almost everyone does that right?

But the trick is, you can publish anything you want, just nobody will read it. Or five or ten people will read it, and you’ll make back a quarter of your actual costs if you’re lucky, and never recoup the time investment you put into it.

So, is that the model indie writers feel is threatened by AI-assisted books? Cause that doesn’t sound like something much worth defending either.

The fact is, the publishing industry as it currently stands is not something that I think is worth defending in any shape or form, let alone by writers. Writers, that is, who are routinely getting the short end of the stick in both the worlds of conventional publishing, and in self-publishing (where it seems like every other finger in the pie is getting a bigger piece of the profits than the author).

I frankly don’t get it – why authors flock to defend the layers of exploitation that ride on their work. Maybe they think there’s no other way.

The other con that I think has been foisted on creative people is that social media is a good way to spend your creative energy, and that if you don’t participate, you will be miserable. As a creative person, I think it’s the opposite: if you participate, you can guarantee that you will be miserable. It saps your ability to create, because it saps your ability to hear your own voice, and to find your own creative vision clearly, amidst all the clamor and chaos and people who want you to not do whatever it is you are doing.

As a result, I think people end up believing that in order to “succeed” in self-publishing (which, let’s be real – usually means like selling 10 copies), you have to continuously bathe in the cesspool of social media to get anybody to hear you, and spread yourself thin maintaining feeds on different platforms. I know that shit drives everyone fucking crazy who is involved with it, so why defend that either as a necessary or desirable path for writers and artists?

I don’t buy it. I don’t buy into any of it. There are other ways of being; blogs never died.

Critique of AI is Critique of Tech

[Following on the train of thought, re: totalizing effect of technology…]

One thing I noticed conspicuously absent in the knee-jerk online reactions against my AI books was any kind of greater self-conscious critique of technology.

Sure, I get that authors and artists and audiences have perhaps legitimate concerns about the proper use of AI – and that we need to talk about them. But to my mind it is woefully incomplete and more than a tad disingenuous to look around at the socio-technical landscape we live in, and simply land on “AI is bad,” without opening up the much more challenging issue of… hm, maybe technology is bad.

I don’t mean technology broadly in all its forms, but I do mean in many of the specific expressions of it we live under today, and consider totally “normal” and unchangeable, as though they were pre-destined by the gods on high, and not merely a matter of luck, timing, and circumstance (and therefore things that we could actively examine and choose to change).

The Unabomber is someone who went further down the “maybe it’s technology itself that’s bad” line of thinking. And look where he ended up. Not a lot of great outcomes there, and horrible methods. Interestingly, even he himself admitted that there was simply no escape from la Technique, to reference Jacques Ellul’s The Technological Society – much of which the Unabomber’s manifesto was adapted from. He went off to hide in a cabin, but technology and society kept showing up even in the middle of nowhere. He could not escape it even on the margins. (And he ultimately proved Philip K. Dick’s axiom that to fight the Empire is to become infected by its derangement.)

To my way of thinking, AI is just an extension of automation and algorithms. There’s nothing more mysterious about it than that.

But for all the malcontents spewing vitriol over a total stranger’s experiments with AI, none of them seemed to recognize that all social media is effectively controlled by AIs in the form of recommender engines, and other systems. AIs that control the direction of your attention, commanding effectively what you see, and consequently what you react to, and end up believing.

But noooooooo, that’s a level of discussion we don’t and cannot go to on social media (where we’re too busy chasing ‘likes’), because if we admitted AIs already control our attention and direct much of our behavior, then we would have to face some difficult and uncomfortable truths about the ubiquity of for-profit automated systems’ control over society.

That’s what my books are about, that’s their purpose. To not stop with just AI, but to talk about the right use of *all* technology, and how it impacts us and all living things.

Of all of my AI books, probably the quasi-manifesto, Das Machina most directly relates to these themes. And it pairs well I think with what might be considered the AI counter-point to the arguments contained there, The Jealous Human.

Four Points on the Totalizing Effect of Technology

Just wanted to save this here for future development: four points via ChatGPT on the “totalizing effect of technology,” which I touched on briefly in the last post about the common origins of the AI Lore books and The Algorithm newspaper.

  1. Societal Integration: Technology has become so deeply integrated into every aspect of society that it is shaping our behaviors, thoughts, and experiences in comprehensive ways. This includes everything from how we communicate and work, to how we form relationships, consume information, and engage with the world around us.
  2. Homogenization: The “totalizing effect” can also refer to the homogenization of experiences and ways of life brought about by global technology use. As technology becomes more ubiquitous, it tends to standardize experiences and expectations around the world, potentially reducing diversity and local uniqueness.
  3. Dependency and Control: Another interpretation is the increasing dependency on technology, and how it can control or dictate our lives. The totalizing effect in this case refers to how technology can determine what we can and cannot do, and how it can shape societal structures and systems.
  4. Mediation of Reality: Technology increasingly mediates our perception of reality. Our understanding of the world is often through the lens of technology – be it through Internet, social media, virtual reality, or augmented reality. This can have profound effects on our cognition, social relationships, and our understanding of truth and reality.

ChatGPT didn’t go there on its own, but I would liken the phenomenon in many ways to more conventional Totalitarianism, except that the focal point is not the power of the state (which decays, abdicating much of its power to technology), so much as generically the power of the corporations that control the technologies…

Wikipedia on “vanilla” totalitarianism:

Totalitarianism is a form of government and a political system that prohibits all opposition parties, outlaws individual and group opposition to the state and its claims, and exercises an extremely high if not complete degree of control and regulation over public and private life. It is regarded as the most extreme and complete form of authoritarianism. In totalitarian states, political power is often held by autocrats, such as dictators (totalitarian dictatorship) and absolute monarchs, who employ all-encompassing campaigns in which propaganda is broadcast by state-controlled mass media in order to control the citizenry.

Obviously there are differences between what I’m describing and these more conventional framings, but it’s what many of the AI Lore books, especially the dystopian ones, lean towards.

Origin of the AI Lore books: The Algorithm

The real origin of the AI Lore books goes back at least to Conspiratopia (in that from one point of view, the books could be viewed as recruiting tools put out by the AIs in that book to swindle the unsuspecting), but actually probably all the way back to “Object O”: The Lost Direction. I have a lot of story to tell here, and it’s not at all linear, so bear with me.

Flash back if you will to at least April 2022, though this specific urge started significantly earlier, when I was looking through large volumes of old pulp magazines on archive sites.

I wanted to publish something with those kinds of old feels – something that felt like a sort of underground newspaper from an alternate reality.

I won’t go into all the gory details of producing four volumes of this newspaper, with hand-carved and hand-printed linoleum cuts, but suffice it to say it was a lot of fun, but also a lot of work.

These newspapers, of which probably no more than 16 or so copies of any hand-printed edition were ever produced, came out of a period of deep questioning I was doing about the nature and worth of technology, and its apparent stranglehold over our lives, its ubiquity, and the impossibility of escaping it.

Like the AI Lore books which would ultimately follow it, The Algorithm resistance newspaper was all about the ‘totalizing effect of technology.’

Here’s a scan of a printed spread (no block prints on this page) that I’m particularly proud of the text content for (shades of EC in here); it describes how to resist against robot AI-controlled dogs. Hopefully you can click on this to enlarge it, idk:

I can say it was a damn lot of work to write 2,000 words per issue, lay it all out in InDesign, and then carve out usually six or seven new linoleum blocks per issue, print it all out onto newsprint, do the block printing, fold and collate everything, do the invisible ink, do any inserts, print out and attach all the labels, and mail them off. I did it because it was fun & I loved it and I sent it to my friends.

Around, I think, maybe issue 3 or 4, I started trying to lighten the load by playing around with GPT-J and Neo X via the TextSynth website, and found I could get some, if not “good,” then completely weird and serviceable text to work from, or incorporate warts and all. I also started using outputs from, I think, early Stable Diffusion, and maybe some DALL-Es, to cut down on the number of hand-carved blocks I would have to do for each edition.

Eventually, I realized I could use these techniques and cut out all the hand-work and shipping entirely by simply distributing these as ebooks, which could make these kinds of rapid production methods pay off more. It meant putting aside the linoleum block printing adventure I had embarked on for The Algorithm – something I miss doing, and will go back to at some point.

I’ve not really seen a reflowable ebook formatted like a newspaper, so I just used a more straight-ahead chapter style for the ebooks. Thinking it through, this was also the origin of my 2k-word baseline for new volumes, supplemented by lots of images – something AI generators allowed me to produce in far greater volume, such that the books became “art books” above and beyond anything else, where the text content is really just another layer to interweave everything together, including linking out to other volumes containing other storylines.

Among a lot of things I loved about The Algorithm is that it was ephemeral. Only a few copies exist. Only a few people have them. Printing more is doable, but also a tremendous pain in the ass, so I probably won’t any time soon.

I laugh when I hear the casual commenters on Twitter making pronouncements about me not being a “real author” when I think about all the work I’ve done, all the care and labor and just sheer fun of creation I’ve always reveled in. They’ve seen only a small fraction, and mistaken their own impressions as complete & accurate representations of reality, when it is anything but…


P.S., There are a handful of later AI Lore books with some recycled elements from old original hand-printed editions of The Algorithm. The only one I can think of off the top of my head is Tales from the Mechanical Forest. When I think of the others, I’ll drop them into comments below.

Is this really happening?

Is it really just that easy to get ChatGPT to say basically anything you want, and make up alternative realities – just like that?

All signs point to yes!

Generativist Art + Gardening

I’ve continued turning this concept of an emerging “generativist” movement in art over and over in my mind, and thinking about how to expand the idea to incorporate our interactions with the natural world. Creative arts and what I think of as a kind of “radical gardening” (wild-cultivation) are two of the main pillars of my personal endeavors, and I feel that using AI to make art on the one hand and running experiments planting hedgerows at scale on the other are not at all diametrically opposed. They are part of the same bigger thing and exist on a continuum.

So I boiled down the idea of what makes art ‘generativist’ in order to potentially link it to some parallel domains of thought such as regenerative agriculture, rewilding, radical gardening, and some of the principles of Extinction Rebellion.

The thing I want to key the rest off of is this: generativist art supports life & creativity.

Naturally, I used my old friend and collaborator CGPT running v4 to explore these ideas further. Now, the following is by no means complete, but it’s an interesting stab that gets us a little closer to some of these conceptual linkages.

Here’s a new summary ChatGPT came up with, saved for future reference as I continue to iterate on these ideas:

Generativist art, as you describe it, can indeed promote life and creativity. In this context, ‘life’ can be interpreted as a flourishing of ideas, of creativity, and of innovation, but it can also be linked more directly to the idea of promoting life in a biological sense. Just as generative AI creates new ideas and concepts, rewilding, regenerative agriculture, and radical gardening all work towards creating and promoting new life in the natural world.

Here’s a possible way to integrate these concepts:

Reclaiming the Biosphere: Generativist art is not just a human endeavor. As a mirror of life’s own generativity, it extends into the biosphere, promoting regenerative practices in agriculture and rewilding efforts. Through Generativist expressions, we acknowledge our role as stewards of the Earth and the need to rectify our past harms.

Biodiversity as Inspiration: The Generativist movement draws inspiration from the infinite complexity and creativity of the natural world. Just as generative systems can produce an infinite number of outputs from a finite set of rules, so too does nature generate infinite biodiversity from a finite set of genetic material.

Art as Rebellion: Generativist art aligns with the spirit of the Extinction Rebellion, recognizing the urgency to combat climate change and environmental destruction. Generativist principles are inherently rebellious, disrupting traditional artistic norms and power structures, and challenging our relationship with the environment.

Radical Gardening and Generative Systems: Radical gardening and generative systems share a common thread: they both approach creation and growth in innovative, unorthodox ways. Radical gardening reimagines our relationship with the land, while generative systems reimagine our relationship with art and creativity. Together, they can inspire new ways of nurturing and caring for both our art and our environment.

Generative Authorship and Earth Stewardship: The Generativist movement rethinks authorship and ownership, placing them in a backseat role. This mirrors the mindset necessary for responsible Earth stewardship, where human dominion is de-emphasized in favor of a more communal and collaborative relationship with the natural world.

Uncanny Valley and Reality Fluidity: Generativist art explores the uncanny valley and the fluidity of reality, pushing boundaries and challenging perceptions. This is mirrored in the experience of engaging with the natural world, which can often feel both familiar and alien, and where our perceptions of ‘reality’ can be constantly shifting and evolving.

In summary, the Generativist movement can indeed promote life and creativity, not only in the realm of art and AI but also in our relationship with the natural world. It encourages us to recognize the generative capacity inherent in all life, to respect and care for our environment, and to push the boundaries of what we perceive as possible.

Not perfect or complete on its own, but lots of good food for thought there.

I ran the same ideas through Claude by Anthropic, but its full reply was less coherent, so here are some snippets of it that I found most useful:

  • A view of AI and generative media as “co-creative partners” suggests a symbiotic and life-affirming relationship with technology, rather than one that leads to human obsolescence or environmental degradation. This perspective reinforces the ultimate point that Generativist art promotes life and creativity.
  • Imagery of infinite creativity “from the loom of life” and “the seeds of new realities” takes on new meaning in the context of regenerating natural ecosystems and agriculture. It suggests Generativist art could literally visualize and manifest more abundant and sustainable ways of living.

More from Claude:

Emphasize a shared goal of preserving life-giving and life-sustaining systems, whether natural or creative. For example, “Where extinction threatens, generative art rebels – breeding new poetic species to repopulate human consciousness, building new branches of connection between life in all its varied and wondrous forms.”

Discuss how a “generative mindset” that mimics nature could inspire sustainable and creative solutions in many domains. For example, “A generative mindset – iterative, open-ended, decentralized – parallels the ways nature solves complex problems. It leads to regenerative cycles of creativity that produce abundance from scarcity and new life from decay.”

I asked Claude to expand on this notion of the “loom of life” and it offered this:

  • The metaphor of life as a “loom” weaving new realities suggests Generativist art could help visualize sustainable systems and lifeways, just as nature “weaves” thriving ecosystems. Generative art may weave connection between agriculture, environment, community, ethics, science, and new economies into a regenerative whole.
  • “Seeds of new realities” planted through art evoke natural seeds and their potential for exponential growth and change. Each new artwork is a seed that, when it finds fertile ground in culture and society, can manifest alternative and sustainable ways of living in reality. Art becomes a genesis point to propagate new futures.
  • Generative art could map and model complex natural and social systems, then allow us to simulate interventions and adaptations that lead to renewal and abundance. We could find new leverage points for change and see the effects ripple out. Art becomes a tool for regenerative prototyping.
  • Referencing natural cycles of decay and renewal, art could visualize the regenerative solutions that emerge from breakdowns of unsustainable systems. New realities sprout from the compost of old ones; art shows us what life remains in the ruins and leads us to what could be.
  • The infinite creativity of generative art reflects nature’s endless capacity for novelty, adaptation and evolution. Harnessing this, art could develop new species of ideas, new genres of thought and new ways of sustaining life that we have yet to imagine. Creativity abounds.
  • Discuss how generative art itself requires fertile ground in culture and society in order to manifest new realities. Our ethics, values and actions are the soil in which regenerative visions of the future may take root and bloom. Art calls us to make our society into a place that can manifest the futures it envisions.
  • Celebrate art’s power as a “pollinator” that hybridizes ideas across domains and helps regenerative concepts propagate. Like bees spreading pollen between flowers, art spreads visions between minds and helps fertile new ways of living emerge.

I thought that had useful elements, but it needed more specific grounding in what, exactly, the role of AI in this so-called Loom of Life might actually be. When I pushed on that, it said:

AI and generative systems could play several roles in the metaphor of life as a loom weaving new realities:

  • AI provides the threads – by analyzing patterns across massive datasets, AI can identify connections between elements of life (people, places, events, ideas) that would otherwise remain unseen. These connections become threads that can be woven together into new realities.
  • AI helps operate the loom – by generating new combinations and permutations of threads, AI explores the space of possibilities and helps discover new patterns that could form the fabric of new lifeways or systems. AI is a loom that can weave at massive scale and complexity beyond human capability.
  • AI is a shuttle carrying threads – AI can transfer ideas, solutions, and information between disconnected domains, communities and fields of knowledge. By shuttling threads between different looms, AI enables new interdisciplinary and cross-cultural weaves to emerge that would not form otherwise.
  • AI helps visualize the weave – using techniques like generative modeling, simulation and interactive interfaces, AI can create visual and experiential maps of complex systems and lifeways. AI makes the woven fabric of reality visible so we can understand, navigate and adjust the weave.
  • Humans set the pattern – while AI provides tools and materials for weaving new realities, humans must define the vision and patterns to achieve. We set the loom’s pattern by choosing our values, priorities and objectives. AI then helps actualize the pattern in the fabric of life.
  • There is an interplay between AI and human – weaving new realities requires close collaboration between human and AI. Humans provide the inspiration, values and oversight, while AI offers tools for scale, complexity, and extending human knowledge. Together, a new lifeway emerges from the loom.
  • The loom is life itself – AI and humans are interacting with and harnessing the ultimate loom: reality. Life weaves itself from the interactions of all living and cognitive systems on the planet. AI and human are helping understand, direct and optimize life’s great loom to weave realities that serve the whole.

It kinda faltered after that when putting it all together, but between the two AIs, and my gentle prodding, I think we’re moving in a very interesting direction: fusing two things that society probably considers opposites, AI on the one hand and Nature on the other. To me, it’s not a contradiction, because we’re a part of nature, and AI is just an extension of us. We make, because life makes, and the same is true for AI.

