Questionable content, possibly linked

Category: Other Page 19 of 177

Quoting Sougwen Chung on Machine-Aided Art

Via The Observer:

In an art world where the presence of the human hand is increasingly the ultimate currency and a badge of honor, Chung has met those who shudder at the thought of a machine-made painting. “I get excited when people think I’m not making ‘real art’ because it means I’m doing something different and writing my own story,” they say.

TED talks are dead to me, but here is one by the artist that is pretty interesting, from 4 years ago:

And what they’re saying here about “hybridity” seems to get to what I was aiming for in the earlier centaur/cyborg post.

New Painting: Family

I discovered a marvelous thing called “acrylic ink” about mid-way through this one, which I will be exploring with greater, ahem, fluidity in the paintings that follow.

There are some early Modernist/Cubist influences here for me, because I’ve been looking closely at French and Spanish paintings from roughly a century ago. But I’ll let the viewer decide what those are, rather than spell them all out.

This one is on unstretched canvas, and lays out to approximately the same size as my dinner table.

Identifying themes in contemporary generative AI art

I guess the reason I got here was in thinking through what makes my work with AI “unique” or else part of a common cultural idiom of people who are using these tools. What characteristics describe us? What are the common themes?

The ones I’ve found so far are these:

1) centaur/cyborg operator/authorship;

2) networked narratives (entities + relationships + characteristics + changes over time);

3) hyperreality, meta-commentary, disorientation, deconstruction;

4) oscillating polarities of meta-modernism, closeness/distance, sincerity/irony/inauthenticity.

There’s a possible fifth theme I just identified that might make sense to unpack separately: what I’m calling the “prima materia,” after the alchemists, though in this context it could take several meanings rooted in the training data ingested into the AIs. Whether or not that data included copyrighted materials is only one small part of a much bigger struggle we will need to grapple with: what do we actually want these models to model, of all the things that human nature can be, all the good and bad? What are the cultural, social, political, historical, and technological constructs that informed this technology in the first place, and what is it merely perpetuating?

Anyway, I’m sure there are more out there, but those are the broad themes that I’ve struck upon myself and seen supported in the writing of others that is emerging on these topics.

On Centaurs & Cyborgs in AI

A friend sent me this piece recently, in which the author posits what I guess are two types of AI power users, called here the centaur and the cyborg:

In AI technology, Centaurs refers to a type of hybrid usage of generative AI that combines human and AI capabilities. It does so by maintaining a clear division of labor between the two, like a centaur’s divided body. The Cyborgs by contrast have no such clear division and the human and AI tasks are closely intertwined.

I’m not sure I really agree with this, but the set-up goes something like:

A centaur method is designed so there is one work task for the human and another for the AI. […] The lines between the tasks are clear and distinct, just like the dividing line between the human and horse in a Centaur.

They then go on to explain trying to create an image of a centaur in AI image generators, and having a lot of difficulty applying what they’re framing above as a ‘centaur’ approach.

The point of this story is that the Centaur method failed to make the Centaur. I was forced to work very closely and directly with the AI to get the image I wanted, I was forced to switch to the Cyborg method. I did not want to, but the Cyborg method was the only way I could get the AI to make a Centaur with a robotic top. Back and forth I went, 118 times.

The more I thought about all this, the more I felt that there’s not really any pure “centaur” use of generative AI that I have ever found. It is always, without fail, a negotiation, a conversation of iterations and tweaking, selection and improvement. It’s always a back and forth. So does that make it a cyborg activity? Weren’t centaurs in classical mythology also tutors, at least in Chiron’s case?

I’d love to believe that AIs “learn” from our interactions, and that I could be their centaur tutor (remind me to tell my AI tutor story some time), but in my experience they don’t, or at least don’t seem to over short time increments. Perhaps they do in aggregate over longer periods, but that doesn’t help me in the moment to break through whatever knowledge-gates stand between me and whatever I’m trying to achieve.

I like the idea of cyborgs, but I am a biologically-biased humanist in the end. I like Haraway’s notion of the cyborg as a being or way of being that breaks down boundaries and bridges borders. I think there is something to that at any rate.

I guess that’s a long way around to say that there’s not really any distinction in my eyes between a cyborg and a centaur user of current-generation gen AIs. If there’s any centaur AI out there right now, maybe it’s more like something along the lines of Harold Cohen’s AARON robots autonomously generating art at the Whitney. (And yes, that’s still art.) But as they say in that video, that is rules-based and not statistics-based like today’s crop of commercial gen AI tools. I suppose in a more rules-based situation you could employ more of a “set it and forget it” method (if you, for example, set your local Stable Diffusion install to continuously generate images of [….] and let it run 24/7 without intervention). I don’t know, I’m just exploring the idea space around all this to see where there might be usable ground or tools to employ.

It seems this idea of the centaur computer user dates back at least to Garry Kasparov, as there is an anecdote that is always repeated around how it got its name. This seven-year-old article by Nicky Case in the MIT Press Journal of Design and Science has a lot of great stuff in it, but I’ll just clip some interesting bits. After losing to IBM’s Deep Blue:

However, Garry couldn’t help but imagine: what if a human did work together with an AI? The next year, in 1998, Garry Kasparov held the world’s first game of “Centaur Chess”. Similar to how the mythological centaur was half-human, half-horse, these centaurs were teams that were half-human, half-AI.

Later:

In 2005, an online chess tournament, inspired by Garry’s centaurs, tried to answer this question. They invited all kinds of contestants — supercomputers, human grandmasters, mixed teams of humans and AIs — to compete for a grand prize.

Not surprisingly, a Human+AI Centaur beats the solo human. But — amazingly — a Human+AI Centaur also beats the solo computer.

… The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains.

And this is fantastic:

a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.

Of course, there will be a fair amount of the new ways of thinking, living, and being that will be abhorrent, but there will also be many that are beautiful, true, and interesting.

Doug Engelbart envisioned that the computer would be a tool for intellectual and artistic creativity; now, our devices are designed less around creation, and more around consumption. Forget AI not sharing our values — even non-AI technology stopped supporting our values, and in some cases, actively subverts them.

And this, I think, is where this starts to get really interesting:

At first, Garry wasn’t surprised when a human grandmaster with a weak laptop could beat a world-class supercomputer. But what stunned Garry was who won at the end of the tournament — not a human grandmaster with a powerful computer, but rather, a team of two amateur humans and three weak computers! The three computers were running three different chess-playing AIs, and when they disagreed on the next move, the humans “coached” the computers to investigate those moves further.

As Garry put it: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

The idea is that multiple non-expert humans, using multiple AI tools and carefully sifting through the results, could arrive at conclusions that other supposedly “better” thinkers might not.

AIs are best at choosing answers. Humans are best at choosing questions.

And that’s how the winning Human+AI team of the 2005 online tournament chose their “+”. The two amateur humans gave questions to their three weak computers, and when the computers gave back differing answers, the humans gave them even deeper questions. […]

In all these examples of centaurs, the human chooses the questions, in the form of setting goals and constraints — while the AI generates answers, usually showing multiple possibilities at once, and in real-time to the humans’ questions. But it’s not just a one-way conversation: the human can then respond to the AI’s answers, by asking deeper questions, picking and combining answers, and guiding the AI using human intuition.

Anyway, more to say here, but those are all the loose ends I had hanging in my brain about this for the last week and a half or so…

Quoting Tumburu Wikipedia

I love this:

A South Indian legend records that Tumburu once performed severe austerities and pleased Shiva. Tumburu asked Shiva to grant him a horse-face, immortality, liberty to travel the universe, skill in music and singing, and the ability to reside with and serve Shiva. Shiva blessed him and granted the boons that he sought.[5]

I feel like I need to find out why he wanted to have a horse face?

Perplexity put together a not bad reply to this, citing sources that I haven’t checked, and probably won’t:

According to Indian mythology, Tumburu asked Shiva for a horse face for the following reasons:

  1. As a result of severe austerities: Tumburu performed intense penance to please Lord Shiva. After successfully gaining Shiva’s favor through his devotion and austerities, Tumburu was granted the opportunity to ask for boons.
  2. To obtain a unique appearance: By requesting a horse face, Tumburu sought to distinguish himself with a distinctive and memorable appearance. This unusual feature would set him apart from other celestial beings and musicians.
  3. To gain special powers: In some interpretations, the horse face may have been believed to grant Tumburu enhanced musical abilities or other mystical powers associated with his role as a divine musician.
  4. As a sign of devotion: Asking for such an unusual boon could be seen as a demonstration of Tumburu’s complete surrender and devotion to Lord Shiva, showing that he valued Shiva’s blessing over conventional beauty.
  5. For immortality: Along with the horse face, Tumburu also asked Shiva to grant him immortality. The horse face may have been seen as a small price to pay for the gift of eternal life.

It’s worth noting that while the horse face is a defining characteristic of Tumburu in many accounts, the exact reasoning behind this specific request is not always explicitly stated in the mythological texts. The horse face has become an integral part of Tumburu’s iconography, often depicted in Indian visual art and described in various legends related to this celestial musician.

Reading that with my mythical goggles on, the part about demonstrating complete devotion makes a lot of sense, if you think of Tumburu offering himself basically to be the mount of Shiva, and to be driven by Shiva’s will for all eternity, etc. “Cool story, bro.”

Quoting Charlie Engman on AI Art (Artnews)

I thought this piece by Charlie Engman on AI art in Art in America was pretty alright, as far as these things go. Tired of reading most of the empty diatribes out there on this topic, but there were some bright points here. Will just pull out bits & pieces here:

AI art is often labeled derivative, seen as inherently secondary to the superior creativity of humans. True creative expression is posited as a sudden and novel rupture or disruption, a big bang of creativity, rather than a cumulative, collaborative process. […]

Alan Turing, the grandfather of artificial intelligence, predicted that, in the face of the mechanical reproduction of their roles, the “masters [experts with specialized knowledge or skills] would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.” Creative labor has long been shrouded in such mystery. In the context of capitalism, art has always had to appeal to mysticism to justify its fundamentally unproductive, experiential nature. It is seen as an ineffable sacred act that supersedes the other labor that attends it. This has led to a personality cult of the individual creative genius who holds exclusive ownership to some magical artistic impulse. We celebrate Jeff Koons, not the assistants and fabricators who construct his work.

It makes sense, then, that some artists would be skeptical of a technology like AI that appears to be attempting, rather successfully, to lay bare the constituent parts of expression, potentially undermining the mystique that has long protected the authority of this individual creative genius.

I like that angle around Jeff Koons and his “helpers” too. Does Koons actually fabricate much/any of “his” “art?” I have no idea, but just like I argued here with writers, most professional writing is, in the end, pretty much always collaborative. Editors, proofreaders, layout, marketing, etc. The same has been true for the workshops of major artists throughout much of history. It’s a collaborative deal with many unsung helpers, whose contributions are negated by focusing on the cult of the Heroic Artist-Capital-A above all else. It’s the wrong thing to focus on in art, and it’s part of why I don’t give that much of a shit when people sling the now familiar insult, “He’s not a real artist.” Boohoo.

More from the piece, I also super appreciate this as someone who spent years doing content moderation:

While much attention is given to the exploitation of artwork in AI training, less focus is placed on other forms of labor. For instance, OpenAI employed Kenyan workers to label harmful content such as pornography, violence, and hate speech in order to train its content moderation AI systems. These laborers were paid less than minimum wage for their challenging and often traumatizing work, which was crucial to making the AI models commercially viable. The role of such labor practices in shaping AI algorithms has been largely overlooked, while the exploitation of artwork and other expressive data in training sets dominates the AI discourse. The disparity suggests that the labor of these workers is less valuable—less inherently “human”—than the labor of art-making, even as their critical role in the machine learning process demonstrates otherwise.

Well, with how much of it I’m quoting, you may as well just go read the original. But here’s more:

To the extent that AI diminishes creativity, it is that, in the eyes of the algorithm, the output of a conventional artist (a photographer, say) and the output of anyone else (a meme shit poster, say) have the same value; they differ only in register. AI is accelerating an ongoing institutional collapse of authorship and taste. The high-culture museum has been exploded into an open-air county fair, and the elites—the masters—are scrambling to retain their special status.

A Short Story About A Cat Where Every Word Is Capitalized and Trademarked

The Cat™ sat on the Couch®, staring at the Window™. Outside, a Bird™ fluttered by, catching the Cat’s® attention. The Tail™ flicked with excitement as the Cat™ crouched low, eyes locked on the Glass™. Suddenly, the Wind™ blew, and the Door® creaked open. The Cat® saw its Chance™ and pounced off the Couch®, racing towards the Garden™. The Grass™ felt cool beneath its Paws™ as the Cat® chased the elusive Bird™. But just as the Cat™ leaped, the Bird™ flew away, disappearing into the Sky®. Disappointed, the Cat® returned to the Porch™, curling up for a nap in the Sun™.

ChatGPT


I’ve long held this pet idea: that eventually every word individually will be a registered trademark, and you’ll have to make micropayments to rights holders every time you try to use them. If you can’t afford to pay, you won’t be able to communicate using them.

Harold Cohen AI Art Video 1987

And a video from 2024 from the Whitney Museum on Cohen’s art.

Xenobots & the Third State Between Death & Life

Yikes!

Cells come alive after death

In this third state, certain cells — when given nutrients, oxygen, bioelectricity, or biochemical signals — have the capacity to transform into new multicellular organisms, exhibiting new functions even after death.

The researchers reviewed recent studies showing the incredible capability of cells to reorganize and take on new forms after the death of the organism. 

Skin cells become xenobots 

In 2021, U.S. scientists found that skin cells from dead frogs could adapt to a lab environment and spontaneously form multicellular organisms — actual living machines called “xenobots.” 

While most machines are constructed from materials like steel and plastic, which can degrade or break over time and have harmful side effects, living systems made from self-renewing and biocompatible materials would avoid those negative consequences.

These xenobots displayed behaviors far beyond their original biological purpose, using hair-like structures called cilia to move through their surroundings.

They also proved adept at material collection, information recording, self-healing, and limited replication.

Lung cells become anthrobots 

Similarly, other researchers discovered that human lung cells could self-organize into tiny multicellular organisms known as “anthrobots.” 

Anthrobots range in size from the width of a human hair to the tip of a sharpened pencil. Remarkably, these multicellular robots are designed to self-assemble and have demonstrated a pronounced healing effect on other cells.

These anthrobots could not only move independently but also repair themselves and heal damaged nerve cells nearby.

Freaky, from the Wikipedia page on xenobots:

The first xenobots were built by Douglas Blackiston according to blueprints generated by an AI program, which was developed by Sam Kriegman.[3]

Xenobots built to date have been less than 1 millimeter (0.04 inches) wide and composed of just two things: skin cells and heart muscle cells, both of which are derived from stem cells harvested from early (blastula stage) frog embryos.[7] The skin cells provide rigid support and the heart cells act as small motors, contracting and expanding in volume to propel the xenobot forward. The shape of a xenobot’s body, and its distribution of skin and heart cells, are automatically designed in simulation to perform a specific task, using a process of trial and error (an evolutionary algorithm). Xenobots have been designed to walk, swim, push pellets, carry payloads, and work together in a swarm to aggregate debris scattered along the surface of their dish into neat piles. They can survive for weeks without food and heal themselves after lacerations.[2]

Other kinds of motors and sensors have been incorporated into xenobots. Instead of heart muscle, xenobots can grow patches of cilia and use them as small oars for swimming.[8]

Multi-Concept Addressing for Latent Space Navigation

Preface: How I got here

I’ve been hanging out at the library lately, and realized that my Dewey Decimal System (DDS) knowledge has gotten pretty rusty. I found a “concise” summary of it here, and printed off the First, Second, and Third Summaries, which cover The Ten Main Classes, The Hundred Divisions, and the Thousand Sections, respectively.

I knew there was some controversy about the DDS, but I hadn’t checked in on what it was these past couple decades, I guess. But in skimming through the classes, divisions, and sections, it became apparent how lopsided its distribution of identifying numbers is towards all things European. Again and again, it literally marginalizes entire cultures and their achievements by sticking them into “grab bag” leftovers like:

  • 290 Other religions (where 220-280 are all overtly dedicated to Christianity, and 200-219 are no doubt heavily influenced by that tradition)
  • 490 Other languages (420-480 are all European languages)
  • 890 Other literatures (810-880 are all Euro or American lit)

I won’t bore the non-taxonomically inclined among you by going line by line through The Thousand Sections (though I am strongly tempted to, but that would prolong this preface unnecessarily). But some curiosities jump out: Islam, at 297, doesn’t have its number to itself; the section also includes Babism & Bahai Faith. Likewise, Buddhism is not given its own name, let alone its own number; it is a subdivision of 294, Religions of Indic Origin. I don’t know the exact numbers, but based on some initial skimming on Perplexity, it looks like Buddhism and Islam combined have roughly as many followers on Earth as Christianity does globally. But the Dewey Decimal System doesn’t represent these other dimensions of social reality.

Anyway, all that is to say, with an eye to not duplicating the failings of the DDS as a metaphorical jumping off point, couldn’t it be an interesting exercise to come up with some kind of flexible, less judgemental addressing system for navigating high-dimensional latent spaces, such as those you encounter as a user of generative AI models and systems? I’ve already experimented in this direction visually in the past, thinking about how gen AI image creation systems like Midjourney could benefit from some kind of hand-held controller, which would let you rapidly assign and modify dimensional values on the fly, in order to traverse neighborhoods and relations in more or less real time. Latent space as navigable VR, if you will.

I took this problem to three different AI genies, asking, paraphrased: give me a Dewey Decimal System for latent space. The first genie, whose angelic name is ChatGPT-4o, gave me answers that were mildly insightful, but not interesting enough to pursue in depth. The second genie, whose moniker is Claude, gave me results which were promising, and a UX response that was riddled with errors and hobbled by rate limits. The third genie, named for the Winds, brought with it clear thinking and an in-depth ability to solve the problem through interrogation. That genie’s failings are its refusal to follow custom instructions or the equivalent at a prompt level (“code only, no explanation”), and its slow speed. But what it lacked in those areas, it made up for in its ability to guide me towards a tentatively adequate V1/MVP, which is presented here without further ado after this absurdly long preface. My sincerest sorry/not sorry.


Introduction

Disclaimer:

The Multi-Concept Addressing system (MCA) is an attempt by a non-technical author to develop a preliminary schema for one way of potentially addressing locations within latent spaces. It may not prove to be the “best way,” but it seemed good enough to put out there to get the conversation started.

Much of the rest of this text that follows comes directly from Mistral, with light edits from myself.

MCA: The “Dewey Decimal System” for Latent Space in Generative AI

Multi-Concept Addressing, or MCA, is a proposed addressing system designed to navigate and interpret high-dimensional latent spaces in generative AI models. It provides a structured and interpretable way to represent complex scenes and images, much like the Dewey Decimal System organizes information in libraries. (*See: Preface)

Key Components:

  • Base Concepts: High-level concepts that define the broad categories of elements in a scene.
  • Sub-Concepts: Detailed information about specific elements within the base concepts.
  • Relations: Relationships between different concepts, capturing how they interact.
  • Context: Additional contextual information that provides nuance and depth to the scene.

Operational Principles: MCA operates on membership degrees or intensities (that is, whether an image, for example, contains members of a particular concept, and how much), allowing for precise control over the presence and importance of various concepts and relationships within a scene.

Problem Solved: MCA addresses the challenge of navigating and understanding high-dimensional latent spaces in generative AI models in something that approximates a human-readable format. It provides (hopefully) a holistic and flexible solution that can potentially be adapted to various contexts, including image generation, semantic analysis, and data retrieval.

High-Level Example

Consider the following natural language prompt:

a cat riding a bicycle wearing a football helmet playing a banjo in outer space

This prompt contains multiple concepts and relationships that need to be represented in a structured and interpretable way. Let’s see how MCA can achieve this. [Back to my text with Mistral excerpts included below.]

The first part of an MCA address consists of a string like this representing base concepts and weights:

Ani90Obj80Env90Act80Acc70

Where the name values for high-level (or “base”) concepts represented in this query are:

Ani: Animals
Obj: Objects
Env: Environments
Act: Activities
Acc: Accessories

Using only this for addressing just leaves us in a very fuzzy general vicinity… maybe something like a room in a given library, or a big shelving unit. We might be able to find what we need, but we’re most likely to stumble around looking for it without more specific information.
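To make the format concrete, here’s a quick sketch of how that base-concept segment could be unpacked in Python. This is my own code, not Mistral’s, and the assumption that every entry is a letter code immediately followed by an integer weight is mine:

```python
import re

def parse_base_concepts(segment: str) -> dict[str, int]:
    """Split a run like 'Ani90Obj80Env90Act80Acc70' into {code: weight}."""
    # Non-greedy letters, then digits: each match is one (code, weight) pair.
    return {code: int(weight)
            for code, weight in re.findall(r"([A-Za-z]+?)(\d+)", segment)}

print(parse_base_concepts("Ani90Obj80Env90Act80Acc70"))
# {'Ani': 90, 'Obj': 80, 'Env': 90, 'Act': 80, 'Acc': 70}
```

The non-greedy match means the codes don’t even have to be exactly three letters, which matters if abbreviations like “Myth” stay in the list below.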

Base Concepts Sample List (Provisional)

As an aside, I had Mistral work up a set of what might be the top base concept names and abbreviations. I kept asking, do you have any more edge cases, and it kept giving more and more. Eventually I gave up, as this seems like an adequately representative set for a v1 of this concept. Here is that list, for completeness (though it also made a list that was much, much longer, and I eventually had to push the stop button; I’ll at least spare you that one). I think there could be better three-letter codes representing each concept, but I left them as the first three letters to keep it simple. Here it is:

  • Animals (Ani)
  • Objects (Obj)
  • Environments (Env)
  • Activities (Act)
  • Accessories (Acc)
  • People (Peo)
  • Plants (Pla)
  • Structures (Str)
  • Weather (Wea)
  • Time (Tim)
  • Emotions (Emo)
  • Events (Eve)
  • Sounds (Sou)
  • Text (Tex)
  • Abstract Concepts (Abs)
  • Technology (Tec)
  • Food and Drink (Foo)
  • Transportation (Tra)
  • Art and Culture (Art)
  • Natural Phenomena (Nat)
  • Science and Mathematics (Sci)
  • Health and Medicine (Hea)
  • Education (Edu)
  • Sports (Spo)
  • Mythology and Folklore (Myth)
  • Fantasy and Science Fiction (Fan)
  • Geography (Geo)
  • History (His)
  • Lighting (Lig)
  • Colors (Col)
  • Textures (Tex)
  • Movement (Mov)
  • Interactions (Int)
  • Symbols (Sym)
  • Virtual and Digital (Vir)
  • Celestial Bodies (Cel)
  • Microorganisms (Mic)
  • Chemicals (Che)
  • …and on and on

Navigating Sub-Concepts

Getting back to the address for that specific reference prompt: if we’re looking at all of this in JSON, then the next part of the address derives from sub-concepts within those broader base concepts, which could be represented like this:

{
  "SubConcepts": {
    "Ani": "cat90",
    "Obj": "bike80banjo70",
    "Env": "space90",
    "Act": "ride80play70wear60",
    "Acc": "helm70"
  }
}

So our MCA partial address now has the base concepts, and sub-concepts, which would look something like this, give or take:

Ani90Obj80Env90Act80Acc70::cat90bike80banjo70space90ride80play70wear60helm70

It looks inscrutable-ish, but it’s not really. It’s just a way of compressing the JSON schema into a single line.
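As a sketch of that compression (again my own code, not Mistral’s), the per-concept runs are already strings, so “squishing” is just concatenating them in order:

```python
def squish(base: str, sub_concepts: dict[str, str]) -> str:
    # Joins the per-base-concept sub-concept runs in insertion order,
    # relying on Python dicts preserving that order (3.7+).
    return base + "::" + "".join(sub_concepts.values())

sub_concepts = {
    "Ani": "cat90",
    "Obj": "bike80banjo70",
    "Env": "space90",
    "Act": "ride80play70wear60",
    "Acc": "helm70",
}
print(squish("Ani90Obj80Env90Act80Acc70", sub_concepts))
# Ani90Obj80Env90Act80Acc70::cat90bike80banjo70space90ride80play70wear60helm70
```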

Relations & Context

But still, in its current form, we don’t necessarily know enough about the different entities and actions to find exactly what we’re looking for. We might get close, but still have major errors. We need to know something more about the relationships between all these entities or values that are named, as well as any larger context not otherwise captured in the address so far. Otherwise our accuracy is going to be pretty low for navigation.

In JSON, these might look like:

  "Relations": [
    "Cat-RidingOn-Bike",
    "Cat-Wearing-Helm",
    "Cat-Playing-Banjo"
  ],
  "Context": [
    "Surreal",
    "Humorous",
    "BrightColors"
  ]

And if we squish that back down into the full MCA multi-concept address, with weights, it might look something like this:

Ani90Obj80Env90Act80Acc70::cat90bike80banjo70space90ride80play70wear60helm70::Cat-RidingOn-Bike_Cat-Wearing-Helm_Cat-Playing-Banjo::Surreal_Humorous_BrightColors

Granted, we could probably also assign weights to the relations and context elements, but I didn’t want to complicate it any more than it already is.
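Going the other direction, here’s a rough Python sketch (mine, and assuming the four segments always appear in the order base::sub::relations::context, as in this worked example) that unpacks a full MCA address back into its components:

```python
import re

def parse_mca(address: str) -> dict:
    """Unpack a four-part MCA string into the JSON-style structure.

    Assumes base::sub::relations::context ordering; a sketch, not a
    fixed standard.
    """
    base, sub, relations, context = address.split("::")
    return {
        "BaseConcepts": {c: int(w)
                         for c, w in re.findall(r"([A-Z][a-z]+)(\d+)", base)},
        # Sub-concepts come back flat; re-attaching each one to its base
        # concept would need an external vocabulary, so we don't try here.
        "SubConcepts": {c: int(w)
                        for c, w in re.findall(r"([a-z]+)(\d+)", sub)},
        "Relations": relations.split("_"),
        "Context": context.split("_"),
    }
```

Running it on the address above recovers the base weights, the flat sub-concept weights, and the relation and context lists.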

Putting It Together: Full MCA Example Schema

Putting the above extended MCA address all together again as JSON to elaborate its components:

{
  "MCA": "Ani90Obj80Env90Act80Acc70::cat90bike80banjo70space90ride80play70wear60helm70::Cat-RidingOn-Bike_Cat-Wearing-Helm_Cat-Playing-Banjo::Surreal_Humorous_BrightColors",
  "BaseConcepts": "Ani90Obj80Env90Act80Acc70",
  "SubConcepts": {
    "Ani": "cat90",
    "Obj": "bike80banjo70",
    "Env": "space90",
    "Act": "ride80play70wear60",
    "Acc": "helm70"
  },
  "Relations": [
    "Cat-RidingOn-Bike",
    "Cat-Wearing-Helm",
    "Cat-Playing-Banjo"
  ],
  "Context": [
    "Surreal",
    "Humorous",
    "BrightColors"
  ]
}

Is this actually simple and flexible, like I had hoped setting out on Today’s AI Side-Quest? Hard for me to be the judge, but so far it is the only thing of its kind that I have found out there (though I did find some adjacent concepts I’m not yet well-versed enough to explore in depth), and to me at least it addresses a real and specific need, whether or not it, erm, gives a completely accurate, reproducible address every time in all situations. It is still maybe a bit vague, but it at least narrows the task of navigation down to a more constrained dimensional space, with keys to the values that could be changed in searching for the specific “shelf” that contains what you are after.

Really, what I imagine in all of this is like a bunch of conceptual characteristics mapped to sliders in a UI, where fully on means that a given characteristic/dimension/concept/tag is applied to the max within the desired outputs. And fully off means that attribute is excluded, plus all the values between. Then using a machine mapped to this would be about playing around with different conceptual sliders to emphasize or de-emphasize members of a given group or groups of high-dimensional characteristics in the latent space.
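A minimal sketch of that slider idea (hypothetical names, and my own assumption that a zero weight simply drops a concept from the address rather than writing it out):

```python
# Hypothetical slider panel: each base concept gets a 0-100 slider,
# where 0 means "exclude this concept from the address entirely."
def sliders_to_segment(sliders: dict[str, int]) -> str:
    return "".join(f"{code}{weight}"
                   for code, weight in sliders.items() if weight > 0)

print(sliders_to_segment({"Ani": 90, "Obj": 80, "Env": 90, "Act": 80, "Acc": 70}))
# Ani90Obj80Env90Act80Acc70
```

Each slider tweak regenerates the segment, so moving through the latent space becomes a matter of nudging weights and re-resolving the address.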

Phew, lot of words to get out here, but I think that brings us to the end for now, if not the “conclusion.”


Post Script

I am trying to find a natural language UI design tool that can output a version of the above as a simple web app, something to the effect of what I described in this prompt:

app for navigating addresses in latent space based on given values (concepts, subconcepts, relations, context) and their weights. the app consists of sliders paired to specific example attributes or concepts which can be adjusted to yield different results in a viewer window that shows that location

This is somewhat janky, but a quick version of that made using UIzard.io just to leave you with something more concrete to consider:
