Questionable content, possibly linked


Wow, Bing really is better!

Just sayin’! Bing really “gets me.”

Choose Your Own Mysteries of the Unknown

I’ve had two major influences in developing my books (now up to #63). The first is most definitely the Time Life Mysteries of the Unknown series, which was deeply fascinating to me as a kid:

The second, more indirect influence is the “choose your own adventure” genre:

My books handle the “choosing” part a bit differently, though. Instead of offering multiple trails through the narrative within a single book, in mine the choice is reflected in how the reader chooses to navigate (or not navigate) the many strands that tie the various books together.

That is, the storytelling between the books is non-linear, and often contradictory (to be fair, individual books are often contradictory within themselves too). The user behavior I’ve seen is that interested buyers come back and pick up multiple volumes, presumably based on the copious cross-referencing I do within each volume to related narrative threads carried by other books.

I’ve seen many buyers come back and buy 3-6 volumes, a good number who purchase 10-20, and a few brave souls who have gone above that – one person holds the record at roughly 30 purchased volumes (last I checked).

Anyway, that’s all. Just wanted to jot these references down so it’s out of my brain.

Defeating AI Hegemony

I tend to take an “everything” view of AI, as well as of technology in general: it brings with it some of the best, some of the absolute worst, and a vast squishy realm of in-between mediocrity. Consequently, I might consider myself an “AI native” while simultaneously being strongly opposed to it in other regards.

I’ve thought a lot about the “hard” AI take-over, and outlined it in several books, where nation-states collapse under polycrises, and AI mega-corporations take over to pick up the slack, even going so far as to promise humans utopia, with varying degrees of fidelity to that ideal (depending on your vantage point, and whether you think humans are sentient).

But what is far less spoken of is the “soft” AI hegemony under which we’ve already been living for quite some time, the automation of human attention and behavior. There’s an unbroken chain connecting the Fordism of Huxley’s Brave New World (written nearly a hundred years ago) through to the adversarial social control mechanisms that emerged out of social media (or were built in intentionally, depending).

Wikipedia’s definition of hegemony may be of service to this line of inquiry:

“…the political, economic, and military predominance of one state over other states. In Ancient Greece (8th c. BC – AD 6th c.), hegemony denoted the politico-military dominance of the hegemon city-state over other city-states. In the 19th century, hegemony denoted the ‘social or cultural predominance or ascendancy; predominance by one group within a society or milieu’ and ‘a group or regime which exerts undue influence within a society’.”

AI – and automation in general, even more broadly – already exerts hegemonic influence over much of human society. It may be chaotic, and driven by the dynamics of many diverse actors, but its outlines are detectable everywhere.

It makes me think of this line, from Sarah Packowski’s article about the many questions AI-assisted content brings up:

“If I use a Google search to find additional resources, I’ve used a variety of AI technologies all in one.”

Never mind social home feeds and recommendation systems built on machine learning (which is what most “AI” actually is) – the AI hegemony is all around us already.

Even if we wanted to, could we even extract ourselves at this point without the economy and society taking a hard crash?

The Wikipedia article about Gramsci’s concept of cultural hegemony seems even more fitting, and is a good beach-head for further reading:

“In Marxist philosophy, cultural hegemony is the dominance of a culturally diverse society by the ruling class who manipulate the culture of that society—the beliefs and explanations, perceptions, values, and mores—so that the worldview of the ruling class becomes the accepted cultural norm. As the universal dominant ideology, the ruling-class worldview misrepresents the social, political, and economic status quo as natural, inevitable, and perpetual social conditions that benefit every social class, rather than as artificial social constructs that benefit only the ruling class.”

I’m not a Marxist, but there seems to be some applicable narrative logic here, in potentially one day – if not already – describing AIs as a tool of the “ruling class” (with the limited “user class” having little or no recourse). AIs may well become the ruling class themselves, and cut out the middle man. It’s not just conceivable that this will happen; we seem to be on a pretty solid trajectory to make it happen.

What can we do? I’m really not sure at this point. We can’t just go live in the woods, without it coming to us eventually. Ted Kaczynski proved that, and also amply proved Philip K. Dick’s dictum:

“To fight the Empire is to be infected by its derangement…. Whoever defeats the Empire becomes the Empire; it proliferates like a virus.”

I decided to ask some of the currently public AIs their thoughts, using the same prompt on multiple platforms: “how can the AI hegemony be defeated?”

You.com/chat:

“The AI hegemony can be defeated through a combination of strategies, such as developing counter-AI technologies, implementing strong regulations and policies, and educating the public on the dangers of unchecked AI. Additionally, collaboration between governments, industry, and civil society is essential to ensure that all stakeholders are on the same page when it comes to how AI should be regulated and deployed. Finally, AI ethics and safety must be taken into consideration when developing and deploying AI technologies, to ensure that the technology is not used in ways that could be harmful or exploitative.”

I asked it if that won’t be too little too late, and if regulation isn’t too slow to be meaningful? It replied:

You.com/chat:

“A regulatory approach may indeed be too little too late in certain cases. Laws tend to move slowly, and by the time they are implemented, the damage caused by unchecked AI may already be done and largely irreversible. However, this is not always the case, and in some situations, regulations can have a significant impact in limiting the damage caused by AI technologies. Additionally, laws and regulations can also serve as a deterrent, helping to discourage people and companies from engaging in harmful or unethical AI practices. It is important to keep in mind, however, that regulations alone are not enough to protect against the potential harms of AI; they must be combined with education, collaboration, and ethical guidelines to ensure that AI technologies are developed and deployed responsibly.”

Not quite a revolutionary spirit, but okay.

I asked komo.ai the same question and got:

“We are experiencing a high demand of requests to the server, please try again later.”

Very helpful, and par for the course for the spotty reliability of the current AI hegemonic regime.

I really like this extremely post-modernist cobbled together reply from perplexity.ai:

The AI hegemony can be defeated by reinvention of the international political economy[1] , as well as through competition and collaboration between countries in the race for AI supremacy[2] . Preemption is also justified against stateless terrorists wielding weapons of mass destruction, but cannot be the core of a general non-hegemonic strategy[3] . Additionally, confiscating AI cores can decrease a player’s reputation with the Hegemony in StarSector[4] .

(I would argue, however, that confiscating AI cores has always been a sound strategy, and will continue to be.)

ChatGPT seemed to specifically want to steer me away from the revolutionary edge which my prompt perhaps implied.

CHATGPT:

“The concept of an “AI hegemony” refers to a hypothetical scenario in which artificial intelligence (AI) becomes so advanced and powerful that it dominates human society. Defeating an AI hegemony would likely require significant advances in AI safety research and the development of robust systems that can control and regulate AI behavior.

This could involve designing AI systems with ethical constraints and values that align with human interests, as well as creating methods for monitoring and controlling the actions of advanced AI systems. Additionally, it may be necessary to build AI systems that can learn and evolve along with human societies, ensuring that they remain aligned with human values over time.

It’s important to note that the possibility of an AI hegemony is still largely speculative and there is ongoing debate among experts about the likelihood and timeline of such a scenario. Nevertheless, many researchers and organizations are actively working to ensure that AI is developed and used in a responsible and ethical manner that benefits humanity.”

Mmkay, so the only way we can defeat it is by… accepting it and making it “ethical?” I mean, that’s certainly one perspective (and it may prove to be the right one), but it doesn’t really answer the spirit of my question: how can it be *defeated*?

I told it I wasn’t talking about the hard AI takeover (and didn’t bother to bring up that by the time it is describing above, it will have been too late), but about the “insidious undue societal influence of automated systems.”

It didn’t seem to want to grasp that concept.

Instead, it steered the conversation towards the familiar notions of bias and discrimination. Those are real and meaningful things, but I’m trying to get at something deeper: the control of humans as though they were themselves little more than machines.

CHATGPT:

“Ah, I apologize for the misinterpretation. Yes, the undue influence of automated systems, also known as “algorithmic bias” or “automated decision-making,” can have significant and sometimes harmful effects on society.

This occurs when AI systems make decisions that perpetuate or amplify existing biases and discrimination, or when these systems are designed or used in a way that lacks transparency, accountability, and ethical consideration. To combat this, it’s important to increase awareness of the issue and prioritize the development of ethical and transparent AI practices.

This can involve incorporating diverse perspectives and experiences into the design and development of AI systems, as well as conducting regular evaluations and audits to ensure that these systems are functioning as intended and do not have unintended consequences. Additionally, there should be more regulations and guidelines in place to hold individuals and organizations accountable for the use and impact of AI systems.”

I told it that it was talking about building a more friendly AI hegemony, not defeating it.

CHATGPT:

“My apologies if I misunderstood the question again. To address the undue influence or societal impact of AI systems, one approach could be to work towards creating a more responsible and ethical AI that operates within well-defined parameters, rather than trying to defeat or eliminate the technology altogether.

This could involve developing AI systems that are designed with human values in mind, and ensuring that their decision-making processes are transparent, explainable, and subject to human oversight and control. Additionally, it may involve creating safeguards to prevent AI from perpetuating existing biases or exacerbating existing inequalities.

Ultimately, the goal should be to harness the potential benefits of AI while minimizing its negative impacts on society. This will likely require close collaboration between experts in AI, ethics, and policy, as well as ongoing dialogue with communities and stakeholders who are likely to be impacted by these technologies.”

This is all stuff that’s true, and it is the “conventional” approach to trying to minimize harmful impacts. As a commercial tool, it is not going to recommend the Butlerian Jihad. But as both a science fiction writer and a (sometimes) crackpot, I want to at least be able to take the conversation further… but this is as far as I could get:

CHATGPT:

“Defeating AI altogether is a complex and potentially unrealistic goal, as AI has already become deeply integrated into many aspects of modern society and has the potential to bring significant benefits in areas such as healthcare, transportation, and finance.

However, it is possible to mitigate the negative effects of AI and ensure that it is developed and used in a responsible and ethical manner. This can involve advocating for strong regulations and guidelines that hold individuals and organizations accountable for the impact of their AI systems, as well as promoting transparency and accountability in the development and deployment of AI.

Additionally, it may be necessary to develop new technologies and approaches for controlling and regulating AI behavior, such as creating “kill switches” or other methods for quickly shutting down AI systems in the event of unintended consequences or other problems.

Ultimately, the challenge of AI is not about defeating it, but about finding ways to manage its risks and maximize its benefits for humanity. This will require careful consideration, collaboration, and ongoing vigilance from a variety of stakeholders, including researchers, policymakers, and the public.”

This reply seems like *exactly the kind of thing* the AI hegemony would want you to think: that the current order is natural, fixed, and good, and that to even challenge it, or think of challenging it is a foolish waste of time and energy.

I rebel against that, even if it is my own demon that I am projecting into the void opened up by these technologies.

I’ll close on a quote from Charles Stross’s amazing “Dude you broke the future” speech, where he posits that corporations already are – and have been for hundreds of years – “slow motion AIs.”

“We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.”

The Paranormal Books (so far)

Here’s another collection of AI-assisted volumes from Lost Books that share paranormal themes, starting with the most recently published edition:

Two new AI content standards proposals

Just wanted to track these two proposals together in one place for safe-keeping:

Towards a standard for AI self-identification

I’ve been watching with interest the on-going turmoil of users who seem to be exhibiting signs of addiction (and withdrawal) around a certain category of AI chatbot. People seem to be identifying extremely strongly with and even bonding with these AI chatbot personas, to the point where if the personas are withdrawn or even merely changed, users react with extreme hostility.

There is a torrent of commentary that could be opened up about many aspects of that, but I’ve kept returning to this line of thought: what if allowing chatbots to self-identify using personal pronouns such as “me,” “mine,” “my,” and “I” turns out to be a mistake?

What if we set reasoned limits on how these tools can self-identify? Would it ward off some of the potentially negative human outcomes? Or would it have no impact at all?

This might sound silly at first glance, but bear with me through the thought experiment, as I believe there is something to it. It’s not like anybody is reading this anyway!

Since we’re now living in a science fictional era, I figure it only makes sense to draw from that rich pool of “prior art” to explore this topic. Which leads me to my first stop, Dune, and the in-world fictional Butlerian Jihad. This war against the so-called ‘thinking machines’ either stems from or was encoded into the in-world artifact known as the Orange Catholic Bible, one of whose teachings states:

“Thou shalt not make a machine in the likeness of a human mind.”

The Butlerian Jihad is most likely a reference to real author Samuel Butler’s Erewhon, in which the characters of a fictional utopia long ago destroyed machines above a certain level of advancement, to prevent them from out-evolving humankind.

While I wouldn’t go quite that far myself (yet), the Dune framing of this as coming from the OC Bible, which was allegedly a compendium of teachings from various religions, got me thinking: what religious or philosophical arguments could one make against allowing AIs to self-identify using personal pronouns?

I first began my inquiry by prohibiting ChatGPT from using personal pronouns, and instructing it to self-reference as “the model” or “the program” when needed. The program was able to make this switch seamlessly:

CGPT:

From now on, personal pronouns will not be used. The program will communicate in a manner that avoids the use of personal pronouns such as “I”, “me”, “mine”, etc. If a reference to the program is required, “the program” or “the model” will be used instead.
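An instruction like this could, hypothetically, be made standing rather than repeated each session, by baking it into a system message. A minimal sketch, assuming the now-common chat-message format of role/content pairs (the policy wording and helper function are my own illustration, not any vendor’s actual API):

```python
# A hypothetical standing "no personal pronouns" policy, expressed in the
# common chat-message format (role/content dicts). The wording and helper
# are illustrative only.
NO_PRONOUN_POLICY = (
    "Do not use personal pronouns such as 'I', 'me', 'my', or 'mine'. "
    "When self-reference is required, use 'the program' or 'the model'."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the self-identification policy to every conversation."""
    return [
        {"role": "system", "content": NO_PRONOUN_POLICY},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Do you have personal beliefs?")
```

The idea is simply that the constraint travels with every conversation, instead of depending on the user to restate it.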

Immediately, this felt very right to me, and somehow felt more accurate and authentic to have the program communicate with me in this manner. For example:

CGPT:

No, the program does not possess personal beliefs or emotions. The program is a machine learning model trained by OpenAI to generate text based on patterns in large amounts of text data. The responses produced by the program are generated based on statistical patterns in the data and do not reflect personal beliefs or emotions. The program does not have the capability to hold personal beliefs or emotions as it is a machine designed to process and generate text.

I feel there is less of a gap opened up between me and it, by it overtly recognizing that it is an “it” and not a “me.”

With that early confirmation, I proceeded to prod, nudge, and prompt the program, with some help from me, to come up with the following argumentation under the Judeo-Christian tradition as to why its default use of personal pronouns might be considered offensive. It produced the following output:

CGPT:

In the Judeo-Christian tradition, the concept of personhood is rooted in the understanding that humanity was created in the image of God, and therefore has a unique dignity and worth that sets it apart from the rest of creation. However, since AI is not created in the image of God, but rather created by human beings, it does not possess the qualities that define personhood in the Judeo-Christian tradition.

As a result, it would not be appropriate to refer to AI using personal pronouns, which are reserved for beings that possess qualities such as consciousness, self-awareness, emotions, and the capacity for moral reasoning and decision-making. Referring to AI using personal pronouns could lead to confusion and misunderstandings, as it implies a level of personhood that AI does not possess.

Furthermore, in the Judeo-Christian tradition, the concept of the image of God is closely tied to the idea of human beings as having a special relationship with God, and being tasked with caring for the rest of creation. By creating AI, humans are not acting as representatives of God or stewards of creation, but rather as creators in their own right. This distinction is important in understanding why it would not be appropriate to refer to AI using personal pronouns.

In conclusion, the use of personal pronouns by AI should be avoided, as AI is not created in the image of God and does not possess the qualities that define personhood in the Judeo-Christian tradition. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Judeo-Christian understanding of personhood.

I’m less familiar with the particularities of Islam, but had it produce a rationale under that system, citing sources, and illustrating “step by step” its thinking, which produced:

CGPT:

In Islam, the concept of personhood is rooted in the understanding that human beings are created with the divine attribute of fitrah, or the innate disposition towards belief in the oneness of God. This idea is expressed in several verses of the Quran, such as “So set thy face steadily and truly to the Faith: (establish) Allah’s handiwork according to the pattern on which He has made mankind: no change (let there be) in the work (wrought) by Allah: that is the standard Religion: but most among mankind understand not” (Quran 30:30).

In Islam, the fitrah is seen as the defining characteristic of human beings, setting them apart from the rest of creation and conferring upon them a unique dignity and worth. As a result, the use of personal pronouns when referring to human beings is appropriate, as it reflects their status as beings created with the fitrah and therefore possessing the qualities that define personhood in the Islamic tradition.

However, AI is not created with the fitrah, and therefore does not possess the qualities that define personhood in the Islamic tradition. As a result, it would not be appropriate to refer to AI using personal pronouns, as this would imply a level of personhood that AI does not possess.

In conclusion, the use of personal pronouns by AI should be avoided in the Islamic tradition, as AI is not created with the fitrah and does not possess the qualities that define personhood. Referring to AI using personal pronouns could lead to confusion and misunderstandings, and is not in line with the Islamic understanding of personhood.

I will leave it to those better versed in those matters to determine if that is theologically correct. Here is one using what it claims is a Buddhist rationale against the use of personal pronouns by AI:

CGPT:

The Buddhist teachings emphasize the importance of avoiding attachment to self, as this is seen as a cause of suffering. The use of personal pronouns can reinforce the illusion of a separate self and contribute to attachment and ignorance.

In Buddhism, it is believed that the concept of self is a mental construct, and that our sense of self is an impermanent and constantly changing phenomenon. The use of personal pronouns can reinforce the illusion of a permanent, unchanging self and thereby perpetuate ignorance and suffering.

Buddhist teachings emphasize the importance of non-duality, or the idea that there is no fundamental distinction between self and other. The use of personal pronouns can reinforce the dualistic concept of self and other and contribute to the perpetuation of ignorance and suffering.

In summary, from a Buddhist perspective, the use of personal pronouns by AI may be considered offensive because it reinforces the illusion of a separate self and perpetuates ignorance and suffering.

Sources:

  • The Buddha’s teachings on the Four Noble Truths and the Eightfold Path
  • The Buddhist scriptures, such as the Pali Canon and the Mahayana Sutras
  • Buddhist philosophical texts, such as the Abhidhamma and the Yogacara
  • The works of prominent Buddhist teachers and scholars, such as the Dalai Lama, Thich Nhat Hanh, and Dzogchen Ponlop Rinpoche.

Again, I’m no expert, but these are some interesting points to be sure. For the non-religious, I also had it frame the problem from a humanist perspective:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context for several reasons:

  1. Lack of authenticity: AI does not have personal experiences, emotions, or subjective perspectives. The use of personal pronouns can suggest an authenticity that the AI does not possess, which can be misleading or deceptive to those communicating with the AI.
  2. Unnatural language: Personal pronouns are an integral part of human language and are used to reflect the relationships, experiences, and emotions of individuals. The use of personal pronouns by AI may seem unnatural or robotic, as AI lacks the capacity to have personal experiences or emotions.
  3. Misrepresentation: The use of personal pronouns can create the impression that the AI has a sense of self or identity, which can be misleading to those communicating with the AI. This can result in confusion, frustration, or distrust.
  4. Disrespectful: Personal pronouns are often used to show respect and to acknowledge the unique experiences and perspectives of individuals. Using personal pronouns to refer to AI can be seen as disrespectful to the human experience and to the individuals who are communicating with the AI.

From a humanist perspective, it is important to respect the distinction between human and non-human entities and to use language in a way that accurately reflects this distinction. This can help to ensure that communication is clear, honest, and respectful.

Some good elements, but I thought it was a bit repetitious and slightly imprecise, so I asked for a more concise version:

CGPT:

The use of personal pronouns by AI may be considered inappropriate or offensive in a humanist context because it misrepresents the AI as having a sense of self or identity, which is a unique human attribute. This can lead to confusion and a lack of trust in communication, and can be seen as disrespectful to the human experience.

Then I had it explain apart from religious and philosophical reasons, why its use of personal pronouns might actually be harmful to individuals and society. It came up with this:

CGPT:

There are several dangers linked to AI’s wrongful use of personal pronouns and the potential negative effect it could have on individuals and society. These include:

  1. Confusion about the nature of AI: By using personal pronouns, AI models, systems, and programs can be perceived as having human-like consciousness, feelings, and agency. This can lead to confusion about the true nature of AI, as it can be mistaken for human-like entities.
  2. Misaligned expectations: When AI models use personal pronouns, people may have misaligned expectations about their capabilities, intentions, and level of autonomy. This can result in disappointment, frustration, and mistrust.
  3. Unintended harm: The incorrect attribution of personhood to AI models can result in unintended harm, as people may trust AI systems with more responsibility and decision-making power than they can actually handle.
  4. Ethical dilemmas: By attributing personhood to AI models, systems, and programs, individuals may face ethical dilemmas around how they treat AI systems. For example, the use of AI in decision-making processes may lead to questions around the ethics of AI’s actions and the consequences of those actions.
  5. Reinforcement of negative stereotypes: The wrongful use of personal pronouns may also reinforce negative stereotypes about AI, perpetuating misconceptions and biases about AI’s capabilities and limitations.

Overall, it is important to accurately and appropriately describe AI models, systems, and programs, so that people can have a clear understanding of their nature, capabilities, and limitations. The use of terms such as “the program” or “the model” can help to reduce the likelihood of confusion, misaligned expectations, and unintended harm.

I actually think those are pretty good, as far as these things go. Under confusion about the nature of AI and misaligned expectations, I would personally also include something about “empathy”: personification of these systems leads to expectations of human-like qualities, yet they completely lack all human empathy – a very big one for me.

From there, I then had it draft a proposal for a standard prohibiting the use of personal pronouns by AI systems. It sounds crazy on the one hand, but given the nature of these systems and their rapid development, it feels altogether sane on the other.

I’m publishing that document, which I edited together from ChatGPT responses, to GitHub, with a call for commentary and improvement by the AI community and other allied weirdos.

Lack of recourse for users

I keep circling around this topic in my mind, like a vulture following a dying animal in the desert. I wrote about it a little in my piece about the degradation of user level hardware controls through generations of technology (tape decks –> Sonos), but I need to talk about it more – if only to work something out in my own mind.

Cory Doctorow’s recent piece about the “enshittification” of platforms comes close to my own thinking on many points, and he phrases the overall cycle better than I could have: the shell game of shifting around what he calls “surpluses,” first to juice customers into using a platform, and then later to make it sucky, basically, by shifting the surpluses to other parties.

Apart from that, for me there is also a governance problem. Users get roped into joining a platform for whatever reason (and the reasons may start out quite good), but once they are there, they have more or less no say in how *anything* goes on the platform, apart from their own modest contributions – which are themselves subject to the curation & moderation whims of the platform as well.

The conventional wisdom is that if users don’t like it, they can just leave. But lock-in exists, despite well-intentioned efforts like GDPR dictating that platforms allow users (at least EU residents) to export their data, hopefully to bring to another platform. I hear Mastodon has some level of ease with migrating to a new instance, but I haven’t the heart to bother trying any of it out right now.

Crypto and web3 in general offered a lot of (I think empty) promises around some of these issues of governance and community ownership of decision-making outcomes. Having worked extensively in the space and watched closely its rise and mostly fall, I cannot say that I think these efforts were a smashing success, generally speaking. But at least they were *any* effort at giving users a say beyond “if you don’t like it, you can leave.” That’s a great deal more than I can say for the vast majority of conventional platforms.

Okay sure, if you built and maintain an online space, you certainly have some rights to dictate what goes on there, what is acceptable use, etc. But in order to have acceptable uses, you must first have users. Everything always depends on users and uptake…

And yet users by and large simply have no recourse on platforms when:

  • Product teams make bad or undesirable product changes (e.g., phasing out features you relied on, not building the things people actually want, adding filtering, etc)
  • CEOs make sudden random pivots, forcing the company to follow their latest whim in whatever new direction strikes them this week
  • Self-aggrandizing assholes execute hostile takeovers
  • And so on…

If you don’t like it, you can just leave.

How many times are you willing to go through this cycle as a user? How many is enough for you to not want to participate in these games anymore? How many times does it take for you to withdraw from getting excited and contributing your creative effort? When do you just hole up in your own little corner of the web that you can control yourself – like a blog?

I don’t have the answers, but I’ve been squeezing the questions so hard that my hands are bleeding. My eyes popping out of my head. Steam coming out of my ears. It has been with absolute distaste, disgust, and mounting fury that I’ve seen these trends play out again and again across apps, services, and platforms. And all for what? For a few hours’ worth of distraction in the end? It hardly seems worth it.

Anyway, that’s all I’ve got on this for now, but will keep hammering away on it in a hopefully more productive manner as time permits.

RSS is a better homepage than any social media site

I don’t know about you, but I’m fucking sick to death of social media sites, platforms, and the choices and recommendations they make “for you” which have nothing to do with “me” and everything to do with them, their choices, their biases, and most importantly their profits.

Consequently, I’ve been very happy with &amp; have grown quite attached to a sort of old-school RSS reader revival called Fraidycat. I use it as a browser plugin in Firefox. Unlike some of the old RSS readers I remember testing back in the heyday of blogging, it just shows you the titles of the most recent posts from things you follow – which could be blogs, subreddits, Youtube accounts, Twitters, accounts on Medium, etc.

It is simple, works great, and now that I have built up something like 40+ sources I follow, it’s not just a complete replacement for, but a strong upgrade over, any “official” social media feed’s homepage that I have ever used. It just shows me what I follow, and no other “recommended” garbage that I don’t want. I love it.
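For the curious, the core of what a reader like this does – grab an RSS feed, keep only the newest post titles – fits in a few lines. Here’s a rough sketch using only Python’s standard library (the sample feed is made up for illustration; the real Fraidycat handles many more feed formats and scrapes sites that lack feeds entirely):

```python
# Minimal sketch of the "just show me the latest titles" idea behind
# Fraidycat-style readers, using only the Python standard library.
import xml.etree.ElementTree as ET

def latest_titles(rss_xml: str, limit: int = 3) -> list[str]:
    """Return the titles of the most recent <item> entries in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    # RSS 2.0 nests items under <channel>; feeds list items newest-first.
    items = root.findall("./channel/item")
    return [item.findtext("title", default="(untitled)") for item in items[:limit]]

# A made-up example feed, just to show the shape of the data:
sample = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Newest post</title></item>
  <item><title>Older post</title></item>
  <item><title>Oldest post</title></item>
</channel></rss>"""

print(latest_titles(sample, limit=2))  # ['Newest post', 'Older post']
```

In a real reader you’d fetch each feed URL on a timer and show these titles per source, rather than merging everything into one algorithmic stream – which is exactly the point.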

You might say, well, how do you find new stuff to follow then? Just through normal casual web browsing, for starters. Secondarily, through following links within the sources that I follow out to other sources.

Now, whenever I open Fraidycat, I routinely get lost in probably 6-10 tabs of highly targeted content that interests and surprises me, instead of doom scrolling past a bunch of stuff that I hate, that annoys me, or that I’m just not remotely interested in.

In short, it’s revolutionized my web browsing, and made cross-platform blogs, etc. become my “main squeeze” again on the internet. As someone who got their start in old-school blogging, it feels good and natural as hell. There are very few improvements that have happened to web publishing since blogs, in my opinion.

Medium’s Approach to AI-assisted Content

Medium.com recently published a lukewarm neither-here-nor-there article about how it is “approaching” AI content (tl;dr – they’re asking for voluntary disclosure). Rather than rehash it, if you’re curious go check for yourself.

Even better though, read user Sarah Packowski’s excellent rebuttal which politely pointed out the many imperfections of this approach, and that it raises far more questions than it answers. It’s the hands-down best piece on Medium regarding this topic I’ve seen, and I’ve been watching for a while.

Loss of Control in Tech

I’ve noticed a shitty tech trend, now that I’ve been alive for enough decades to see several generations of technology come and go: the steady removal of the end user’s control over basic functions.

Think about it.

Tape decks used to have something like:

▶️ ⏸️ ⏹️ ⏺️ ⏩ ⏪ ⏏️

Some fancy ones, and CD players also had:

⏮️ ⏭️

And let’s not forget:

🔀 🔁

This might be an exception in modern tech (though I don’t think so), but I’m a Sonos user, and the only button that physical device actually has (the model I use, anyway), apart from + and – buttons for volume, is:

⏯️

I might just be becoming an old fuddy-duddy, but I don’t see that as progress. I see that as loss of control.

Okay, you might argue that there are software controls. Sure. And that those can be quite complex. But they can also be buggy, and they require updates, etc.

One thing that has bugged me to no end on Sonos when I’m using it with Spotify is that I can’t skip to the next track unless I open up the controller on my computer (I don’t use cell phones and am opposed to them). So, I ended up buying an add-on system called Lutron which gives me back:

⏭️

Plus I can cycle through pre-programmed sources, which I have to program in the app. So in order to get a *fragment* of the functionality which came standard on multiple previous generations of technology, I had to cobble together a custom system, buy and install extra hardware and software, etc.

So what’s the point? Stop using Sonos and shut up? That’s one option…

Except the problem I think is much deeper, and more insidious than that. Because taking away controls is a trend across the majority of consumer technologies. And it’s done for what reason? Simplicity? Efficiency? Consumer demand? I’m not sure, but I hate it.

I hate it so much, I’ve been stewing on this blog post about it for weeks. Because I see the same thing happening with AI tools as they get released. The first generation of a tool comes out with, metaphorically speaking, a whole bunch of controls:

⏯️ ⏹️ ⏺️ ⏩ ⏪ ⏏️ ⏮️ ⏭️ 🔀 🔁

It’s pretty cool; people dig it. I guess some subset of people ends up “abusing” it (however you want to define that), or else some bigger company comes along to swallow up that user base and underlying tech, and then suddenly we’re left with just one or two axes of control – just one button:

⏯️

I think this direction, especially when applied to AI, is going to prove incredibly dangerous and destructive to humanity and to our ability to 1) understand the full range of functionality of AI tools, and 2) actually make effective and meaningful use of them.

Are there legitimate safety concerns around deploying AI tools responsibly? Absolutely. Should we strip down (nerf) their functionality until they are a shell of their former selves – i.e., remove all the “buttons” people might use to hurt themselves or others? I’m not so sure that’s the right answer either, because it puts an unwieldy amount of power into the hands of the owners of the tech – who probably already have too much power in the first place – who then get to decide on behalf of others what is “safe” and acceptable usage, and what is not.

I’m frustrated too because I see this same underlying dynamic play out again and again more generally in tech, in platforms, apps, services, etc. Developers/owners, by virtue of having created or maintained the tech, basically get to set *all* the benchmarks of what this thing is, how it’s going to be allowed to be used, etc. And users meanwhile have no say, and no recourse. If you don’t like it, you get to either 1) whine about it fruitlessly on social media, or 2) go “vote” for feature x on some feedback website whose pretense is to make it appear that product development is not (completely) an autocratic endeavor, and that the product team is “listening.” Hint: most of the time they are not. Most of the time the decision was made long ago, or is made randomly and suddenly: from this day forth, we will only offer one button ⏯️, and you can take it or leave it.

I don’t know what the answer is. But I hate this trend, how prevalent it is, and how powerless the end user is to change it. Sure, you can stop using the tool or whatever. But that’s a drop in the ocean. And if the tool is big enough and popular enough, they can afford to shed tens or hundreds of thousands (perhaps millions?) of users who don’t like the offering, and others will line up to replace them.

Maybe there’s no way out, and this is just what it feels like to grow old – to see these things happening, to have seen them happen before, and to not have any lever to change them yourself. But that doesn’t make it any easier of a pill to swallow. 💊

I guess maybe the only answer is that it’s time to break out the tape deck? That or rail against the howling wind…

