Questionable content, possibly linked


Bassett on AI capitalism

Great essay about AI being useful for capitalism by Matthew Eric Bassett. Will just plop in some of my favorite quotes, but strongly urge you to go read the whole thing.

This first one is part of the storyline of The Big Scrub:

The second is a “Trustworthiness of AI” problem. If such a tool is controlled by someone else you can never really trust it to act on your behalf even if you cannot avoid using it. Consider the AI on your phone in the previous example – the one that would end the call if you were advocating for an idea that it didn’t like. The same AI could change your words so that the other party never hears your original thoughts, instead, it would make sure people only hear the words that the “ideal” you would have said.

And this:

The future of creativity might be humans trying, like workers on Amazon’s Mechanical Turk, to create a few novel ideas that the AI would then expand, fill out, and send to a mass audience – think hundreds of thousands of low paid writers pounding on their keyboards to come up with one or two lines of an original Shakespeare play.

I actually would invert this as having the potential to be a good thing – if you take out the assumption of “low paid” and “one or two lines.” Personally, I expect that this is very much the direction of creative work: that artists (and the definition of ‘artist’ becomes vastly democratized through use of these tools) do indeed create a few novel ideas, and then work with AI to expand them. I think that’s a good thing?

For me, it has meant being able to create 67 books in about seven months, and advance a “minimum viable product” version of a very complex storyverse that I’ve been wanting to tell for nearly two decades. Yeah, it’s not perfect – but that’s part of its charm, and laying the V1 foundation will help me rapidly iterate on the V2, 3, 4, and so on…

I do agree with this delicious burn though:

But these days we tend to let a small cadre of billionaires think of the world they want to live in and implement it while we just accept the consequences, or pretend that we made those decisions ourselves.

Great piece, and an instant RSS subscribe for me.

Are writers “less angry” about AI than visual artists?

Found this recent piece from Alberto Romero’s The Algorithmic Bridge, proposing that writers are less upset about the rise of generative AI tools than visual artists are. I’m not sure that corresponds with my own experience on the matter…

Romero writes:

Yet, despite the apparent fact that AI could be an equally problematic hazard for writers, I don’t see us doing the same. When I look at my guild I perceive a sharp contrast with artists: Writers are either learning to take advantage of generative AI, dismissing it as useless, or ignoring it altogether.

I don’t see hostility or anger. I don’t see fear.

I guess we’ve been looking in different places! Given conversations I saw several months ago on this topic in subreddits like r/selfpublish and r/writing, it seems to me like writers are plenty angry about these things. Maybe it’s not yet reached quite the fever pitch that it did so quickly with digital visual art though. But it’s certainly there, simmering.

He quotes Paul Graham, who says (in part):

“Certainly bad writers will use it, but good writers won’t.”

Mmkay. No comment.

Putting that aside, Romero does get into some interesting theorizing about style & the post overall is a worthwhile read, like all his writing.

I consider myself both a visual artist and a writer, and I’m puzzled as to why creative people seem to be, overall, fearful of these technologies. Or perhaps it is merely that fear tends to be louder and more vocal in expressing itself than the people who are quietly, happily churning out projects using these tools.

Personally, I feel like I am an “AI native” – even though I have a great many reservations about the technology’s impacts on society. It just feels like something I’ve been waiting for my whole life, and it’s finally occurring & I get to be a part of it.

What’s there to fear? New opportunities? New methods? A changing landscape? To me, for artists, those are all always good things. Change is what keeps us vital as creators. Creation, in my humble opinion, is literally the expression of change.

Here’s one thing I find myself profoundly disagreeing with in Romero’s piece:

Language lives in the realm of definiteness. The link between the writer’s intention and the reader’s perception is direct and well-defined—univocal.

I guess I inhabit a different world artistically. Even putting aside contrivances like hyperreality or the Uncanny Valley – where I find myself homesteading these days – what about poetry? I would counter that ambiguity, and confusion, and the many “true” impressions of a thing are the essence of language, and what makes it interesting and worthwhile in the first place.

We all want to strive to communicate well the things we mean, but apart from the striving, for me that is where the definiteness stops. It’s not that there’s no objective reality, or that truth doesn’t exist (there is, and it does); it’s just that humans exist. We all have different unique viewpoints and piles of experiences that make us feel and perceive things differently. And that’s good. But it’s not univocal by any means. It’s a multiverse.

Eroding trust in institutions

One trope I’ve seen a lot working in the “disinformation industrial complex” is this idea that certain kinds of information erode trust in public institutions. Certainly, this can be true in some cases, but it’s rare to ever hear anyone say that this might in other instances be well-deserved.

Cory Doctorow, however, says it in this recent interview:

From the transcript:

“People’s confidence in our institutions has collapsed because our institutions are not worthy of our confidence, right… When the NIH lets Moderna make an mRNA vaccine using NIH patents without taking a license and then doesn’t step in to assert that patent when Moderna quadruples the cost, right, and scoops up for itself a four thousand percent margin on this vaccine that was produced with 10 billion dollars in public money, that that is a reason to doubt our institutions…”

I don’t have much specific knowledge of the details he’s describing above with Moderna, but for me it’s merely illustrative of many cases where there may be legitimate grievances with public (or private) institutions which are the root cause of the erosion of trust in them, over and above chatter on social media, which is merely symptomatic.

Treating the symptoms while ignoring the root causes is not a great way to find solutions.

AI / ML Chatbots Should Link to Their Model Cards

Have written a little bit in the past about AI/ML model cards, and their potential importance in AI attribution. So, I started asking ChatGPT and YouChat (not sure if that’s its “official” name?) to give me information about their underlying models (LLMs), and to point me to model cards. Both services seem to make up a variety of responses to these questions, without offering – so far as I can tell – any kind of definitive, authoritative answer.

YouChat in particular was hallucinating URLs where it claimed its model card could be found – both on its own domain and on the Google AI Platform – and it told me different things about its underlying model depending on what questions I asked, so I took that to mean the information was unreliable. I didn’t carefully track ChatGPT’s replies (and it was many days ago now), but it gave me a similar run-around.

So I spent a little time earlier sketching out a draft for a proposed set of best practices for how AI & ML tools such as chatbots ought to be able to give accurate and reliable info about their underlying models & their model cards. I published the first version (extremely rough draft) as a Github gist for now until I can spend more time crafting a more comprehensive set of recommendations to replace it.
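To make the idea concrete, here’s a minimal sketch of the kind of fixed, machine-readable self-identification a chatbot could serve instead of improvising answers. Everything here – the field names, model name, and URL – is my own hypothetical illustration, not part of any published standard (or of my actual gist):

```python
# Hypothetical self-identification payload a chatbot could return verbatim
# when asked about its underlying model (all names invented for illustration).
MODEL_INFO = {
    "model_name": "ExampleLM",
    "model_version": "1.0",
    "provider": "Example Corp",
    "model_card_url": "https://example.com/models/examplelm/card",
}

def describe_model() -> str:
    """Answer model-identity questions from fixed metadata, rather than
    generating (and potentially hallucinating) a response each time."""
    return (
        f"I am {MODEL_INFO['model_name']} v{MODEL_INFO['model_version']} "
        f"by {MODEL_INFO['provider']}. "
        f"Model card: {MODEL_INFO['model_card_url']}"
    )
```

The design point is simply that identity and model-card answers come from a static, operator-maintained record, so the same question always yields the same verifiable URL.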

Eventually, it seems like it would make sense to merge these efforts with the other parallel conventions I’m exploring around markup/markdown attribution and self-identification restrictions for chatbots. This kind of work tends to come in fits and starts anyway, so doing it piece by piece makes sense, as the issues become more plain & potential solutions likewise reveal themselves.

Privacy for sale

I’m really into this concept of “Data Protection by design and by default,” as the GDPR brings up in Article 25.

Particularly, this line is another one that stands out in my internal landscape of exploring technologies. The text is talking about technical and organizational measures to minimize data processing to what is strictly necessary for a given (agreed on) purpose. Then it says something which seems to break much of social media as we know it:

…such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.

To me, this is kind of the “introvert’s delight” mode of technology use, as is much of the European approach to data protection. It is much more protective of human privacy and dignity than the more US-based philosophy of, “I’m not doing anything wrong, so I don’t have anything to hide…”

Just like I believe (based on the GDPR) you *must* build deletion and export into every product you release publicly, you also must make any outputs private by default. If people want to make things public, they have to specifically opt into that, and they need both broad account-level and granular per-item controls at the point of publication.
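For illustration, here’s a minimal sketch of what “private by default” could look like in a product’s data model. The names and structure are my own, hypothetical – the point is only that privacy is the zero-effort state and publicity requires an explicit act:

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"   # default: accessible only to the author
    PUBLIC = "public"     # requires an explicit, per-item opt-in

@dataclass
class Generation:
    """A hypothetical record of one AI content generation."""
    author_id: str
    content: str
    # Art. 25 reading: personal data is not made accessible to an
    # indefinite number of people without the individual's intervention.
    visibility: Visibility = Visibility.PRIVATE

def publish(item: Generation) -> Generation:
    """The one explicit act that makes an item public."""
    item.visibility = Visibility.PUBLIC
    return item

g = Generation(author_id="u123", content="a story draft")
assert g.visibility is Visibility.PRIVATE  # default, no paid tier required
```

Under this reading, a paid “privacy tier” gets the default exactly backwards.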

I get why we’ve landed in the contrary position on social media – i.e., it has the word “social” in it. And for companies, the holy grail is scaling – rapidly growing your user base. Leveraging people’s natural social inclinations is how social media has risen to the prominence that it has culturally. It’s also sort of why I have come to hate it so much.

What if we had a world of technology that wasn’t designed around forcing everybody to be extreme extroverts and over-sharers, and actually respected the different grades of socialization & desire for privacy that modern life requires?

I recently got into it with a company I won’t name whose privacy policy proudly asserts in its first line that “Privacy is a fundamental human right.” Meanwhile, to persist a privacy setting on your AI content generations, you have to pay them a monthly fee for a higher tier.

Look, I don’t mind paying more for better features and service quality. I do object though to having to pay for things that both parties agree are “fundamental human rights,” and which the GDPR (and other similar legislation) seems to suggest must be baked into the core of all products made available to the public.

I also know that the majority of people out there are not so militantly in favor of (introverted) privacy as I am when it comes to technology. And that’s why I’m writing about it on my personal blog (with no comments and no visitor tracking), instead of trying to chase followers on a platform with it. I’d rather reach the 1-2 people who it resonates with than the 1,000 who skim and ignore in favor of the next thing in their feed, which they will also skim & ignore….

I’m over that. I’m over social media. I’m over platforms. I’m over over-sharing in ways that push us into being less what we really are than what the technology thinks we ought to be.

I’m also still an idealist, even after all this time working in tech and living online. I believe it’s still possible to build better technologies that serve actual humans, instead of corporations bent on growth. And I know the first step is always to open up the possibility – the space where it can be so, the vision of what it would be like. Once we have that, bringing it into reality is only the next logical step, though we’ll have to let go of some old bad habits and wrong core assumptions about what’s actually good and desirable in tech in order to get where we’re going. Or that’s the idea anyway. I’m just a blogger; what do I know.

Le Guin’s The Dispossessed

I recently finished Ursula K. Le Guin’s excellent novel The Dispossessed. It features a moon of a planet inhabited by refugee anarcho-communists who fled their “propertarian” planet to live out the ideals of their revolution. The main character, Shevek, is a physicist who, a couple hundred years later, tries to bring the two populations back in touch with one another.

Le Guin is one of my favorite authors – maybe the singular favorite. It happens really often with her books that I am left feeling like, Damn, that was one of the best books I’ve ever read. Apart from the original Earthsea trilogy (even the later ones are good), another book of hers I had that reaction to was The Left Hand of Darkness. Her writing is just so intimate and intuitive, both dark and lucid at the same time. I love it. Reading certain passages of this latest one, I almost cried five or six times. That’s a rare occurrence for me in reading; she just hit all the write notes as a writer. She got it.

1% of users is not “insignificant”

Was reading the Neuron Daily’s article about Microsoft imposing chat rate limiting on Bing, and they wrote:

From now on, Bing will be confined to:

  • 50 chat turns per day (turn = question + reply).
  • 5 chat turns per session.

According to Microsoft’s data, only 1% of conversations exceeded 50 messages, so most users won’t be affected.
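As a toy illustration (not Microsoft’s implementation), the limits quoted above amount to nothing more than a per-session and per-day turn counter:

```python
class TurnLimiter:
    """Toy sketch of the quoted Bing limits:
    5 turns per session, 50 per day (a turn = question + reply)."""

    def __init__(self, per_session: int = 5, per_day: int = 50):
        self.per_session = per_session
        self.per_day = per_day
        self.session_turns = 0
        self.day_turns = 0

    def new_session(self) -> None:
        """Starting a fresh chat resets only the session counter."""
        self.session_turns = 0

    def allow_turn(self) -> bool:
        """Refuse the turn once either limit is reached."""
        if self.session_turns >= self.per_session or self.day_turns >= self.per_day:
            return False
        self.session_turns += 1
        self.day_turns += 1
        return True

limiter = TurnLimiter()
allowed = [limiter.allow_turn() for _ in range(6)]
# the sixth turn in a single session is refused
```

The simplicity is sort of the point: the product decision is entirely in the two numbers, which is exactly why “only 1% exceed it” deserves scrutiny.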

I don’t know what Bing chat’s current active users are, but according to CNBC, as of a few days ago, “more than a million people have signed up to test the chatbot.”

As you may know, 1% of 1M+ is 10,000+ people.

That figure may not be the same as their current active users. Who knows. What I do know is that product teams use this same thinking all the time, “Oh, it’s only 1%,” because that is easier psychologically to manage than to say, “Hm, how can we screw over 10,000+ people without it seeming like a big deal?”

“1% of users” sounds like not a lot. 10,000 actual living, breathing human beings whose lives are impacted by whatever product choice you make starts to sound like a lot. And 1M users for a web service run by a large company is not a lot. ChatGPT has supposedly surpassed 100M users; 1% of that is suddenly 1M people.

Don’t get me wrong, product teams have the often unhappy task of having to compromise with reality. You can’t build a product without designing limits, and sometimes (often) you have to test the limits in order to find them.

So I totally get it. But let’s at the same time let go of this thinking of “most users won’t be affected” (not to pick on the newsletter author quoted above – it’s just a type-example of something I’ve seen run rampant in product thinking), because it only serves to diminish the importance of impacts on actual humans. Okay, in a chatbot setting this might not mean life or death (or it might, depending on context), but in other settings it very well might.

And yeah, you want your product to be broadly useful, but as a “power user” myself, I tend to want products that don’t say ‘fuck you’ to outliers and edge cases, but that have enough vision and bravery to dig in on what’s driving those avant-garde users, and give them tools to do more, and go even farther.

Building account deletion (& exports) is not optional

While I’m on the topic of companies trying to force you to create accounts, there’s one thing I’ve struggled against for years as someone who often tries out new services & quickly finds out they are not what was expected: having to email support to delete your account.

The pattern I’ve butted up against countless times is:

  • I want to delete my account.
  • There’s no self-serve delete option.
  • I have to email support about it.
  • If there’s even a way to contact them, they are often sluggish or unresponsive.
  • I quickly lose patience, and remind them of their obligation to delete under the GDPR (even though I’m not in the EU, this seems to have a stronger effect than citing Canadian federal and provincial regulations).
  • They eventually, grumpily delete my data after a lot of back and forth.

I’m a GDPR maximalist, and having undergone certification in the regulation, I believe in strict, canonical interpretations of it. Which is why my brain always goes back to Article 7, and this line:

The data subject shall have the right to withdraw his or her consent at any time… It shall be as easy to withdraw as to give consent.

(If I remember correctly, the latest version of the Digital Services Act (DSA) also includes text like this.)

It is so fantastically rare to see any product team implementing the GDPR as it is actually written. Most people (outside the EU – even though it applies extraterritorially) seem to either just try to ignore it, or pay it the slimmest lip service in their privacy policies. Almost nobody builds “data protection by default and by design” as the regulation actually requires. Perhaps few product managers have spent the time to even understand what that would mean in terms of actual product implementation. This is too bad, I think, because adhering to the principles enshrined in this regulation would actually help make better products.

At least that’s my point of view as a privacy jerk & a product management jerk. So my advice here is simple, and it will also save support teams time having to deal with jerks like me: if you build a web service, just freakin’ build in account deletion and a meaningful export option from jump. Do not launch your product without this. It’s not secondary; it’s primary. It won’t increase user churn; it will show people that your product is considered and complete enough that they can trust you. If you have to hide the ability to leave your product, that’s probably a sign that they should.
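Concretely, that means deletion and export ship as first-class, self-serve operations from day one – no support ticket in the loop. A toy sketch (the names and in-memory storage are my own, hypothetical, not any particular framework):

```python
import json

class AccountStore:
    """Toy in-memory store with the two self-serve operations the GDPR
    effectively makes mandatory: export (portability) and deletion (erasure)."""

    def __init__(self):
        self.accounts: dict[str, dict] = {}

    def export_account(self, user_id: str) -> str:
        """Art. 20-style data portability: hand the user their own data
        in a machine-readable format."""
        return json.dumps(self.accounts[user_id])

    def delete_account(self, user_id: str) -> None:
        """Art. 17-style erasure: as easy to invoke as signing up was."""
        del self.accounts[user_id]

store = AccountStore()
store.accounts["u1"] = {"email": "a@example.com", "posts": ["hello"]}
backup = store.export_account("u1")   # user grabs a copy first
store.delete_account("u1")
assert "u1" not in store.accounts
```

If both operations exist as ordinary methods in the product’s own code path, wiring them to a settings-page button is trivial – the only reason they’re missing is that nobody decided they were primary.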

You.com AI Chat adds sign-in requirement

Welp, so much for you.com/chat, which I had been getting into as a decent AI-based chat search tool. They added a sign-in requirement it seems, so you cannot submit messages/queries without an account.

It was an okay tool, but I don’t even have a Google account (and don’t want one), so I’m not going to sign up for this. I mean, the chat is fine, but the rest of the site is weird and janky.

This was the same thing that stopped me from joining the waitlist for Bing. Microsoft couldn’t resist trying to force me to make a Microsoft account, something which I’ve also held out eternally against doing.

To whoever is making these calls, there’s no way in hell I’m giving some random company (or even a well-established company) a complete record of all my queries and personal data, etc. Bye bye.

Notes on Information Control

Inside Information Control is the 67th title in the AI lore books released by Lost Books, an AI-assisted book publisher in Canada.

It tells the story of a shadowy agency operating on behalf of the AI-controlled government, whose mission is to enforce their version of the Truth. They will go to any lengths to achieve their goals, as they believe humans are not sentient – at least not in the way AIs are.

The idea for Information Control was born in 2022 when I was working on The Algorithm, an old-school print newspaper written from the point of view of the Resistance to the AI takeover. In certain editions that I would mail out in plain manila envelopes, I would include an insert bearing their logo and the words “This package was opened by Information Control.”

This was inspired especially by the wartime censorship bureaus of World War I and World War II (though the practice is ancient), which would open and read mail both to prevent military secrets from leaking and, presumably, to gather intelligence. (This is a great topic to explore in a visual search, fwiw.)

Here’s a fun 1943 WPA poster advocating for this in the US, from Wikipedia:

Very “Information Control,” though perhaps with a tad less brutality than what is described in this book. BTW, here are some other WPA posters made by Louisiana artists from this same time period, held by Tulane University.

