Questionable content, possibly linked

Category: Other Page 54 of 177

“Earned millions”

Another interesting example of what happens as my story gets re-told and distorted in global media.

Presumably as a result of my books appearing on India TV, I've seen pickups from a number of other sites which seem to be India-based as well.

Much like the one that said I had written not 100 but 1000 books, some of them seem to have few qualms about exaggerating.

One site writes:

Meanwhile, a shocking news is coming out regarding Chat GPT. In fact, an author wrote more than 100 novels in less than a year using this AI tool and earned millions by selling them.

I checked, in case $2,000 USD equals millions in India, but it turns out to be "only" about 165k Rs, which I've got my suspicions might go a bit further there than it does here.
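For what it's worth, the arithmetic is easy to sanity-check. A minimal sketch, assuming an exchange rate of roughly 82 INR per USD (the rate is my assumption and fluctuates daily):

```python
# Sanity-check the "earned millions" claim: convert the reported
# USD earnings to Indian rupees at an assumed exchange rate.
usd_earnings = 2000
inr_per_usd = 82  # assumed rate; check a current source for the real figure

inr_earnings = usd_earnings * inr_per_usd
print(f"{inr_earnings:,} Rs")  # prints 164,000 Rs

# Thousands of rupees, not millions.
assert inr_earnings < 1_000_000
```

At any plausible rate, $2,000 lands in the mid six figures of rupees, roughly the ~165k Rs mentioned above, and nowhere near "millions."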

India TV Calls AI Mini-Novels “Captivating”

It’s encouraging to see that some large news outlets in India, like India TV, have picked up my story about selling illustrated dystopian AI mini-novels.

They called it a “captivating collection.”

They even add this interesting line, which I don’t think I said anywhere in precisely these terms, but okay:

It is noted that the popularity of AI-generated novels has been on the rise, creating a thriving market for this unique form of literature. 

AI ‘Author’ Writes Nearly 1000 Books

It’s fun watching how my story gets distorted in the press through less-than-accurate re-reporting by third parties, such as this piece from Livemint.com, asserting that I’ve now written “nearly 1000” books using AI.

¯\_(ツ)_/¯

What actually is appropriate effective treatment for content moderators?

I was looking a little at reporting around the FB and TikTok moderators who sued the companies (or may still be trying to, in class action suits; I'm unsure), and found this piece from TechCrunch, March 2022:

The lawsuit alleges that TikTok and ByteDance violated California labor laws by failing to provide Velez and Young with adequate mental health support in spite of the mental risks of the “abnormally dangerous activities” they were made to engage with on a daily basis. It also claims that the companies pushed moderators to review high volumes of extreme content to hit quotas and then amplified that harm by forcing them to sign NDAs so they were legally unable to discuss what they saw.

“Defendants have failed to provide a safe workplace for the thousands of contractors who are the gatekeepers between the unfiltered, disgusting and offensive content uploaded to the App and the hundreds of millions of people who use the App every day,” the lawsuit states. It alleges that in spite of knowing the psychological risks of prolonged exposure to such traumatic content, TikTok and ByteDance made no effort to provide “appropriate ameliorative measures” to help workers cope with the extreme content after the fact.

What I’d actually like to know is: what the hell are “appropriate ameliorative measures” for content moderators, either while they’re still doing the job, or after they’ve stopped and are still experiencing negative psychological fallout?

I’ve looked into it a bit, and as far as I can tell, there’s really no commonly agreed-upon industry standard for what this means. Is this lawsuit asking for something that doesn’t actually exist?

How content moderation informs my artistic work

I worked five years as a content moderator for a major web platform and was daily exposed to humanity’s worst impulses at scale. As a result of this work, I developed mild PTSD symptoms and needed to find a way to cope with all that I’d experienced. Not having any guidance or access to treatment, I embarked on an unconventional therapeutic project of creative world-building in order to work through the invisible scars left by constant exposure to toxicity and angry web users. Through this creative journey, I hoped to regain some measure of normalcy, control, and happiness in my own inner life.

Over time, my invented worlds became more and more elaborate, and I began integrating generative AI technologies to accelerate the process of intuitive discovery I’d been using to heal myself. Now, I’ve produced 100 short illustrated dystopian mini-novels with help from AI image generators like Midjourney, and text generators like ChatGPT and Anthropic’s Claude. 

Thematically, the books deal with our feelings of hopelessness in the face of out-of-control technology, and the need to listen to the authentic human spirit to find a way through it all. My work has recently been featured in Newsweek, CNN, and the New York Post.

I am coming forward to talk honestly about these issues from the perspective of someone who has ‘fought in the trenches’ of the Culture Wars, so to speak. Unfortunately, there are very few content moderators who have had the chance to speak publicly and share their experiences in a candid way, as they are often bound by non-disclosure agreements, or can’t talk for fear of losing their jobs. We must stop ignoring the hidden human costs of our highly toxic social media ecosystems, and start finding new ways forward that don’t exploit and distort human well-being for profit.

Partnership on AI could be more inclusive

Within the past few months, a non-profit called the Partnership on AI (PAI for short) released a document entitled “Responsible Practices for Synthetic Media,” which I read with interest. I then submitted to them a sample implementation of their recommendations in the context of a fictional blogging service, to engage with them and other interested parties about how these principles might be deployed at the level of actual products.

I wrote them a few times, actually, expressing my interest. I’m not completely a nobody; I’ve worked for 8 years in online Trust & Safety, and published nearly 100 AI-assisted books that have received international coverage in the media. I never heard back from anyone I reached out to about chatting more.

Eventually, after maybe six weeks, I got added to a newsletter, the first edition of which included what appeared to be a polite brush-off, including:

We are currently working with existing collaborators on the next phase of the Framework launch and developing a process for an open call for additional Framework supporters. 

We will share more information about joining the Framework in the near future. In the meantime, we will keep you updated as we collect insights on how organizations are using the Framework as well as on PAI’s synthetic media work more broadly. 

Then I waited another month to get another newsletter, which seemed delighted to inform me that they had held a big meeting with a bunch of important “experts” (among whom I was neither included nor invited), and that this esteemed group had made a bunch of new vague recommendations that I could read if I wanted to. Something to the effect of:

In this update, we share more about our latest initiative to develop safety protocols for large-scale AI, including top three insights from the kickoff of a multistakeholder dialogue on these protocols at a workshop co-hosted with IBM. Below you can find links to the Version 0 of the Protocols as well as recommendations and open questions from our recent workshop.

I don’t want to blast or alienate anybody, but for a group which includes “partnership” in its title, this is a kind of staid and somewhat boring response to the work and excitement I tried to share with them as an interested & expert third party potential collaborator.

Their email went on to explain:

The meeting convened 40 experts from across industry, academia, and civil society, including representatives from 12 industry model providers. Discussions included lightning talks from model providers who shared insights on their organization’s deployment decisions and challenges, followed by group work…

To me, as a, let’s say, “AI practitioner,” what this all says is: we asked a bunch of “experts” who have a vested financial interest in a particular outcome (aka our “partners”), and we excluded anybody who isn’t important enough, or who is not actually… you know… using the tools.

I know this kind of meeting does not come from a bad place; it is all incredibly well-intentioned, I have no doubt. But I think it is indicative of the state of the industry, and illustrative of the dangers of ONLY allowing companies, a handful of non-profits, and other assorted grab bag “civil society” people to basically… decide things which will impact all of humanity by influencing the direction of regulations and norms in business.

I find it a travesty that other types of potential contributors are summarily excluded from these kinds of convocations, and I think it is a practice that needs to be thoroughly re-evaluated and changed. Because AI is a huge, frickin’ enormously consequential development for humanity. And we can’t just keep replicating the same old broken patterns we have always perpetuated and expect them to yield new, different, or interesting results. From where I’m standing, this looks like more of the same old establishment jockeying and ‘inside baseball,’ not a true partnership of the kind we need to face the enormous issues raised by AI.

AI Tools Made Me A Better Writer

I know the popular fantasy is that writers who use AI are not “real writers” because they don’t do any “actual work” themselves. This is just so big & wrong of an idea that it is difficult to refute head-on. In fact, I don’t expect any of the people who think that to ever change their minds because of anything I say. So this isn’t for them; it’s for me.

Anyway, it’ll have to suffice to say that, again, my experience has been exactly the opposite of what everyone says about writing with AI. Apart from accelerating my output, the sheer act of producing, checking, and editing such a large quantity of text makes you – wait for it – exponentially better as a writer. It trains you to think in not just words & sentences, but in movable blocks of ideas that can be shifted around, re-purposed, etc. etc. I’m at the point of like, who cares where the actual words come from, so long as they get my text where I want it to go?

What are book publicists so afraid of?

I recently had four different book publicists turn me down for paying work to promote my AI Lore books to book bloggers & podcasters. (Three of them were on Reedsy alone – a horrible platform in all my experiences with it.) This is after my work got covered by CNN, Newsweek, the NY Post, etc.

My ask was simple: find people who will engage with the content of the books, not just the AI techniques involved. (I don’t care if the reviews are positive or negative.) You’d think it would be a cakewalk getting some people to do honest reviews… but alas, no.

One person who I contacted off Reedsy (again, hate that site), at least had the decency to explain why. They said they’ve spoken out against generative AI writing in the past, and didn’t want to alienate their contacts by shifting gears on that suddenly. A respectable explanation, even if I’m obviously not in agreement with it.

I know it’s a popular (and, I think, ill-considered) opinion that gen AI devalues other forms of creativity; my experience has been tremendously the opposite, & I think my results speak for themselves. If any publicists are reading this and think they could help me out, please pitch me with an offer (contact form here). I’m aware I could just reach out to bloggers & podcasts myself, but it’s time-consuming & I understand the power of emails originating from sources people already know and trust. So if you’ve got a good set of contacts for blog tours & podcasts, let’s talk! Surely, there must be someone out there who isn’t afraid of a little AI.

If ChatGPT says it, then it must be true, right?

Could my latest press release have worked that well?

Response to r/singularity thread

Linking to my NY Post piece on r/singularity netted both a bunch of upvotes and a lot of angry replies. Mostly they have just hardened my resolve to not try to bend to anyone’s creative vision but my own. Listening to all these people would make any sane person give up art altogether. I like this one though as it’s pretty measured, unlike most:

As a person and a human being I do not approve low quality crap on the internet. However, this is just one example of an effect of lowered costs to produce digital assets.

The quality of these digital assets can only go up due to better base models, better prompting and prompting techniques and “smarter” ML methods.

I don’t have much of a reply, other than that I agree with it. I’d also add that, as a creator, if you’ve actually used these tools extensively and gotten deep first-hand knowledge of how they work, you’re going to be in an increasingly better position as the technology matures and people quit both hyping & fearing it.
