When should I reveal that my Newsweek article was partly written by ChatGPT? Perhaps about 60%? But ChatGPT wrote it using my human-written inputs from an old Medium article on a very similar topic I posted last year.
So what percent does that make human-written, and what percent AI-generated? These things rapidly become hard to parse when you start layering and iterating like that.
Stephen Marche’s piece in the Atlantic comparing AI writing to hip hop is a very good one, probably the best I’ve seen on the topic of AI-assisted writing, because it comes from a place of experience. He actually published a book using a combination of AI tools, as chronicled by Wired and the NYT, among others. There are a number of elements in his piece worth sampling here, in fact.
So little of how we talk about AI actually comes from the experience of using it. Almost every essay or op-ed you read follows the same trajectory: I used ChatGPT to do a thing, and from that thing, I can predict catastrophic X or industry-altering Y. Like the camera, the full consequences of this technology will be worked out over a great deal of time by a great number of talents responding to a great number of developments. But at the time of writing, almost all the conversation surrounding generative AI is imaginary, rooted not in the use of the tool but in extrapolated visions.
This is extremely relevant in AI reporting. I spoke with an editor once who informed me that “we don’t need to know how it works to write about it.” I guess that’s one way to do things when you’re dealing with a lot of volume, but it’s not the kind of analysis that I find very engaging. I much prefer Marche’s “f**k around and find out” method from the Atlantic piece.
Here he talks about how you still have to know something to use AI content generation tools well:
You need more understanding of literary style, not less. The closest analogue to this process is hip-hop. To make hip-hop, you don’t need to know how to play the drums, but you do need to be able to reference the entire history of beats and hooks. Every producer becomes an archive; the greater their knowledge and the more coherent their understanding, the better the resulting work. The creator of meaningful literary AI art will be, in effect, a literary curator.
Marche’s own AI book experiment, Death of an Author, is a shout-out to Roland Barthes’s conception of the death of the author, in that authorial intent no longer drives the show under the shadow of postmodernism. Barthes wrote:
“We know now that a text is not a line of words releasing a single ‘theological’ meaning (the ‘message’ of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and clash. The text is a tissue of quotations drawn from the innumerable centres of culture.”
Barthes also had this idea of the “scriptor” replacing the author, but I digress (read more at the link above).
To get back to Marche on creativity:
The traditional values of creative composition were entirely alive during my process. That should come as no surprise. The transition from painting to photography required a complete reevaluation of the nature of visual creativity, but the value of understanding form and color, of framing, of the ability to recognize the transience of emotion across a face or a landscape—the need to understand the materials of production and the power of your subjects—stayed. None of that is going away. None of it will ever go away.
I’ve myself noticed a kind of acceleration of my creative and mental processes, and an ability to more clearly communicate complex narrative elements in text, in images, and in combinations of the two. Using AI has, effectively, made me a better artist, producing & evaluating things on an entirely different level than I was before. And it hasn’t, say, stopped me from breaking out my sketchbook and drawing, or what have you. I can do any of those other expressions of art any time I want to. AI art isn’t some monster stealing things from me. Or, in my opinion, from other people – though I respect that opinions differ on this topic.
On the contrary, I’ve been able to bring incredible light to dark places in my subconscious through using AI tools, & managed to make loose imaginings into tangible things I can share with others. Yesterday, I input the text of an almost 80K-word book, my first novel (“hand”-written), The Lost Direction, into Claude by Anthropic. Claude’s context window is supposed to top out somewhere around 75K words (100,000 tokens is what I’ve read – whatever that means in ordinary human reality…). In a few hours – though its response times were SUUUPER slow in yesterday’s experiment for each query – I was able to output over a dozen short stories of decent quality: spin-off tales about different characters and situations from the original novel.
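As for what 100,000 tokens means "in ordinary human reality": a common rule of thumb (an approximation I'm assuming here, not an official tokenizer count – real tokenizers vary with the text) is that English prose averages about 0.75 words per token. A quick sketch of the arithmetic:

```python
# Back-of-envelope conversion between words and tokens for English
# prose, using the rough ~0.75 words-per-token rule of thumb.
# This is an assumed average, not an exact tokenizer measurement.

WORDS_PER_TOKEN = 0.75  # assumed average; varies by vocabulary and style

def estimate_words(token_budget: int) -> int:
    """Roughly how many English words fit in a given token budget."""
    return round(token_budget * WORDS_PER_TOKEN)

def estimate_tokens(word_count: int) -> int:
    """Roughly how many tokens a given word count consumes."""
    return round(word_count / WORDS_PER_TOKEN)

# A 100,000-token context window holds roughly this many words:
print(estimate_words(100_000))  # → 75000

# And an ~80K-word novel needs a bit more than the window provides:
print(estimate_tokens(80_000))
```

Which squares with the 75K-word figure above, and explains why an almost-80K-word manuscript sits right at the edge of the window.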
It lets me make my imaginary worlds that much richer. It’s a force multiplier: I have gone from being a foot-soldier to being the commander of allied forces. That, for me, is the scale of advancement that these technologies, properly understood & rightly applied, can bring.
Also from Marche’s piece:
If you make bad art with a new tool, you just haven’t figured out how to use the tool yet. Also, tools are just tools…
Anyway, I’ll close with that. (for now)