I’ve been trying to engage with other people about their reactions to my AI art books, especially where those reactions are strongly negative. Obviously, there’s something there when emotions are stirred, and I’m trying to listen better and understand when that happens – both within myself and with others. Sometimes I do a better job of it than others.
I understand there are many valid concerns about AI technology, and would even venture to say I am at the forefront of exploring and discussing a certain subset of them. I recognize there are many other groupings of potent ethical or human impact problems that are outside my specific bailiwick of interest and/or expertise. I tend to follow my own light in terms of what I’m inspired to find out for myself experientially, and I let that drive me.
So, the following is offered as apologia for why I still use AI, even while fully recognizing there are tons of things wrong with it.
My professional background is in online Trust & Safety, and this has meant that I spent 5 years doing content moderation, handling complaints, writing policy & having to enforce it for a tech platform. It was not a glamorous job and left me with some mental scars that have taken me time to sift through.
And part of how I’ve sifted through it has been through my own writing and art (non-AI: I have a fine arts & technical theatre background), and eventually by integrating AI into my own personal semi-therapeutic world-building as a way to explore and augment my own creative processes.
I’ve since gone into doing product management in order to help design and build tech products that actually respect human rights, and I have spent a lot of time having to carefully work on these types of issues in many different contexts as a day job.
For me, that means the way I engage with new technologies is by testing them to see the good points, to fully understand the bad points, and then to think them through with others and actively build improvements.
It wouldn’t be possible for me to accurately gain any of the knowledge I have if I were not actively participating with the technologies, interrogating them as deeply as possible, understanding other people’s viewpoints, and figuring out for myself the contours of good and bad within our use of these systems.
My Digital Terms of Service for AI Providers In Canada represents one flowering of the intensive year of generative AI use I am now coming up on. It constitutes a set of recommendations I’ve made to government and other groups about finding the best path forward for appropriate use of these technologies, and how we can build better, stronger protections into them, while also respecting that not everybody needs or wants to use them, or to have their information be used to train them or drive their behavior.
In my experience, if we wait around for someone else to take a careful look at a technology and improve all that needs improving in every last regard, we end up waiting a long time – perhaps forever (technology is always buggy and flawed). So what I’m trying to do is bring together both sides of myself as an artist and as a technologist to do what I can to contribute to the conversation.
That doesn’t mean I think my way is the right or only way. It’s just my perspective based on my background, experience, and personal motivations. I know those differ for each person who comes to these topics, and others are likely to land on different answers and configurations than I have – and it might well be right for them not to use these tools at all. I have no basis for judging that for anyone else.
In actual fact, I am not some wide-eyed fanboy of AI technologies, just using them to hawk more useless wares onto an unsuspecting internet (which I think is the popular vibe). I am probably one of the biggest critics of them you will ever meet.
My exploration of these tools is therefore complex, multi-dimensional, and irreducible into its constituent parts. What I think is that, essentially, we are now pretty much stuck with them. The genie is out of the toothpaste bottle, and will not be stuffed back in. So, knowing that, what are we as conscious, conscientious people going to decide we should do?
We’ll only know that by talking it through. And our ability to talk it through will be constrained by our real-world understanding of how these technologies work, and actually impact the people who use them, as well as those affected in other ways. There are many ways to learn and reason about things, but this is my way.