I have an idea for a project to carry out next summer, and in the course of one day, I spoke to Jennifer about the need for this project. She gave me ideas about how to work it into other activities going on in the state. Then I emailed Sally, my accountant, some questions about the bookkeeping aspect of the project. I mentioned the idea to Mike at work, who suggested a way to calculate the numbers for it. I spoke with Erin about a partnership with her organization. I texted my three-person board and asked for their input on the project’s viability. I talked with Michelle about marketing it, Chris said he knew some people he could share the project with, and Bill helped me analyze costs.
In a few of those conversations, folks tangentially brought up AI as a way to get their work done. They use it to write correspondence, generate lesson plans, and understand dense text. It enhances efficiency, they told me, and helps them refine their approach, to think in ways they couldn’t have thought otherwise.¹ They are genuinely grateful for the assistance, and I realize I sound like a grumpy Luddite in the face of all this progress. Get off my lawn!²
Interestingly and fittingly, the Luddites you might have heard of did not actually oppose technology on principle. Rather, they cared a lot about the value of their human contributions. “Luddism stood not against technology per se but for the rights of workers above the inequitable profitability of machines,” according to this New Yorker piece. Sound familiar? Screenwriters’ and authors’ guilds are aggressively fighting against AI for this reason. In a personal example, Antonia Malchik discusses the ways generative AI scrapes other people’s content and makes it available to anyone else, as has happened with her own book.
This appropriation-and-repackaging is nothing short of colonization on a digital level. It’s theft, clearly, as well as a dehumanizing machine. That last bit concerns me most of all. At the start of this newsletter I described a project I’m cooking up and mentioned no fewer than ten people I specifically sought out for a quick conversation about it. Those people helped me think in new ways, refine my approach, and develop a plan, just as AI did for the folks above. But they also encouraged me, gave me fresh ideas I didn’t have to ask them for, and reinforced my commitment to the project. They colored outside the lines and understood the underlying concepts without me needing to define them. And I didn’t have to enter any parameters into the conversation first.
AI erases all of that. It removes the human element, that messy, slow, and sometimes befuddled human element, from creative processes. Then it reintroduces that human element in a metallic-byte form, digested and reconstituted. I don’t recognize its humanity. I reject it. And that is why I have begun to add the following line to the bottom of all my correspondence and writing such as emails and grant proposals: No AI was used to create this document.
Writing those documents might require more of my time. Perhaps I will make an error that AI would have avoided. But I don’t care about efficiency and perfection. I want to connect with my humans, to ask for their ideas and insight and beautiful and imperfect thoughts. And I want to share mine with them. Exchanging ideas with other humans constitutes community creation. And because community is human in its very essence, AI cannot copy it. I want to keep my world that way.
1. What I hear when they say this is, “It would have been harder for me to think otherwise,” and I’m not sure I like the sound of that. You’ve seen Wall-E, right? No? How about The Matrix?
2. The last time I wrote disparagingly about generative AI, I took some flak about its usefulness to folks who don’t otherwise have the skills to write important texts. The example the commenter provided was a good one, though it left me wondering: without the skills to write the document in the first place, how do they read it through and ensure it says what they want to convey? I’ve also been given examples of AI’s use in the medical field for assisting in diagnosis. That, I feel, is a tool, like a scope or a searchable index of symptoms. I recognize that my education and skill allow me to sidestep generative AI entirely, and I’m happy to do so and to remind folks that AI relies on theft to make its magic work.
A lot of the problems go hand in hand with David Graeber’s “bullshit jobs.” When people give examples of how AI helped them write something, it’s almost always something that, viewed through an anti-capitalist or at least capitalism-skeptical lens, doesn’t really need to exist in the first place. Like, I worked trail crew with a real estate agent who now uses it for all his property descriptions. Okay. But if we’re looking for a world that’s just and good and whole, the buying and selling of property, and the descriptions written for that purpose, are things I’d question in the first place. Mission statements are another example. For more creative work, I don’t really understand why anyone would *want* to use it. Creativity is by nature messy and inefficient and painful and joyful. Why do it at all if you don’t love the process?
Not to mention the energy use. Every time someone uses AI to, say, generate an image, it uses about as much energy as charging a phone. Scaled up, that’s staggering. But that’s true of the digital world across the board, and it’s not something people consider often enough.
Thank you for contributing to clearing the Luddites’ good name! They fought for a better world and deserve recognition for that. 🫶
I think one of the most important points you implicitly made was that when you recounted your process, you noted what each person contributed. AI does not do that. Antonia's point about how much energy is used to generate a piece is something I hadn't thought about. (Being me, I will have to check her facts before I quote them.) I do use some AI to help me write (spell check and Grammarly, the free version), but I always reread my text to see if the suggestions correctly convey my thoughts. Sometimes they don't, so I ignore them, even if the sentence is long.
As far as the rest of AI goes, I don't even know where to start on using it, because, like you, I can communicate effectively enough to ignore it. So I will continue to live in my advanced-AI ignorance, taking forever to write my own words.
Once again, thanks for making me think in the morning.