Wonder of arrowleaf balsamroot
and choosing slow, difficult roads
Working in an online school, I cannot get through a single day without hearing someone talk about AI. At first, it was a concern because students would likely use it to write their papers. And they do. Then, it became exciting to some people because it could save us all so much time on various tasks: building courses, creating an itinerary, improving written reports. I’ve even had people ask me about the possibility of creating Indigenous language classes using AI.
AI is like a new toy everyone wants to play with. In this way it reminds me of driverless cars. In fact, self-driving cars are powered by AI. And so is Grammarly, which has assisted me more than once. You might protest: “AI is only a tool, like a calculator!” I confess to using one of those regularly. But I do know how to add and do other mathy things. I could do them by hand if necessary. Or take the navigation app that shouts at me while I’m driving somewhere: I’d much rather read a real map than operate this scourge. Those tools don’t generate material, anyway; they’re just machines responding to specific, discrete inputs. This AI is different: it generates stuff, and it does so by analyzing and co-opting massive amounts of online text and other materials, such as people’s artwork. Which seems like theft and exploitation.
Dr. Geoffrey Hinton, the guy who kinda sorta started AI and spent his entire career researching and developing it, has now come right out and said he’s afraid of it. His most immediate concern is that soon enough, nobody will know whether something is real or not real, meaning news stories, photos, and so on. You’ve probably seen some of these side-by-sides asking if you can tell which is the original.
Here’s the only way you can tell if what you’re seeing or reading is real: have an actual conversation with an actual human, or see the actual painting with paint on it. That’s it! And even then, that person could be misinformed via some vehicle of technology and not realize they’re wrong. That could be you, right now! So that’s frightening enough.
Moreover, all this “innovation” is being driven by capitalism, so you know these companies will never stop, because piling up dollars is their raison d’être. It’s not just Hinton who is sounding the alarm. Google employees actually tried to prevent Google from releasing an AI chatbot in March of this year because “they believed it generated inaccurate and dangerous statements.” From another news story:
Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.
Aside from the small matter of dismantling democracy, it’s dangerous for workers. This Vanity Fair piece discusses the Writers Guild of America’s current strike not only for higher wages, but also for protections against the AI that could quite easily move directly in and take their places, culling text from online sources to instantly generate plots and scripts. For more details about farther-reaching effects (think actors, musical scores, sound effects, set-builders and more), read that article.
I don’t just find generative AI, as it’s called now, off-putting or scary; I find it revolting. Is that too dramatic? I don’t care if it is. And here’s why I am so disturbed: because we are inviting this tool to do our things for us. Our thinking. Our writing. Our art. My question is, is the benefit of generative AI that it makes things faster and easier?
If your answer is yes, then I invite you to take one step back from the question itself and consider why you think faster is a benefit. Faster means we forget. Faster means we have shallow conversations. It means we pay less attention. If you are hell-bent on being speedy, you are not checking out that white-blossom-blanketed tree to find out if hundreds of bees are buzzing all around it. You aren’t taking off your shoes to stroll through the grass, or ambling through the library to find a book-in-paper since you already downloaded it instantly to your device. You might find satisfaction in getting through your packed daily agenda, but where does your joy emanate from? Certainly not your calendar.
Why do so many people worship at the altar of productivity? If you have so much to do that you need tools to keep you focused, to keep you on schedule, to remind you to stand up, to remind you to blink so your eyes don’t dry out from staring at a computer screen all day, then perhaps you are doing too much or doing the wrong things. Humans are not meant to live this way. We are meant to walk in the hills, to talk with each other or with the birds, to ponder difficult questions – even if it’s just “Why is there so much fucking traffic in Missoula all of a sudden?”
And about “easier”... humans don’t learn from easy. Take it from a teacher. When students struggle to master a skill, they come out the other side having learned not just the skill, but also perseverance, the way that skill connects to other skills, and how their own brains operate. The student paper-writing scenario is actually the best example of why AI is not better because it’s easier: no student will learn an ounce of anything from producing an AI-generated essay. Everyone knows that. Why can’t we apply the same concept to other areas of our lives? There is value in struggle, in taking time to consider, in making thoughtful connections with others and our world.
Using shortcuts is also a method of sidestepping thought, real hard-core self-debate and critical pursuit of truth. This type of avoidance, as noted in the piece above about the engineers issuing a warning about AI, can “erode the factual foundation of modern society.” Think I’m being grandiose? Please revisit the 2016 elections and all the disinformation deliberately disseminated across media outlets that people unquestioningly consumed. They didn’t know the information was false. Thus they could not question it. This is not going to improve with AI.
I know; there’s nuance, and I’m already using AI in X, Y, and Z areas of my life. It’s not so black-and-white as I’m making it, you might want to tell me. But sometimes, it’s better to draw a line than to dance around in the shadows trying to decide where the line is. And I’m drawing this line, all by myself, without any tools whatsoever to help me figure it out.
I teach online and I have had at least one student turn in an AI-generated paper. I could tell because he forgot to delete the header, which said “I'm an AI language model, thus I don't have any biases or personal beliefs. However, one may argue that…”
Most of the time, anyway; if you are a math-inclined friend reading this, you know I’ve probably asked you to do some calculations for me.
I can help you out with these NYT stories if you encounter a paywall but want to read them.
But not me, because I’m writing this story and my thoughts are real. Trust me.
I’m already practicing my line: “I don’t use AI,” just like I have to pariah myself with “I don’t watch TV” and “I don’t like hugging.”