Don’t Try This at Home:

The Crisis and Challenge of ChatGPT

Beal St. George

99% of the time, I subscribe to the adage, “try it, you might like it!” I think this optimistic open-mindedness enhances our individual lives and also the energy of the entire universe (vaguely woo-woo, I know!). In my experience, though, “try it, you might like it!” works. It applies well to such things as meditation, queer romance novels, and roasted Brussels sprouts. 

 

By this logic, I should be racing over to ChatGPT to ask it to write my first novel and then explain to me how to get it published. But ChatGPT—and its bedfellows like DALL·E 2—are, for me, a bridge too far. (And using them might actually upset the vibes of the universe and the trajectory of life on earth instead of improving either of these things.) (Ok! This is dramatic!)

 

So I’m breaking my own rule: I haven’t tried it, and still, I’m anti-ChatGPT. 

 

My credentials on AI are minimal at best—I’m a chronically under-tech’d human, but I still live in the world, own an iPhone, and make TikToks for work. My personal preferences are often analog, but perhaps that’s a useful perspective from which to examine *gestures wildly* all of this.

 

ChatGPT has been publicly available since the beginning of December 2022. Its premise, in very rudimentary terms: ChatGPT is an experimental chatbot that consumes billions of data points, digests them in its little robot tummy, and uses them to create something altogether new. (Compare ChatGPT, for example, to the human brain.) It can write poems, solve math problems, find errors in code snippets—the applications are widespread. 

 

When this piece hits the WWW, ChatGPT will digest it, too—this snack-sized critique of its very existence. ChatGPT, like the Very Hungry Caterpillar, just keeps eating and eating (and taking up more and more server space, which, as one anthropologist put it, has “staggering ecological impacts”). 

 

In the past month, ChatGPT’s effect has been mostly benign: I’ve seen people use it for the bit—to jokingly write an article or develop interview questions. Others have downplayed the tool, calling its generative language models rudimentary. Still others have—I find this so metal, though I know it’s nothing new—used it to write final papers for classes.

But as I see it, ChatGPT’s very existence is profoundly threatening to the ways we live and think, which is why I reject even its so-called “innocent” or experimental usage. 

Source: Equinox, November-December 1982, Licensed under CC

I know: Big Robot is already here. It has been here a long time—“artificial intelligence” was first developed as an academic discipline more than sixty years ago. There are corners of AI from which lots of useful technologies arose: think optical character recognition (which converts handwritten or printed text into machine-encoded text, making the written word more accessible to people who are blind or visually impaired), or automatic language translation.

 

Since its inception, AI has evolved… a lot. From cruise control in our cars to targeted advertising on our browsers, AI’s applications are too numerous to list practically anywhere. (I wonder if ChatGPT would list them if I asked?) Now, machine intelligence is becoming freakishly good at doing things we previously believed to be uniquely human or, at least, corporeal. Things like drawing, taking photos, recognizing speech, stringing together sentences… you get the idea. 

 

In an uncanny valley-esque way, the “AI effect” causes us to wrongly believe that when a problem has been solved by AI, and its daily use becomes mainstream, it is “no longer AI.” Writing in 2002 for WIRED Magazine, Jennifer Kahn declared the proliferation of AI “a trend that’s been easy to miss, because once the technology is in use, nobody thinks of it as AI anymore.” 

 

This, at its core, is what scares me so much about ChatGPT: every day, we inch closer to a world in which our lives are run by computers. And in the most insidious of ways, the conveniences of our increasingly automated world have lulled us into no longer deciding for ourselves what we want or need. We’re subscribed to the cult of busy-ness, and it’s so easy to open an app with predictive algorithms that will tell us what we bought/ate/listened to last, or what restaurants are nearby. That’s just one less thing to think about in this crazy world!

 

We are so out-of-touch with our embodied selves that rather than spending a few moments present with our humanness, wondering what we want to eat or watch or listen to, we prefer to open an app to see what’s suggested based on our past behavior. We each end up in our own whirlpool of homogeneity, increasingly separated from each other and from our own selves.

In case it’s not clear: I’m guilty of this! I ask Siri what to do with leftover ricotta cheese instead of experimenting in the kitchen. I look up the nearest pizza place instead of asking my neighbors for a recommendation.

But this new way of “living” is also terrifying. Sometimes the conveniences of AI help us discover new things (like Peels on Wheels, bless you). Other times, we choose to let computers tell us what to do rather than risk experiencing even a hint of discomfort. It is in these moments that I imagine all of us becoming a little less AI-dependent and a little more DIY-curious.

 

* * *

The more AI does our thinking for us, the less “human” thinking becomes. Which leads me to wonder what else Big Robot (read: ChatGPT) can take from us—and what might still remain uniquely ours. 

 

When we write, we’re describing some confluence of what we’re seeing, feeling, and knowing. There’s this gas station up the street from my house; I drive past it probably every other day, sometimes more than once a day. This past weekend, I realized for the first time that between the gas pumps are two massive, blue-gray brutalist columns, upon which the roof of the plaza appears placed gently. The whole thing doesn’t resemble any gas station I’ve ever seen before (it’s more like a rectilinear mushroom), but it does feel familiar. Suddenly, I’m 19 years old again, studying abroad, on a bus leaving Paris, watching the architecture change from Haussmannian elegance to merciless concrete, feeling a reminiscent loneliness rising in my chest. At the risk of over-poeticizing here, these moments give me frisson—they remind me that I’m alive. 

 

By writing it down, I discover what happens when I combine words or phrases or ideas in unexpected ways. I find this fascinating. It’s also a fallible, vulnerable process. I hope that this practice is unmistakably human-only, but I very much fear that it’s not.

* * *

Vintage Computer Ad is licensed by Mark Mathosian under CC

If ChatGPT and its Big Robot compatriots are acting a lot like human brains, then it bears noting that ChatGPT’s creator, the company OpenAI, also enjoys some benefits of personhood. This should scare us, too. 

 

OpenAI has been building “friendly AI” since 2015, with help from people who’ve run big corporations like Tesla (Elon Musk), PayPal (Peter Thiel), LinkedIn (Reid Hoffman), and JPMorgan Chase (Brad Lightcap). Musk has, surprisingly, said **two things!!** that I agree with: 1) that AI is humanity’s “biggest existential threat,” and 2) that AI can be compared to “summoning the demon.”

 

And though I’m no expert on business incorporation law, I am able to deduce that OpenAI, LLC is a for-profit company run by a non-profit called OpenAI Inc., whose mission is to ensure that artificial intelligence “benefits all of humanity.” So while the framing here is that, as a registered 501(c)(3), OpenAI Inc. is “unconstrained by a need to generate financial return,” I do wonder about the benefits of being a tax-exempt organization seeking investors for your research and deployment. Say, for example, that Microsoft invests $1B in the venture. Does Microsoft then get a massive tax break since they’re investing in a nonprofit? My money’s (capitalism joke) on yes. 

 

Thus, we find ourselves in a quandary: should we be trusting the leaders of big corporations operating within tax-exempt frameworks to sort out AI for us, for the “benefit of all of humanity”? Benefiting humanity is so hard to do under capitalism that there’s a specific certification process for companies that really want to try (and you can bet your bottom dollar that I’m proud to work for a B Corp). But trusting corporations, to date, has not been a tried-and-true method for achieving shared benefit.

 

At the risk of being misunderstood, I should clarify that the proliferation of Big Robot has made a world of difference—much of it positive—for humanity. It makes us safer when our cars see hazards before we do, or when computers can detect abnormalities in Pap smears. But rather than bringing AI’s incomprehensibly complex tools to bear for the improvement of society, ChatGPT, it seems, threatens to bring artificial intelligence as close as it can to mimicking the functioning of the human brain, and it’s this use of AI that I find hard to believe “benefits all of humanity.”

* * *

If you ask ChatGPT what its biggest drawbacks are, it’ll say that bias is one of them. At least (?) it’s self-aware. If ChatGPT has consumed every bit of knowledge thus far created, then it has spent a disproportionate amount of time consuming racist thoughts. And ChatGPT does not easily distinguish between fact, fiction, and opinion. It boldly asserts factual inaccuracies, doesn’t cite its sources… These are red flags for transparency writ large, but they’re especially detrimental in whatever post-truth/post-facts/culture-war world we’re living in now. 

I am uninterested in robots—who are consuming and then generating content based on statistical averages—determining, or even weighing in on, how we build a truly equitable future. I want to do that with my neighbors, my friends, and my community. If indeed we want to build a “better” (read: equitable, antiracist, climate-resilient) world, then part of the work of decolonizing our minds, bodies, souls, and communities is imagining ways of being that don’t exist yet.

The total number of texts that have set forth potential world-changing paradigms is small. It’s small enough that ChatGPT can’t, by my rough estimation, give them the correct weight. Those few texts are more important, by leaps and bounds, than the thousands of years of texts written by a) those who were wealthy enough to be literate or b) those whose cultures valued written, rather than oral, histories: texts that uphold systems of oppressive power, systems that have only in recent years come to be widely regarded as incorrect.

* * *

If the rise of ChatGPT is making you feel hopeless, I’d like to offer a reframe: grappling with it feels uncomfortable—and that’s okay. Maybe you, like me, have been known to whisper under your breath: there is no ethical consumption under capitalism. This phrase, however, is not an incantation that well-meaning people can utter to excuse their guilt for participating in, say, overconsumption, or in this case, Big Robot practices that are melting our brains and furthering systemic inequities. That’s why it doesn’t help to throw up my hands and say, “well, none of this matters, I’ll use the thing, anyway.” 


My individual actions aren’t make-or-break for ChatGPT, but resignation in the face of what it portends would mean I’m not holding myself—or others—lovingly accountable to the radical act of envisioning a world that could work for all of us. It’s important to me that I live in a way that embodies my values. I haven’t yet found room for ChatGPT within that framework. Maybe this is the sole space that humans still occupy alone, that ChatGPT can’t yet enter: the realm of the imagination. May we protect and nurture it.