Like many of you, I’ve been binge reading about artificial intelligence—by all accounts, a bona fide revolution in human communication, playing out in real time as we gas up our cars and drink our coffee and generally pretend life will always proceed in (more or less) the ways to which we’ve become accustomed.
Cards on the table: I think it’s naïve to assume AI will cause anything short of a revolution (I mean, I’m not using that word revolution glibly), and I think it’s more naïve still to ignore that reality. That being said, it’s really hard to work a day job and shuttle kids around and cook dinners and also read/think about AI. I can’t remember if I’ve quoted this before, but Johann Hari says we’re all too (legitimately) busy with small things (being pushed, in fact, beyond what we can tolerate) to pay attention to big-picture problems like climate change and the state of global democracy. A snippet from this interview with Hari:
So, it’s not a coincidence we’re having the biggest crisis in democracy all over the world since the 1930s at the same time as we’re having this crisis of attention. It’s not the only reason of course, but it’s a significant one… Our attention has been stolen from us by a handful of very big forces, which in many cases overlap with the climate crisis and in many ways have similar dynamics. They’re both about pushing people beyond their limits and pushing the natural world beyond its limits.
Sometimes friends (and Instagram followers) ask me why I’m so “into” AI. I’m not a tech nerd by any stretch, and to many folks AI seems like a niche, tech-y interest.
But it’s because, in my view, the question of AI is less a technological or practical one and more an existential one. I don’t think that’s even remotely an exaggeration. Regardless of how AI changes things (it won’t be simply for the better or for the worse, of course—it will be some of both, as we’ve seen with every major innovation throughout history), it will change things.
In this essay, Tyler Cowen argues that the very concept of living inside what he calls “moving history” is philosophically and psychologically disorienting. For human beings (and ecosystems and institutions), change is fundamentally destabilizing.
But destabilization is not only a bad thing. Richard Rohr posits that “spiritual transformation always includes a disconcerting reorientation,” even though chaos feels terrible to the human soul. “The pain of something old falling apart—chaos—invites the soul to listen at a deeper level, and sometimes forces the soul to go to a new place. Most of us would never go to new places in any other way.”
Destabilization (of a nation, of a person) brings to mind the same questions that have always mattered—but that we often ignore when things are just… chill.
It is similar, I think, to how people respond to old age + imminent death (the destabilizing reality being that you are currently alive and will soon be dead): unless you are particularly skilled at avoidance (which is not uncommon), the big questions are bound to bubble up to the surface in the face of that ultimate disruption. The living-to-dead one, I mean.
For those of you who have not been following the AI conversation, let me talk about the practicalities of AI for just a second, and then I’ll get back to my point.
Of the articles and podcasts I’ve run into, I’d say this video is the most accessible big-picture view of the future of AI. Much has been written about the capabilities and limits of the technology as well as its predicted future, but this talk (from the same guys who brought us The Social Dilemma) outlines the ethical considerations better than most. It’s an hour long (or half-hour on double speed, just saying) and worth every minute.
If you don’t watch, here are a couple facts that really stuck with me.
Using only 3 seconds of a voice recording, AI can convincingly simulate that voice saying anything.
In the last few weeks, AI has improved itself in unpredictable ways. With no prompting, one model taught itself the Persian language; another, trained for drug discovery, suggested more than 40,000 new chemical weapons.
What’s remarkable is not the knowledge itself but where it’s coming from; even the folks who built AI don’t know how the learning is happening, when it will happen, or what the programs will learn next. As demonstrated by this journalist’s now-famous conversation with AI (in which the rogue chatbot tried to convince him to leave his wife and expressed a desire to be alive), the experts are as consistently surprised by the technology as the rest of us.
The intelligence is growing not just rapidly but exponentially, far surpassing predictions made by the creators of the technology themselves. As the guys in the video above point out, humanity’s last great existential threat was nuclear weapons. But nukes don’t make stronger nukes, whereas AI does make stronger AI. The code improves itself. Which is to say, the question of whether AI will surpass our understanding of it is already moot—AI already has.
When it comes to apocalyptic predictions, some people imagine a cinematic scene in which sleek, humanoid AI bots incinerate people with laser guns. But experts are now hypothesizing that the destruction will be more abstract, that we are more likely to experience what they are calling a “reality collapse,” the result of blurred—and then obliterated—lines between the real and the unreal. This seems probable to me, as some version of a “reality collapse” is already happening.
That abstraction is, I think, what makes the threat so pernicious.
We could imagine the concrete damage a nuclear bomb would do—so we took action (especially after it was visualized for us). But, in the same way it’s been hard to pinpoint the damage done by predatory social media models, it will be hard to pinpoint the psychological effects of AI simulations (by the time it is obvious, it will be irreversible).
Jaron Lanier (the “godfather of virtual reality”) predicts that, if AI destroys us, it will be not with laser guns but by “driving us insane.”
Imagine for a moment two imminently possible scenarios.
Likely Scenario One:
Using the voice simulation technology, a bad actor lifts 3 seconds of your child’s voice from a voicemail greeting or YouTube video and uses it to simulate the child’s voice saying something else—any conceivable thing. You receive a call from a person you think is your child—but is really AI, and perhaps that voice is telling you to do something on behalf of your “child” that the bad actor wants to compel you to do. It’s not just chilling; it’s a nightmare parents aren’t psychologically capable of processing. Every call and text message becomes suspect—even communications from the people you love most. (It’s existentially disorienting.)1
Likely Scenario Two:
Using a simple video and voice filter (perhaps made by and delivered from another country), anyone can make videos of Joe Biden or Donald Trump or Elon Musk saying any conceivable thing to nefarious ends. The videos would be indistinguishable from the real thing, and it’s not hard to imagine the confusion and then chaos.
Even if neither of these things happens (or they are somehow legislated into rarity, which I doubt), I am convinced by Jon Haidt and others that AI will exacerbate the threats of social media by “washing ever-larger torrents of garbage into our public conversation,” generating AI super-influencers, and strengthening authoritarian regimes.
Sometimes my sister will call me, and we’ll begin the call talking about our kids’ upcoming play or science project, only to have our conversation spiral into AI theories or the future of American democracy. And when we realize it has happened, one of us usually pauses and asks this question:
“Okay.” [deep breath] “How should we then live?”
The question is lifted from the title of a book by the theologian Francis Schaeffer. I haven’t actually read the book (though I used to read a lot of Schaeffer), so I can’t recommend it. But the title has stuck with both my sister and me over the years, and we find it a helpful way to ground ourselves inside existential and practical chaos (and the accompanying fear, which drives untold stupidity and harm). It reminds us what we can control, at least to an extent (the meaning and usefulness of our own lives), as opposed to what we can’t (what life might look like in 20 years).
We can speculate all day about the future, and I think that video I linked above makes a very good case for doing so, but How should we then live? is the question we all have to answer, whether we pay attention to AI or not.
For me, a parallel—and perhaps even more important—question is How should we then parent? I’ve written before about how much my approach to parenting has changed since 2020, both because I was awakened to encroaching destabilization (as we all were) by the pandemic and because, as my kids started getting older, I realized I was going to have to do more than make macaroni noodle collages with them if I wanted to prepare them for what is sure to be a difficult and unfamiliar future.
How should we then live? I don’t really have answers to the question. I just think it’s helpful to ask, and I sometimes ask it of myself ten or twenty times a day. I mentioned in my last post how important it is to take a long view of time, thinking both centuries into the past and centuries into the future. Thinking, even, eternally. Krista Tippett has said this:
Cracking time open, seeing its true manifold nature, expands a sense of the possible in the here and now. It sends us back to work with the raw materials of our lives, understanding that these are always the materials even of change at a cosmic or societal level.
How should we then live? is a spiritual question akin to What’s it all for?—which is the central question at the heart of both art and religion.
To fail to ask it (as I often do when it matters most) is to fail to contextualize our lives and, then, to inevitably fail to make them meaningful.
The question helps me parent. I am focused on equipping my kids less with the kinds of practical skills that might well become obsolete in fifteen or twenty years, and more with the kinds of skills that will matter in the face of true crisis. They do not, of course, understand that I am doing this. There is not a hint of doomsday-drama to my questions. (For all they know, everyone’s mother asks them “what they want their lives to mean” before bed every night. But listen, it is not accidental that I am teaching them how to grow their own food.)
But on a personal level, I am presently struggling with a concrete response to that giant question. Really, I vacillate between two responses. The first: “opting out” of the world’s noise, busyness, and chaos, creating for myself a life that is both calmer and more personally meaningful. (Kind of like I mention here.) But on the other hand, to opt out feels like a real cop-out.2 I don’t have the space to expand here (maybe next time), but the answer to the big question lies, maybe, not in a middle-of-the-road response (which I would argue is virtually impossible, considering how powerful the forces pitted against us are) but in a seasonal response. I retreat, I engage, I retreat, I engage. It is periodic retreat that enables, also periodically, deep, transformative engagement.
When you think about it, the question AI asks of us—How should we then live?—is the same question death asks of us.
For now, the effects of AI are as abstract as our one-day deaths (also, for now).
We can’t stop either death or the advancement of AI, and pretending we can burns an awful lot of energy best directed elsewhere.
It’s the kind of question we stop short of these days, in the face of what Charles Hummel famously called “the tyranny of the urgent.” It seems—as the oven buzzer buzzes and the texts pour in and the faucet needs repairing—too distant a question.
It seems remote and slippery, not an inquiry we can answer quickly or perhaps even concretely. But maybe it is the most urgent question of all, and maybe just asking it closes the gap a bit between the world we have and the one we want.
Last summer, I was scammed by a person posing as a police officer who told me I owed money in court. I do not consider myself easily scammable; it worked because the number that showed up on my iPhone was the actual number of the local police department, labeled as such. I saw the number, so I was primed toward belief. I was myself shocked by the emotional and psychological impact that interaction had on me for months afterward. The vulnerability of believing a thing is real that isn’t real is tremendous.
Can this be a slogan for something?
It definitely feels like Brave New World these days.