I have been receiving questions from readers about the spiritual implications of AI and how the church should respond. While I have been addressing the spiritual implications of digital technology for a while (see here and here and here), readers have been pushing me to more directly engage with the practical questions concerning the appropriate Christian response to developments like ChatGPT and the possibility of AGI.
Let me begin by saying that I am AI-skeptical. In my column at Salvo I have offered a string of posts outlining reasons for caution. I think AI is too powerful a tool for humans to wield, a lesson I learned firsthand when I fell into digital addiction. While AI brings benefits, I am not convinced it is making the world a better place. And yet…I am not a Luddite. On the contrary, I utilize ChatGPT in my work, I have argued that AI is not Satanic, and I have even urged classical Christian schools to bring ChatGPT into the classroom. To some, this may seem like inconsistency. How can someone who has spilled so much ink making the case for AI skepticism not take the position of total rejection?
To answer this question, we need to begin by defining terms. “Luddite” is often used as a pejorative or conversation-stopper, rather like calling someone a conspiracy theorist. As these types of straw man tactics are never helpful, I want to address the Luddite position by interacting with a steel man, namely the best and strongest representative of the anti-AI approach. And that would be Paul Kingsnorth and his recent essay, “The Neon God.”
I am a fan of Paul Kingsnorth. We are in correspondence (we have even discussed the ideas in this present article), and I have tried to arrange for him to come to the United States to speak. Moreover, if I were to make a list of the six most insightful cultural critics living today, Kingsnorth would be on that list alongside such eminent figures as Byung-Chul Han, Jonathan Pageau, and Dr. Carl Trueman. I think everyone should follow Paul’s substack, The Abbey of Misrule, and watch his conversations on YouTube. Paul is not only a perceptive thinker but someone who is deeply attuned to the spiritual dynamics of the contemporary situation.
After Kingsnorth published “The Neon God” last week, I told friends that if anyone could make a Luddite out of me, it would be him. While I encourage you to read his entire essay, I will curate the high points here before offering my response.
The Machine is Invading Religion
Kingsnorth chronicles his recent trip to Mt. Athos, the monastic center of Eastern Orthodoxy, and the distress he felt at seeing how it has been invaded by modern technologies. This, he points out, is a microcosm of what is happening throughout the world as spiritual traditions from Buddhism to Roman Catholicism succumb to the onward march of The Machine. We are even to the point where, in some parts of the world, robot priests are hearing confession and filling in where there is a shortage of clergy.
The Human Race is in Danger
It is not just our religion that is in danger of being compromised. Kingsnorth also suggests that AI poses a threat to the human race. He references a recent Time Magazine article outlining how easy it would be for “an AI initially confined to the internet to build artificial life” and take over the world—a future that can only be averted by drastic action like bombing data centers.
Hate and Helplessness
Towards the end of the essay, Kingsnorth lays his cards on the table and explains what this is all doing to him: it is causing him to hate the anti-culture of the machine. Here’s what he writes:
I don’t hate many things in this world – hate is an emotion I can’t sustain for long – but I hate screens, and I hate the digital anticulture that has made them so ubiquitous. I hate what that anticulture has done to my world and to me personally. When I see a small child placed in front of a tablet by a parent on a smartphone, I want to cry; either that or smash the things and then deliver an angry lecture. When I see people taking selfies on mountaintops, I want to push them off. I won’t have a smartphone in the house. I despise what comes through them and takes control of us. It is prelest, all of it, and we are fooled and gathered in and eaten daily.
You see what these things do to me? Perhaps they’re doing it to you too. I think it’s what they were designed for. If there was a big red button that turned off the Internet, I would press it without hesitation. Then I would collect every screen in the world and bulldoze the lot down into a deep mineshaft, which I would seal with concrete, and then I would skip away smiling into the sunshine.
The intensity of Kingsnorth’s feelings arises from his sense of utter helplessness, as he sees himself tied into a system from which he cannot escape.
But I am writing these words on the Internet, and you are reading them here, and daily it is harder to work, shop, bank, park a car, go to the library, speak to a human in a position of authority or teach your own children without Ahriman’s intervention. The reality is that most of us are stuck. I am stuck. I can’t feed my family without writing, I can’t write without using the laptop I am tapping away on now, and I can’t get the words to an audience without the platform you are reading this on; a platform which has allowed me to write widely-read essays critiquing the Machine. I know that many people would love to leave all of this behind, because I often receive letters from them – letters mostly sent via email. But the world is driving them – us – daily deeper into the maw of the technium.
There is no getting away from any of this. The Machine is our new god, and our society is being constructed around its worship.
This is a bleak picture: the world is being taken over by The Machine, and although we can rage, there is no escaping from it. Well, almost no escape. Kingsnorth ends by proposing ways we can institute technological askesis. These range from what he calls the “raw tech-ascetic” (those who do things like hammering their smartphones, never going online, and “bring[ing] your children up to understand that the blue light is as dangerous as cocaine”) to the “cooked barbarian” (those who live within the beast system but practice a silent dissent through observing boundaries). Given his job, Kingsnorth is trying to practice the latter by, for example, resolving never to interact with AI, never to scan a QR code, and never to allow a smartphone into his house. The boundaries, wherever you draw them, must be defended no matter the consequences.
What happens when the lines you have drawn become hard to hold? As Shari suggested when we spoke: you just hold it, and take the consequences. If you refuse a smartphone, there might be jobs you can’t do or clubs you can’t join. You will miss out on things, just as you would if you refused a car. But such a refusal can enrich rather than impoverish you.
I was at a restaurant the other day where my food was delivered by a robot, which chatted to me with programmed responses. Maybe it’s a sadistic side of me, but if I succeed in bringing Paul to the United States to speak, I will take him out to eat at that restaurant just to watch his reaction. I confess I reacted to my robot server with some alarm, and I felt an almost visceral aversion to having a machine forced on me without my consent. I think that’s a healthy response. In fact, as an AI-skeptic, there is much in Kingsnorth’s discussion with which I concur. I wish we had more thinkers like him to help keep the rest of us accountable.
I especially think Kingsnorth is spot-on about the tragedy of Athos. I heard about this firsthand from a monk who had come from the Holy Mountain. He said it began with pilgrims giving smartphones to monks as gifts, and it spread from there. Just yesterday a priest told me that Athonite monks now bring tablets into the altars with them. I read somewhere that St. Paisios warned against introducing cars to the Holy Mountain, saying that if cars were introduced, the whole character of the Athonite community would change. I can’t remember where I read that, so my readers will have to forgive me for my lack of due diligence. But after hearing Paul’s account, I have to wonder: is the invasion of Athos by smartphones a judgment for disregarding St. Paisios’s wisdom? Were cars the slippery slope?
Anyway, Amen to Paul’s concerns.
And yet…I have reservations about his overall premise. Or maybe it isn’t an actual disagreement but just a different emphasis.
Distinguishing the Remote From the Immediate
To contextualize my reservations, I’d like to make a distinction between the remote concerns about digital culture and the immediate concerns, and how these interrelate. And while I will be talking about digital culture in general, in another sense this is all about AI since everything digital is soon to be integrated with AI, whether we like it or not.
The remote concerns about digitization have to do with hypothetical fears of culture-wide catastrophes such as an AI takeover, chips in our foreheads, or churches that succumb to robot priests. By contrast, the immediate concerns about digitization include things like one’s smartphone eroding one’s attention span, one’s teenager accessing porn on his phone, or the cultural challenges wrought by digitization that directly impact us (e.g., my neighborhood doesn’t have a sense of community because everyone is plugged in, I find it hard to sleep because of notifications, etc.).
I want to suggest that an over-attention to the remote concerns (regardless of whether those concerns are true or false) causes us to make poor choices about how to address the immediate concerns. Let me explain.
In novels that take place in pre-industrial times, it is interesting to see how war is treated. Often the characters talk about never having seen a war, or perhaps hearing there is a war in the north and then wondering if it will come this far south. The reason war is discussed as something remote is because the real center of gravity is what is happening in one’s own village. Sure, there may be war that is getting closer, but until it actually arrives, this is not as noteworthy as the fact that one’s neighbors have just had baby goats or that the harvest is coming earlier this year.
To us moderns, this type of mentality seems quaint and old-fashioned, and indeed it is. But from the perspective of anthropology and neurology, it is closer to how human brains were designed to function. We were designed to process information from our immediate surroundings, in communities of around 150 people or fewer. The further the radius extends, the greater the human propensity for misjudgment. Thus, a person who is required to think about the bigger picture (for example, a king) must have a special set of virtues; consequently, the Hebrew Wisdom Literature gives considerable attention to the epistemic virtues necessary for the wise ruler.
Now our society has all sorts of technologies that make remote happenings seem especially pressing, vibrant, and relevant—in fact, more relevant than what is happening in our own community. We consume this information without the epistemic virtues of the wise king. The fact that there has been an earthquake in Turkey feels more relevant than the fact that my neighbor has no way to get to work because his car has broken down. As a result, we are apt to misinterpret the danger that remote happenings actually pose to us and our community. The more time we spend listening to news about these remote happenings, the more it seems as if hypothetical disasters are on our doorstep even when they are not. This gives all of us the burden that once rested with the king.
I’ll give an example from my own life, before explaining why I think overemphasis on the remote causes us to misjudge the most prudent responses in dealing with the immediate realm. I used to live in a small woodland village in a sparsely populated county of the Pacific Northwest, near the border with Canada. One might think it would have been a throwback to an earlier, less technological age, but many of my friends in the village spent inordinate amounts of time online, getting riled up by everything going on in the world. Partly this was because a lot of them had blue collar jobs that afforded the freedom to listen to videos and podcasts at work. Friends would routinely text me articles, videos, and podcasts. These would often be of an alarmist nature about supposedly world-changing developments. For example, the Jews are about to take over. The mark of the beast has arrived. There is about to be widespread persecution against Christians. There is a colony of giant insectoids under the earth that will come out to make war on the human race. A third of the world’s population is about to be eliminated as a result of vaccines. A second civil war is on the verge of breaking out. And so on and so forth. All of this resulted in a type of permanent state of alarm. I even got emails that circulated in this community with links to webpages in which respected priests and church leaders prophesied coming catastrophe and gave timeframes for when their prognostications would occur. When the dates came and went without incident, my friends simply moved on to new predictions of doom. This sense of permanent crisis was fueled by spending time listening to news, podcasts, and videos about actual or potential remote disasters, all of which made it appear as if these disasters were on our doorstep even when they were not. This, in turn, fueled the type of susceptibility to fake news and false information that I discussed here and here and here.
What would we think if we ignored all that noise and just focused on what is happening in our own community? That is something I found out a few months ago when I moved to Northern Virginia. Before my departure, well-meaning friends in the aforementioned village warned me about the risk of living in such a liberal place as Virginia, while one friend repeatedly suggested I would not be safe living so close to the nation’s capital when the civil war broke out. I confess, I started to wonder if it was true and whether I would be safe in Northern Virginia. Was I walking closer to the incipient crisis? Yet when I arrived and got involved in a parish, I found that not only was I not walking into a crisis situation, but that all the various crises that had consumed our attention for the last year were merely virtual, generated by spending too much time online. In this parish, where the people are generally too busy to read the internet or listen to podcasts, very few conversations occur about remote concerns in the world that have not yet reached us. In fact, I experienced the contemporary equivalent of being too occupied with your neighbor’s goats to worry about rumors of impending war. I have now been in the parish since mid-March and have never once been involved in or overheard a conversation about what’s wrong in the world, or conspiracy theories, or even the news. People are too busy just living life: attending baby showers, reading novels, struggling to give their children a Christian education, practicing music, doing their jobs, and on and on. As one friend told me when I mentioned this to him, “All the bad stuff that’s going on in the world doesn’t actually need to affect us if we don’t have our children in the public schools. It only seems really important if you spend time online.” This statement would probably need to be qualified, as the reach of cancel culture makes it increasingly difficult for committed Christians to work in many professional fields, especially medicine, but I still think he pinpointed an important principle.
I don’t actually spend much time online either – less than an hour a week. Although I’m interested in AI, most of what is going wrong in the world is, frankly, quite boring to me. Even most of what I cover in my Salvo column is based on information people send me (though, of course, I perform due diligence on the information, usually with offline resources). Consequently, when I read that some spiritual traditions are integrating robot priests into their worship, or that theorists are saying that AI may soon terminate us, this is interesting, but it does not feel as vivid and relevant as the fact that my friends have agreed to study Augustine’s Confessions with me, or that I am meeting a friend for breakfast in the morning, or that I am trying to find better accommodations so that a family member can come and live with me, or that I have a hike planned for the weekend. I simply don’t have time to keep myself worked up in a permanent state of emergency about the possibility of robot priests, artificial life forms, and the rest. The war may never reach me.
This may seem hopelessly naïve of me, the equivalent of people who denied the existence of COVID until it came to their community. Yet I would argue that attention to real life in one’s community helps us achieve the type of grounding that is necessary for knowing what is actually important. When the Dylan Mulvaney controversy was in full swing, the left-wing media wanted us to think that Dylan Mulvaney was super important, and the right-wing media also wanted us to think that Dylan Mulvaney was super important. But was Dylan Mulvaney important? Sometimes you really do have to unplug to appreciate what’s real. My point is that if we spend too much time in the infosphere, we will inevitably begin feeling a type of helplessness that is artificial, because it is based on inputs that are not an accurate gauge of reality.
There has always been perversion, ever since Sodom and Gomorrah. The difference now is that perversions are imported into our homes through the internet. People who spend a lot of time online (either directly, or by proxy through listening to those who are immersed in the news) then begin to feel that our civilization is on the brink of collapse. At the moment, those who have this doomsday inclination will likely attach their fears to the threat coming from digital technology in general and artificial intelligence in particular (if I were writing this last year, I would have said the threat of vaccine genocide).
Sometimes there are corporate interests at work that want us to believe the artificial is more real than it actually is. Take all of the hype about the metaverse last year. Mark Zuckerberg wanted us to believe the metaverse was going to take over all of life because this triggered investment. Many prepper-type Christians obliged and, in some cases, made economic decisions based on a projected need to escape from the coming reality. After all, they reasoned, if all technology is about to be integrated into the evil metaverse, then our families should get rid of all technology despite the consequences. Nobody really talks about the metaverse anymore, but while the hype was going on, it led to a real feeling of helplessness, and this helplessness was correlated with reactive decisions.
It’s easy to feel this type of helplessness after reading Paul Kingsnorth, and then to adopt the type of reactive positions that are a feature of the Luddite mentality. Indeed, too much time being focused on remote concerns (a robotic priest in a Buddhist temple halfway across the world, monks bringing smartphones into their altars, a person in California who is trying to create the metaverse) may lead us to deviate from the most prudent course of action in our immediate sphere, with the consequence that we adopt reactive and defensive positions that merely entrench us in folly.
This is the crux of my argument so I will repeat it: too much time being focused on remote concerns may lead us to deviate from the most prudent course of action in our immediate sphere, with the consequence that we adopt reactive and defensive positions that merely entrench us in folly.
Let’s look at some examples of reactive and defensive positions that may be more indicative of folly than wisdom.
Reactive Luddite Positions
Examples of reactive decisions or positions adopted by Luddites would include things like:
- The myth that AI is satanic. As much as we may hate AI, there is no ghost in the machine, while belief in spiritual machines comes with negative real-world consequences.
- A feeling of helplessness. The position of total rejection instills a sense of helplessness that disincentivizes us from finding ways to leverage the new technologies for the advancement of the church and the flourishing of humanity. (The opportunities to leverage AI for good are discussed below in the section “AI and Christian Practice.”)
- Lack of prudential reasoning. When one feels like everything digital is the front line in a battle to take over the world, then every line one draws in the sand (whether it be no cell phones in the house or refusal to scan a QR code), takes on an apocalyptic quality that absolutely brooks no compromise. But this is antithetical to the type of prudential, situation-by-situation wisdom that is the hallmark of moral reasoning according to the book of Proverbs (more about Proverbs coming to this blog soon).
- Misidentifying Moral Hazard. All technology comes with a moral hazard. A moral hazard associated with the hammer is that it enables you to construct buildings quickly, with the result that you might be in such a hurry that you don’t pay attention to the type of building you’re creating. A moral hazard associated with boots is that now you can walk in the dark, so you might be overconfident venturing into the woods at night. The moral hazards associated with AI are numerous, but the most obvious is that by being able to crunch data faster, we may become so focused on process that we neglect to ask deeper questions about the type of society we are creating, let alone the side effects that come from outsourcing decisions to machines. Proper use of technology involves being aware of the moral hazard, properly identifying where the moral hazard is located, and taking steps to avoid the hazard. But that is hard work and requires intensive engagement with the technology in question. If we take a defensive position about AI, perhaps even dismissing it as Satanic, that absolves us from having to engage in the task of identifying the moral hazard and taking appropriate caution; instead, we can simply dismiss the entire project as flawed. Since it is very difficult for generalized fear to be operationalized into situation-specific caution, the Luddite position inadvertently feeds incaution.
- Functional Repudiation of Conservative Values. A hyper-Luddite tendency can leave a person feeling so disenfranchised from “the West” that he ends up eschewing traditional conservatism for more revolutionary-sounding rhetoric. Suppose we believe the advance of “the Machine” has reached the point where there is nothing left in our society to conserve, and that society-overthrowing agendas are therefore necessary (a position both Paul Kingsnorth and Rod Dreher come precariously close to advocating, even allowing for their polemical rhetoric). Then we will not only be disincentivized from working as good conservatives to address difficulties like Moloch alignment, AI mission creep, technocracy, and the paperclip problem; we will also likely eschew educating our children to work in professional fields or to engage in the type of cultural labors that I defend in my book Rediscovering the Goodness of Creation. Instead of embracing conservative models of change rooted in a fidelity to the heritage of our past (yes, the past of “the West” despite all its problems), this type of reactionary Luddite stance may incline us to think there is nothing left to preserve, and that the breakdown of our society is inevitable. But while we may think things are bad, we haven’t seen anything compared to the tragedy awaiting us if the civilization we call “the West” falls. We know from history what civilizational collapse looks like: it means being ruled by bullies and thugs. Those who say it can’t get any worse because we are already ruled by bullies and thugs have no idea how much worse things could become in the wake of widespread civilizational collapse. We should work to avoid such a state of affairs!
- Repudiation of AI-based models of information literacy. ChatGPT is causing a crisis in education that can only be adequately met by teachers offering information literacy training. That means bringing AI into our classical Christian schools—something Luddites oppose. It is easier, and perhaps more emotionally satisfying, to take the approach of total rejection than to do the difficult work of information literacy instruction.
These are just some examples of reactive and defensive positions we may be driven to out of fear. My hypothesis is that too much time spent focusing on remote concerns (listening to doom and gloom in the news) is a driver in prompting these types of reactive decisions and positions.
Another Type of Luddite
There is another type of Luddite than the one I described above. This is the more consistent Luddite – the type I call “lifestyle Luddites.” Rather than spending lots of time online learning how bad technology is, this type of Luddite spends most of his life unplugged. He practices what he preaches. He loves the old-fashioned ways – books, animals, fireside chats, making music, playing board games, and various activities that are more human and humane. This type of Luddite is often difficult to get hold of because he has rid himself of modern communication technology, or perhaps keeps it around only for emergencies.
This is definitely a much healthier type of Luddite. I personally think this type of lifestyle is impractical, since the information-sharing networks that existed in the past have now migrated online, making the attempted replication of an old-fashioned lifestyle truly anachronistic. All the same, if someone chooses to live this way, I have no objection. For certain people, this may even be the spiritually healthier option.
I have observed that it is very difficult, if not impossible, for non-monastics to embrace this type of lifestyle without problematic ideology being layered into it, including truth-claims such as the following:
- “Everything we need to know is in books.” This is false because most of the latest research in the STEM subjects and social sciences is only published in digital journals and academic databases. I discuss the importance of academic databases here and here and here.
- “Students don’t need to be taught information literacy, because young people easily pick up computer skills anyway.” This statement might be true if we were talking about basic IT skills, but this is blatantly false when it comes to information literacy and the epistemic virtues conducive to it. Indeed, lack of information literacy instruction merely opens our students up to various post-truth epistemologies such as relativism, perspectivism, and standpoint epistemology. For more about that see here and here.
- “Modern scholarship is unreliable, so being completely unplugged and reading older works is better.” Those who make this claim usually have only a superficial understanding of modern scholarship, perhaps generalizing from bad experiences in one or two fields. The false descriptive statement here (that modern scholarship is unreliable) justifies a form of anti-intellectualism, since it relieves the person from having to understand and master the intellectual tools that enable us to wisely differentiate genuine scholarship from ideology-driven scholarship. In the end, this knee-jerk dismissal of modern scholarship results in a type of parochial insularity that dismisses the value of whole university departments and of entire disciplines to which Christian researchers routinely contribute.
Suffice it to say, if you want to unplug as a lifestyle choice, then more power to you. But don’t simultaneously set yourself up as an expert on the modern world and then make false claims about technology. These false claims are comparable to the false statements that some ultra-conservative fundamentalist Christians make about rock music even though they have never taken the time to understand the style, or to what people who have never bothered to understand modern art say about contemporary exhibitions. This is not an error that people like Paul Kingsnorth fall into, since he carefully studies everything that is going on, but it is a trap that the more consistent Luddites fall into.
The Empire of Digitization and AI
As an alternative to the Luddite approach in its various permutations, I want to hold out the example of how early Christians responded to life in the Roman Empire. This is appropriate because machine culture is a type of empire. Whether or not the remote concerns referenced above (chipped foreheads, robot priests in our local church, AI-generated life forms, etc.) ever move from the hypothetical remote to the actual immediate, a culture of AI is about to envelop us just as the early Christians were enveloped by Rome. But let me explain how AI is about to become like an empire, before circling back to talk about how Christians approached the Roman Empire.
On its own, AI is not intrinsically useful to human beings. If you were to release an AI chatbot into a hunter-gatherer society, or into the time of Jane Austen, it would be no more useful than releasing a book into a purely oral culture. Before AI can perform services that are useful to men and women, something else must first happen: we must adjust our environment so that the machine can flourish within it. A good example of creating an environment in which a machine can flourish is the dishwasher. We don’t just build a robot to stand at the sink and do dishes, because that would be exceedingly expensive and not very effective. Instead, we build a three-dimensional space in which a “robot” (in the most general sense of the term) can accomplish a task that would have been impossible within a natural environment. The same applies to cars. You don’t just release a robot down the street and say, “build me a car.” Instead, you build an entire environment around the robot that enables it to have the right interactions; in short, you build a car factory.
What is true on the micro level with things like dishwashers and car factories is also true on the macro level of society as a whole. The process of customizing our social spaces, routines, and expectations for our machines has been a recurring theme throughout the history of technology. One very clear example is with the invention of the automobile. I pointed out earlier that when the automobile was invented, it was widely believed that the new transportation would save time for ordinary people, especially women. While the ease of travel did open up new opportunities, its promise as a time-saving mechanism never materialized. This is because the infrastructures of economics, townscapes, and social life simply adapted themselves to the invention and the expectations that came in the automobile’s wake. We enveloped the world around the new invention. Moreover, when you consider all the changes that came as a result of cars (the emergence of large department stores, zoning requirements, city planning, new expectations for buyers and sellers, and on and on), we can truly say that our lives are controlled by cars even if we don’t have one. There is nothing surprising about this, for throughout history we have been accommodating our internal and external environments to inventions in a symbiosis that causes the inventions to take on a new importance and indispensability. (That, incidentally, is why I don’t believe tools are ever neutral, but that is a different topic.)
What happened with the car is analogous to what is happening today as we forge an information society. Before information and communication technologies can appear smart and useful, we must first envelop the world around them. We must create a society in which digital code can, so to speak, flourish. What does a world look like after it has been enveloped around digital code? Well, look around you. Contemporary life is increasingly built on top of a substructure known as the “internet of things,” whereby physical and digital objects constantly communicate with one another—and often make decisions for us—in a sprawling network of interoperability. You might think you are only using the internet when you are interacting with a screen, but you are actually using the internet when you shop, drive, watch TV, fly on an airplane, or go to the mall. This is because the technologies that facilitate activities like shopping and driving are increasingly built on a foundation of ubiquitous computing, embedded systems, and wireless sensor networks. These systems now form an entire ecosystem of semi-autonomous processes that could not easily be stopped even if we wished to, short of massive disruptions.
Now AI is about to be integrated into this digital ecosystem—it will be everywhere, all around us, seamlessly integrated into the infrastructures of modern life, just as the empire of Rome was integrated into the life of the early Christians.
Use Unrighteous Mammon and Christianize AI
Given the type of world we are entering, it becomes increasingly difficult to opt out. Whether one is a “cooked barbarian” or a “raw tech-ascetic,” to use Kingsnorth’s categories, it will become ever harder to function without accommodating oneself to the rule of The Machine. My argument is that it is not only possible to pursue such accommodation as a faithful Christian, but that we should eagerly embrace such accommodation as a form of Christian discipleship.
Consider how the early Christians behaved in the Roman Empire. Yes, they drew lines and refused to engage in certain practices (for example, burning incense to Caesar) but they also worked within the existing structures to slowly transform them. For example, they paid taxes, served in the military, Christianized what they could (for example, much of the philosophy and literature), and used governmental and cultural infrastructures to promote the gospel whenever possible. In other words, they became skilled at making friends with unrighteous mammon, to use the phraseology of the rather puzzling passage in Luke 16.
In addition to thinking about God’s people under the dominion of Rome, we might equally reflect on how God’s people were told to behave after being taken into captivity in Babylon. The prophet Jeremiah told them to take wives and plant vineyards and get on with being the people of God in spite of the evil around them.
So let’s use AI and exploit it for good, just as the Israelites used Egyptian gold to build the tabernacle. Let’s Christianize AI! What might this look like in practice? Christianizing AI has both a theological and a practical component. Let’s start with the theological dimension.
AI and Christian Theology
Part of what it means to Christianize AI is to develop a theology of AI, similar to how Christian teachers have developed a theology of driving, or a theology of architecture, or a theology of craft. Although this may sound far-fetched, our tradition already gives us a head start in developing a theology of AI.
Consider that Christian thinkers have long taught that prudence, one of the four cardinal virtues, involves collecting information prior to making a decision. Sometimes information-collecting comes from a human source (e.g., wise counsel or a book), sometimes from a natural source (e.g., evidence of past high-water marks on a riverbank), and sometimes from an electronic source (e.g., a calculator or spreadsheet). There is no reason, in principle, why AI could not fall into this last category by offering an additional source to inform wise decision-making. A theology of AI would explore how the doctrine of prudence sheds light on the contemporary situation, and how we can interact with AI in a prudential way.
Beyond the question of prudence, however, AI needs to be situated within the context of Christ’s redemptive work. At the heart of Christian theology is the doctrine that, through Christ, the Creator God is progressively renewing the world and eliminating the reign of death. This is the theme of my recent book with Ancient Faith, Rediscovering the Goodness of Creation. In that book I point out that while the church fights against death through explicitly “spiritual” means (like preaching the word, bringing salvation to the nations, and administering the sacraments), the church also acts as an agent in God’s renewal by supporting technologies and inventions that help advance, in partial and incomplete ways, Christ’s victory over death. For example, St. John Chrysostom’s pastoral theology put a high premium on self-care, and he advocated various medicinal remedies to help with ailments. Other missionary saints have worked as doctors and set up hospitals. Christians today can continue in this tradition by embracing the potential that new technologies have for good. For example, if AGI could tell us when a bridge is about to collapse or find a cure for cancer, Christians of all people can rejoice to see the effects of the curse reversed in small and incomplete ways. Believers can be at the forefront of these advances by pointing toward a humane use of AI technology, and by working to situate innovation within a context that accepts, rather than seeks to disrupt, the value of limits and the givenness of the creational order.
Additionally, the church plays a prophetic role in warning against dehumanizing uses of AI technology. Yet more than merely offering critique of this or that wrong use of AI, our theology gives us resources for addressing the broader structural shifts that emerge when our interaction with the world, and with each other, comes to be mediated through algorithms. How is AI—and the larger order of The Machine in which it is situated—changing our understanding of ourselves and those around us? How do we preserve our humanity in a world where information is valued more than communion, process more than substance, efficiency more than flourishing, the tree of knowledge more than the tree of life? The church has the philosophical resources to address these questions with wisdom and irenicism, and it has the prophetic mandate to do so. But even more than simply addressing these pressing issues of our time, we must be able to show—through our liturgy, our sacraments, our communities, and our way of life—what it means to renew the human in a technological age. I know Paul would agree with this, but where I suspect we part ways is that I believe one of the ways we can renew the human in a technological age is by using AI. Let’s unpack what that looks like in practice.
AI and Christian Practice
How can Christians work within the structure of what Kingsnorth calls The Machine? This is a question that specialists in different fields have already begun to answer. For example, the Center for Humane Technology is bringing together a consortium of experts to explore ways to steer technology towards human flourishing. I am in contact with people who are using modern technology, and even AI, to support craftsmen working with their hands and reviving old trades. As already mentioned, I am personally working to integrate AI into classical Christian schools to increase information literacy and to raise awareness of previously remote philosophical questions and literary texts. All of these projects are our own equivalent of the early believers Christianizing the Roman empire with the gospel. It is also similar to what happened with cars.
I already mentioned that, as a whole, cars have had a dehumanizing impact. They have led to the erosion of healthy neighborhoods by enabling us to curate our social communities. Even worse, cars have made possible the twin evils of the modern city: zoning and city planning. Yet in His providence, God has not seen fit to judge the automobile by bringing an end to the era of the car, just as in His providence He enabled His people to flourish under the Babylonian empire and under the Roman empire. God enabled the church to flourish in the age of the automobile, and the Holy Spirit has even helped us leverage cars to advance the gospel.
This doesn’t mean cars are a neutral tool, nor does it mean their net effect hasn’t been evil. There is no inconsistency in recognizing that, as a whole, cars have had a dehumanizing impact on society while still recognizing that we can use cars, and teach our children to drive cars, in a God-honoring way. Similarly, we can recognize that AI is likely to have a negative impact on society (in fact, I have been quite open about the possibility that AI could cause the human race to go extinct) and still teach our children to use it in a God-honoring way. Just as I advocate driver’s ed, even though I realize my family might all die in a car crash, so I advocate AI ed and am actively working to provide direction for how AI can be taught in the classroom.
Discerning how to use AI in a God-honoring way is hard. It is much easier to simply have an ideology or set of principles that we then go and apply to the specific situations we encounter. Maybe it’s because I’ve read too much Edmund Burke, or perhaps because my inclinations as a virtue ethicist in the information field have given my thought a strong pragmatic dimension, but for whatever reason, I have a strong distaste for this type of ideology-driven approach. I am much more inclined to assess each issue and situation on its own and ask, “What will promote human flourishing in this situation?” The particularities of flourishing will differ from person to person, family to family. For example, in my own life, I take my smartphone to bed with me, on airplane mode, and listen to audiobooks or Mozart piano sonatas to help me get back to sleep if I have insomnia. To me, that is a win-win. I realize that for others, this would not work because of the temptation to turn on the phone and check for messages. I could imagine a family that chooses to use only candlelight after the sun has set, and yet has a robot (with most of its features turned off) read otherwise unavailable audiobooks to them while they sit together by candlelight. Maybe another family will have a total no-technology policy yet use ChatGPT for language instruction because they want to read Dante’s Divine Comedy in the original. Maybe another family lets their children use AI every Tuesday if they get their homework done, but only to practice information literacy exercises.
The type of situational prudence I am urging does not imply that technologies are neutral tools. Nor does it deny that there are structural and systemic problems that come with a technology-saturated society. Even families choosing to use AI prudently are doing so in the midst of an overall ecosystem that is systemically antithetical to our flourishing. Our social ecosystems and formative institutions can lurch society toward folly as the default modus operandi, even if specific individuals can take steps to cultivate wisdom. But again, the same could be said of being a Christian under the order of Rome, or in the age of the automobile. Christians could not opt out of living in the Roman empire, just as we cannot opt out of living in townscapes (and even mental plausibility structures) built around cars. But in both cases, Christians found ways to work within the systems and to flourish. Obviously there are lines we have to draw, but just as it is possible to compromise by not drawing lines where we should (for example, worshiping Caesar), so it is possible to be overly scrupulous and draw lines where it is not necessary. Paradoxically, over-scrupulosity can end up backfiring and playing into the very systems we are trying to avoid. As I pointed out in my article, “Why AI is Not Satanic”:
Have you ever noticed that families who have a total no-technology policy are the same families who, once they decide to let their teenager get a smartphone, eventually let them operate with no restrictions or purely minimal monitoring? This might seem like a paradox, but it makes sense when we understand that both naïve technological optimism and uncritical rejection (i.e. AI is satanic) are two sides of the same coin. The alternative to both is critical engagement…
Critical engagement of the sort I am recommending is hard. It means forestalling our tendency to think we know everything about the implications of a technology without first doing the serious work of research, analysis, and synthesis. It means developing a tolerance for complexity even when simplistic answers seem more emotionally satisfying. It means not dismissing out of hand the things we don’t understand. And above all, it means turning from a black and white approach to asking the more difficult philosophical questions about how a given technology may change our way of understanding the world and ourselves.
That type of engagement takes courage because it involves confronting and engaging with the things we don’t understand when it is easier to take a dismissive approach.
Looking Toward the Future With Hope
If AI intensifies the degeneracy and pornification of society, God may come in judgment and shut the whole thing down. This may happen through external intervention, or perhaps organically (for example, as a natural outgrowth of humans failing to solve the alignment problem in time). Or, as we saw with the automobile, He may simply show the church how to advance in the midst of the new order and even leverage it. Or he may allow the order of The Machine to become a source of persecution to purify His people. It may even be a combination of all of the above, just as the Roman Empire was both a tool for purification and a mechanism for the advancement of the gospel.
My personal theory (or perhaps it is merely a hope) is that there are great times ahead. Consider that just as the Kindle helped spawn a new market for limited-edition hardcovers, and just as fast-food outlets helped ignite the slow-food movement, and just as plant-based “meat” has led to “real meat” becoming a marketing feature, so the reductionistic and dehumanizing aspects of The Machine will give the Church the type of opportunities associated with differentiation. People may rediscover older practices that have been lost, like writing letters by hand, or the importance of face-to-face encounters for communicating important information. In an AI-integrated society, the church may be an oasis where people can return to simplicity and to basic things like growing fresh food, learning to cook, and paying attention to the world around us. Our schools, churches, and communities are poised to be at the forefront of this renewal, as they are positioned to say to a world starved of transcendence, “Come and see.”
- Bring AI Into Classical Christian Schools!
- Why AI is Not Satanic
- Absolutizing and Abstraction, Conservation and Piety, by Alan Jacobs
- “The West Must Repent”
- ChatGPT in the Classroom (Part 1)
- ChatGPT in the Classroom (Part 2)