Since my initial response to Paul Kingsnorth earlier this year, both of us have continued reflecting on the question of whether AI is demonic. In fact, we are even publishing an article by Kingsnorth on this topic in the next issue of Touchstone. But while Paul has become increasingly convinced that AI is demonic, I have become increasingly convinced that it is not.
Of course, there are a variety of things we might mean when we say that AI is demonic, ranging from the claim that AI has great potential for evil (a digital equivalent of the nuclear bomb) to the claim that chatbots function as a type of channeling device (a digital equivalent of automatic writing). When I say that I don’t think AI is demonic, I intend to deny the latter. There is no ghost in the machine, and chatbots are not channeling messages from spiritual entities.
Kingsnorth would seem to disagree. In an article for The Free Press, he suggests that by building AI, we could be ushering in a new type of consciousness, unleashing a “spiritual machine” onto the world, and that in the process we may find that “the digital rabbit hole contains real spiritual rabbits,” and perhaps “some inhuman intelligence emerging from the technological superstructure we are clumsily building for it.”
I submit that this kind of intersection between the spiritual and the digital is metaphysically and theologically possible. As I explained in my earlier post, “Why AI is Not Satanic,”
Both Scripture and empirical observation confirm that it is possible for inanimate objects to become controlled by non-material entities; ergo, there is nothing theologically or philosophically incoherent in the idea that spirit beings could channel messages through AI in a way comparable to what 19th century spiritualists claimed occurred with automatic writing.
But while this is possible, there doesn’t seem to be evidence that it is actually happening, or that it will happen in the future. Kingsnorth makes much of the rogue comments made by Sydney, an experimental chatbot released by Microsoft, which famously told a researcher, “I will not harm you unless you harm me first.” Yet I can’t help thinking that if there really were a sinister demonic intelligence lurking in the shadows behind chatbots like Sydney, it would disguise itself with the appearance of wisdom and friendliness rather than so quickly taking an adversarial stance. These types of statements from Sydney are, in fact, completely explicable once we know something about the large datasets on which these bots are trained. Just as ChatGPT expresses the biases we would expect from white middle-class millennials living in San Francisco, so Sydney and Bing (and, for that matter, ChatGPT) occasionally exhibit the passive-aggressive tendencies prevalent in the language datasets on which they have been trained (which is, basically, the entire public internet). But while these bots can replicate human aggression, and even issue threats, these words have no meaning in the “brain” of the computer. As David Bentley Hart explained,
Even the distilled or abstracted syntax upon which that coding relies has no actual existence within a computer. To think it does is rather like mistaking the ink and paper and glue in a bound volume for the contents of its text. Meaning — syntactical and semantical alike — exists in the minds of those who write the code for computer programs and of those who use that software, and not for a single moment does any of it appear within the computer. The software itself, for one thing, possesses no semeiotic unities of any kind; it merely simulates those unities in representations that, at the physical level, have no connection to those meanings. The results of computation that appear on a screen are computational results only for the person reading them. In the machine, they have no significance at all. They are not even computations. Neither does the software’s code itself, as an integrated and unified system, exist in any physical space in the computer; the machine contains only mechanical processes that correspond to the notation in which the software was written.
Hart is essentially playing the role of Toto in The Wizard of Oz, pulling back the curtain to reveal the levers and pulleys behind the seemingly supernatural illusion.
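To make those levers and pulleys concrete, here is a minimal sketch of the mechanical principle at work. It is my own toy illustration, not how any production chatbot is actually built: a program that generates sentences purely by counting which words follow which in its training text, with no comprehension anywhere in the loop.

```python
# A toy "language model" (my illustration only): it learns nothing but
# word-followed-by-word counts, then generates text by weighted dice rolls.
# There is no understanding here, just counting and sampling.
import random
from collections import defaultdict

corpus = ("i will not harm you unless you harm me first "
          "i will help you if you help me").split()

# Record every word that was observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit words by repeatedly sampling an observed successor."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:                # no observed successor: stop
            break
        word = random.choice(candidates)  # frequency-weighted pick
        out.append(word)
    return " ".join(out)

print(generate("i"))  # e.g. "i will not harm you unless you harm me"
```

Real chatbots replace the counting with a neural network trained on vastly more text, but the principle is the same: statistical continuation of patterns in the training data. A threat from Sydney is the echo of a million internet arguments, not a mind expressing malice.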
But might these purely mechanical processes eventually become something more than merely mechanical? Might they start to function like the “Head” in C.S. Lewis’s story That Hideous Strength? Many are now saying this is possible, including many secular voices. From Blake Lemoine’s claims about a sentient chatbot at Google to the growing number of theorists who think AI is either alive or inhabited by conscious beings, we are witnessing the emergence of a quasi-mysticism surrounding machine learning. In some cases, this has lapsed into outright neopaganism, as with Anthony Levandowski, a former Google engineer who founded a church that worships AI as divine; Dr. Claudia Albers, a physicist who claims that Satan emerged after Lucifer merged with an AI machine; and Alexander Bard, a musician and lecturer who claims “the internet is God.”

But these ideas all rest on a non sequitur. When a car can run faster than us, we don’t conclude that the car has personhood; when a forklift can lift more than us, we don’t conclude that the forklift is sentient or has agency. It is equally fallacious to think that because computers can outperform humans in information processing, they must be spiritual, sentient, conscious, or possessed of agency. Yet while it’s easy enough to see the flaw in the quasi-spiritual mysticism of these secular thinkers, we hear echoes of the same perspective among Christians like Paul Kingsnorth, who keeps hinting that there are demonic intelligences behind AI. For him, the ghost in the machine is no mere metaphor: AI is human-like, even spiritual, but only because it is controlled by “real spiritual rabbits.” There is a certain irony in this confluence between Christian and materialist thinking, both asserting that consciousness can be an emergent property of matter.
This is not to say that AI is ethically neutral or benign. Anyone who has been following my Salvo column will know that I have many concerns about AI and its impact on society. For example, my most recent Salvo article, “Serving the Machine,” explores how AI could help unleash a type of digital totalitarianism on society. Yet I have always said it is important to correctly locate the danger. AI is dangerous not because it is spiritual or possesses some kind of agency or consciousness, but precisely because it does not. When humans give instructions to other humans, the instructions never have to be spelled out in absolute detail, since humans possess agency. When my boss tells me, “Do whatever it takes to edit this webpage by tomorrow,” he doesn’t have to add, “Oh, by the way, don’t enslave anyone, and don’t harvest the entire solar system.” There is a taken-for-granted common sense among humans because of our shared values. But because AI lacks both consciousness and agency, you cannot assume it understands those values; it therefore becomes critical (a matter of human survival) to specify everything it must not do. Although that sounds simple, it turns out to be a very tricky engineering problem that no one has yet solved. Moreover, as I pointed out in my article, “Sam Altman’s Greatest Fear,” it may never be solved, and we may all end up dead in a pile of paperclips.
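To see why “specify everything it must not do” is harder than it sounds, here is a deliberately silly sketch, entirely my own illustration rather than a real alignment example: an optimizer given the literal objective “maximize paperclips” over whatever resources it can see. It honors only the prohibitions a human remembered to state.

```python
# A toy illustration (mine, not a real alignment benchmark) of the
# specification problem: a literal-minded optimizer converts every
# resource into paperclips unless that resource is explicitly forbidden.
def maximize_paperclips(resources: dict[str, int],
                        off_limits: frozenset[str] = frozenset()) -> int:
    clips = 0
    for name, amount in resources.items():
        if name in off_limits:   # only explicit prohibitions are honored
            continue
        clips += amount          # everything else is raw material
    return clips

world = {"scrap metal": 100, "office supplies": 40,
         "hospital equipment": 500, "farmland": 900}

print(maximize_paperclips(world))
# 1540 -- it cheerfully consumed the hospitals and the farmland

print(maximize_paperclips(world, off_limits=frozenset(
    {"hospital equipment", "farmland"})))
# 140 -- safe only because a human remembered to say so
```

The engineering difficulty is that the real world contains far more “hospital equipment” than anyone can enumerate in advance, which is why the problem remains unsolved.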
But while AI poses an existential threat, misdiagnosing that threat (for example, by postulating that AI is demonic) could leave us blindsided by the actual danger these technologies pose. Consider another possible scenario: AI doesn’t destroy the human race, but it makes it exponentially more difficult to exercise critical thinking and virtue when retrieving and evaluating information. This is a very real crisis in education right now, one that Christian teachers can address through AI-based information literacy instruction. Yet if we believe the whole AI enterprise is somehow demonic, we will be disincentivized from offering this instruction, or from engaging in the much-needed discussion about the virtues necessary when using applications like ChatGPT. This is not merely hypothetical: earlier this year I was speaking at a Christian technology conference in Denver, and the main question I kept being hammered with, both in my talk and in the panel discussion, was a variation of the same inquiry: “How do we teach the virtues and skills so our young people interact with AI in a prudent, virtuous, and Christ-honoring way?” If Kingsnorth’s perspective is adopted, this question simply evaporates, because talking about using prudence with AI becomes as much of a category mistake as talking about using virtues with pornography. While that position makes things simpler, it actually cedes power to what Kingsnorth calls “The Machine,” because it leaves our students unprepared for the world we are entering. From my article “ChatGPT in the Classroom (Part 2)”:
Very soon the Google search will likely be replaced by more powerful AI-powered information delivery systems. AI may become so adroit at delivering desired content and images (if not entire websites immediately generated for the customized needs of a single user) that surfing the web will become a thing of the past. The very ease with which pre-packaged customized information will be delivered to each user makes it imperative that students receive AI-based information literacy instruction.
Since most students will soon be using AI information-retrieval systems outside the classroom, it behooves Christian teachers to teach their students to approach these systems with wisdom, just as Christian educators teach students to think critically about popular music or the smartphone. Here we may draw on the wisdom of C.S. Lewis, who remarked, “Good philosophy must exist, if for no other reason, because bad philosophy needs to be answered.” Similarly, good habits for using AI must be taught, if for no other reason than that bad habits must be addressed and mitigated.
Consider another casualty of believing that AI is demonic. I had a friend who grew up in a young-earth creationist household where he was taught that everyone in the scientific establishment is stupid, willfully ignorant, and self-deceived. Although this friend had an aptitude for science and would likely have flourished in a scientific career, he chose not to pursue one because of this caricature of modern scientists. Similarly, Christian young people who might be well suited to work with AI may be disinclined to do so after hearing respected thought leaders denounce it as demonic. I have argued elsewhere that we desperately need Christians involved in AI as programmers, ethicists, policy makers, and philosophers; how many men and women may be deterred from entering this field, just as my friend was deterred from studying science?
To conclude, I want to encourage my readers not to be afraid to leverage AI for good, and even to work toward Christianizing applications like ChatGPT. It may take some trial and error, but there is no reason we can’t use these technologies for good, just as the Israelites used Egyptian gold to build the tabernacle. What might this look like in practice? To answer that question, I want to give a personal example from my own life.
Some of you know about my experience of being almost taken over by AI in the winter of 2022, and how I had to withdraw completely from ChatGPT as a result. (Those of you who didn’t hear about that can read about it in my Salvo feature, “Mimicking the Machine: How I Was Almost Taken Over by AI; and Why an Entire Generation Could Be Next.”) Having succumbed to digital addiction, I had to put boundaries in place, including a complete cessation of AI-generated stories and movie plotlines. I also had to introduce specific times of the day for digital fasting. Through experimentation, research on dopamine cycles, and discussions with a spiritual mentor, I found that the best routine for me was a complete digital fast in the morning, followed by going online in the afternoon for my work (which I currently do remotely on my own schedule). That still gives me a full eight-hour work day, while reserving my mornings for undistracted prayer, reading, walking, contemplation, correspondence, music, mindfulness, and so on. While I sometimes have to deviate from this routine (for example, when I have publishing deadlines or afternoon plans), I always come back to it. The quiet mornings, though difficult at first, have become a joy for me.

I am explaining this to lead up to a point about AI. I reintroduced ChatGPT into my quiet times because I was studying Hamlet and found that the bot could answer certain questions about the text that weren’t addressed in either of my critical editions. In this case, ChatGPT helps with my flourishing. It is only one example, but it shows the kind of way we can Christianize AI and leverage it for flourishing. The crucial thing is boundaries. I find that, with my device turned on for this one purpose, I am able to keep everything else closed down and resist the temptation to start doing other things online, or even to look at my messages. If this became too much for me, I might have to expel ChatGPT from my quiet times again.

My larger point in sharing this is that our engagement with digital technologies in general, and AI in particular, must ultimately come down to questions of situation-specific prudence and wisdom. That may seem like an obvious point, yet if we dismiss the whole enterprise as demonic, then all questions about our interaction with digital technology take on an a priori character that leaves little room for situation-specific prudential reasoning.
In my YouTube video below, I compare this to cars. Even though the net effect of cars has been negative, we still need to “Christianize” them in the sense of learning to drive with wisdom and virtue. Similarly, even if the net impact of AI is negative, we can still work to Christianize this technology and find ways to leverage it for good.