Why AI is Not Satanic

In The Wizard of Oz, the wizard is thought to be a powerful and mysterious ruler with magical abilities. However, it turns out he is merely using mechanical devices and tricks to create the illusion of mystical and spiritual powers. He admits that he is “a very bad wizard” but “a very good humbug.”

In another story, C.S. Lewis’s That Hideous Strength, we have a reverse scenario. A severed head of a criminal is kept alive, ostensibly through mechanical means, to control and manipulate humans. But in the course of the narrative it turns out that the mechanical apparatus that is thought to control the head is just a front for demonic spirits.

These two stories help frame possible answers to a question that people have been asking me with increasing regularity: do I think artificial intelligence is satanic and therefore something Christians should completely avoid? For some Christians, the answer to this question is yes: AI is demonic. On this view, AI is only ostensibly mechanical or digital but in actual fact demonic, like the talking head in That Hideous Strength. Yet other Christians argue that those who take this view are making the same mistake as the characters in The Wizard of Oz. Who is correct?

Before jumping into this question directly, it may be helpful to sketch the big picture of how I approach AI theologically. Those of you who follow my Salvo column know that I am AI-skeptical: I am not convinced AI is leading us into a better future, but quite the opposite. Even so, on Biblical grounds I do not completely rule out the potential for good in new technologies like machine learning. What are those Biblical grounds?

Well, first there is the Biblical virtue of prudence. According to Proverbs, prudence involves collecting information prior to making a decision. Sometimes information-collecting comes from a human source (wise counsel), but sometimes it also comes from an environmental source (e.g., evidence of past high-water marks on a riverbank), or an electronic source (e.g., a calculator or spreadsheet). Could AI fall into the latter category by offering an additional source to inform wise decision-making? In fact, AI already does just that: every time we do research using a search engine, we are using AI. If AI is satanic, never do another Google search!

But prudence is not the only Biblical principle that informs my approach to AI. There are numerous Biblical principles that undergird the project of utilizing the resources of this world for technological development. I spend quite a bit of time outlining these principles in my new release with Ancient Faith, Rediscovering the Goodness of Creation. In this book I argue that it is a Christian imperative to use the raw materials of creation to perfect and improve human culture. Given this theological framework, I have become convinced that Christians should be at the forefront of AI, leading the way in steering this technology towards humanizing rather than dehumanizing uses. Here we can applaud Pope Francis for being at the forefront of this necessary conversation.

But Christians can also be at the forefront of AI by helping us think critically about this technology in ways that avoid both the extreme of uncritical rejection and the extreme of unthinking adaptation to the latest innovations. As I argued in the video portion of my article, “Why I Don’t Reject AI, and am Not a Luddite,” the alternative to both is to be thoughtful and reflective about AI as we would be with any new technology. That includes analyzing how it might be used in dehumanizing ways, but also thinking about how it could be used to the glory of God.

What then are some ways we could use AI constructively? Well, if AI could tell us when a building is about to collapse, or help find a cure for diseases, or relieve human workers from repetitive tasks, all these could contribute toward flourishing.

But what if AI is satanic? If AI is akin to Lewis’s talking head, then I admit that all the things I’ve just shared, no matter how seemingly beneficial, amount to little more than a Faustian bargain. If AI is satanic, then using it as a source for prudent information-collecting is equivalent to visiting a medium to find out what to do. If a medium helped us save lives by telling us when a bridge was about to collapse, that would not justify the occult. By the same token, if AI is satanic, then using it towards human flourishing is like trying to use an incantation to make the crops grow better. So we need to confront the spiritual question head-on.

Statements like “AI is demonic” or “Satan is behind AI” can mean a variety of different things. Sometimes we use “demonic” to describe the activity of demons, who are living and willful beings. But the software and hardware that make AI possible are not alive. A machine does not have its own agency, and it is not sentient either in the sense that a biological entity is or in the sense that angelic beings are. It is just a machine. As in The Wizard of Oz, behind the curtain are mere mechanics, though in this case not levers and pulleys but billions of silicon chips spread across data centers and servers, including, in one case, a supercomputer consisting of 285,000 CPU cores and 10,000 GPUs, with individual servers housing 8 NVIDIA V100 GPUs and 256 GB of RAM.

David Bentley Hart helps pull back the curtain even further to deconstruct AI’s apparent wizardry. From Hart’s article “Reality Minus”:

Even the distilled or abstracted syntax upon which that coding relies has no actual existence within a computer. To think it does is rather like mistaking the ink and paper and glue in a bound volume for the contents of its text. Meaning — syntactical and semantical alike — exists in the minds of those who write the code for computer programs and of those who use that software, and not for a single moment does any of it appear within the computer. The software itself, for one thing, possesses no semeiotic unities of any kind; it merely simulates those unities in representations that, at the physical level, have no connection to those meanings. The results of computation that appear on a screen are computational results only for the person reading them. In the machine, they have no significance at all. They are not even computations. Neither does the software’s code itself, as an integrated and unified system, exist in any physical space in the computer; the machine contains only mechanical processes that correspond to the notation in which the software was written.

To be sure, the sheer complexity of computers makes it seem like they are conscious. I get that. But, building on Hart, I would argue that it is actually the other way around: it is not that the complexity suggests consciousness, but that conscious entities caused the complexity. Humans manifest our capabilities through our tools, which then become external manifestations of humanity. As I observed last August, when news began to surface about Google’s LaMDA system allegedly achieving consciousness, the fact that computers can be so “smart” is not a testament to machine sentience, but to human consciousness and thus, ultimately, to God’s design.

Humans are amazing creatures, but only because we have been programmed by a Designer. Thanks to His design, humans can create new things by rearranging the materials of creation. By contrast, LaMDA is not able to create anything altogether new; at its best, it merely points to the intelligence of its human creators, and thus to God’s design.

Many of the Christians who assert that AI is demonic will not dispute Hart’s observation, nor anything else I have said here. Instead, for them, saying “AI is satanic” means that Satan is behind it in the sense of having caused or inspired the invention, in much the same way as we might say the devil was behind World War I, meaning that he was a causal agent responsible for the war.

While it is certainly possible AI is satanic in this sense, it is unlikely, since the claim hinges on too simplistic a view of cause and effect. Most of the time when we think about cause and effect, we have a very simple picture in our minds: something like a billiard ball knocking another into the pocket, or a rock falling and starting an avalanche. There is one cause, and then afterwards come the results. But when you are dealing with the social, historical, cultural, or anthropological conditions that lead to technological innovation, most change happens through the confluence of multiple causes. (I have discussed this in relation to ideal and material causation in my earlier post, “Do Ideas Have Historical Consequences?”)

So rather than thinking of technological change in the cause-and-effect framework appropriate to a falling rock triggering a reaction, we should think of it in the framework appropriate to an ecosystem, in which changes occur as the result of multiple interactions over long periods of time. The devil may be involved in some of these interactions while being uninvolved in others. Simply saying “the devil caused AI” or “the devil is behind AI” betrays too simplistic a view of how human society actually works. Sure, we can back up and find the devil’s work somewhere in the chain of causes and effects that led to any cultural condition, including the conditions that enable new inventions; but because we are dealing with hundreds of causes and effects that function like an ecosystem, we can also find God’s hand. At the end of the day, this genealogical method is of only limited value in the task of discernment and critical appropriation.

“OK,” someone might say, “even if Satan isn’t behind AI in the sense of causation, he may still be behind it in the sense of correlation. That is to say, using AI might open a person up to the powers of darkness in the way that a Ouija board does. Sure, it isn’t alive, but spirits can inhabit objects.” It is important to note that those who make this claim are not merely saying that someone could use AI for dark purposes, as someone can use a car to serve the devil; rather, they are suggesting that there is something intrinsically demonic about AI in a way comparable to a Ouija board.

Ironically, theories like these converge with ideas posited by unbelievers in the tech world who attribute spiritual-like powers to algorithms. From Blake Lemoine’s claims about a sentient chatbot at Google to the growing number of theorists who think AI is either alive or inhabited by conscious beings, we are witnessing the emergence of a quasi-mysticism surrounding machine learning. In some cases, this has lapsed into outright neopaganism, as with Anthony Levandowski, a former Google engineer who founded a church that worships AI as divine; Dr. Claudia Albers, a physicist who claims that Satan emerged after Lucifer merged with an AI machine; and Alexander Bard, a musician and lecturer who claims “the internet is God.” These ideas all rest on a non sequitur. When a car can run faster than us, we do not conclude that the car has personhood; when a forklift can lift more than us, we do not conclude that the forklift is sentient or has agency. It is equally fallacious to think that because computers can out-perform humans in information processing, computers must be spiritual, sentient, conscious, or possessed of agency and personhood.

While it is easy enough to see the flaw in these ideas when they come from secular thinkers, we hear echoes of the same perspectives among the Christians who think AI is satanic. For them the ghost in the machine is no mere analogy; AI is human-like, even spiritual, but only because it is controlled by demons. There is a certain irony here, as reactions to AI produce confluences between Christian and materialist thinking, both asserting that consciousness can be an emergent property of matter.

When you study how deep learning and neural networks actually function, the reality is much more prosaic. Yes, it seems like magic, but it is really just pattern recognition at a very high level. That is why a system like ChatGPT, even if it can pass the Turing test, does not escape Searle’s Chinese Room argument: it manipulates symbols according to patterns without understanding what any of them mean.
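For readers who want to see the point rather than take it on faith, here is a deliberately tiny sketch in Python. It is not how modern systems are built (they use neural networks trained on vast corpora, not word counts), but it strips “language modeling” down to its statistical skeleton: the program predicts the next word purely from counted patterns, and there is no understanding anywhere in the loop.

```python
from collections import defaultdict, Counter

# A toy bigram "language model": it predicts the next word purely from
# counted patterns in its training text. The training text here is made up
# for illustration.
training_text = (
    "the wizard is a humbug the wizard is a very good humbug "
    "the machine is a machine"
)

# Count how often each word is followed by each other word.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("wizard"))  # prints "is": pattern-matching, not comprehension
```

The program "knows" that "is" tends to follow "wizard" only in the sense that a groove in a record "knows" the next note. Scale this principle up enormously and you get fluent text, but the fluency remains a fact about patterns in human-produced training data, not about a mind inside the machine.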

None of this proves that AI could not be demonic in the sense that a Ouija board is demonic, given the distinction between material and efficient causes. Both Scripture and empirical observation confirm that it is possible for inanimate objects to become controlled by non-material entities; hence, there is nothing theologically or philosophically incoherent in the idea that spirit beings could channel messages through AI in a way comparable to what nineteenth-century spiritualists claimed occurred with automatic writing. But the burden is on those making this claim to bring forth evidence that this is the case. If we do not have clear evidence that AI is demonic, and if we have not performed due diligence on the claim, is it wise simply to assert it? I would say no, and here’s why. Throughout Proverbs we read that fools readily believe a case without seeking out and attending to the criticisms of it; we read that it is the nature of folly to attend to and spread rumors, inaccurate reports, and unreliable tales while failing to diligently pursue the truth of a matter. By contrast, Proverbs teaches that the wise examine things carefully before moving to judgment or passing on a report. How many of us who are saying that AI is demonic have actually attended to the matter closely and performed due diligence?

To perform due diligence on the claim that Satan is behind AI would involve asking questions like the following:

    • Do we really know that AI is satanic, or is it merely a theory?
    • If a theory, how would we go about performing due diligence to find out if the theory is true?
    • Granted that AI can be used for evil, does that automatically mean that the devil is behind it or that it is satanic?
    • Can a technology that is surrounded by anti-human assumptions be used in a Christian way?
    • If spiritual entities channeled messages to humans through pencil and paper (a demonic practice known as automatic writing), it would not therefore follow that pencils and paper are demonic. By the same token, even if spiritual entities could channel messages through AI, would it follow that AI itself is demonic?

These are questions that must be grappled with before the idea that Satan is behind AI can have any credibility. But those who are promoting superstitious ideas about AI typically avoid this type of careful analysis since it is the nature of superstition to eschew due diligence and critical thinking.

Superstition is a real problem when it comes to digital technology. There is currently a lot of superstition in Silicon Valley, where a growing contingent of technocrats are doing their best to resuscitate the primordial fears and old gods that are the last gasp of a passing order that has been defeated by Christ. As Christians it is easy to get sucked into these superstitions and the attendant climate of fear and irrationality. But at some point theology needs to kick in. Saints and Christian teachers through the ages have reminded us that superstition exists when the devil tricks us into believing he has more power than he actually does. Make no mistake: Satan has power to work through AI, but only the power we give him. There is no actual ghost in the machine any more than there was a wizard in Oz: strip away all the quasi-spiritualism, the mysticism, the spiritual mumbo jumbo that we impute to AI, and all you get is a bunch of wires and silicon chips.

It is also worth noting that the claim that AI is satanic often correlates with a lack of critical thinking about machine learning that, I fear, is comparable to what happened with gay marriage. The same people who, in the early 21st century, were saying “Just shoot the faggots,” are now the ones saying, “It’s none of my business what goes on behind closed doors.” My point is that a lack of critical thinking has a strange way of converting people from one position into its opposite. Watch this happen in real time with AI over the next ten years: the Christians who are now taking a knee-jerk oppositional approach to AI will be the same ones who, a decade down the road, will be giving their children and grandchildren bots to cure boredom.

I am also concerned that believing that the devil is behind AI has real world consequences that are problematic in church and education. Let’s start with church.

In some of the churches I’ve attended over the years, I’ve witnessed people dividing the community by condemning certain objects or practices – from rock music to women wearing trousers to protective face coverings to men having earrings to vaccines – as demonic. This not only divides the church into two groups; it also short-circuits, through a kind of hysteria, the critical debate we need to have about these issues (well, some of them). In the church community I currently attend, there are a number of men who use machine learning in their work. Are we going to start treating these men as if they are tainted and involved in occultism?

What about education? Consider: if we think the devil is behind AI, then we will likely be disincentivized to teach students how to use AI as a research tool, how to identify and evaluate bot bias, and how to exercise wisdom when working with machine learning, including how to interact with and fact-check bot-generated content. On the ground level, this is a disaster waiting to happen. Right now I’m doing consulting for classical educators who are trying to figure out how AI may impact their schools, and as I talk to people in the educational world, one of the things that keeps coming up is that not only are we unprepared for the impact of AI, but we haven’t even taught students how to properly engage with the internet. Often students in classical schools do not have even minimal digital literacy when it comes to best practices for online information retrieval and evaluation, and if this was a problem in the age of the Google search, it will be even more of a problem when AI becomes the primary research tool. In short, Christian educators are not ready for the digital tsunami that is about to hit them. We have a lot of work to do, and fast, yet it is hard to convince people that this work is important or even ethical if they have been convinced that everything about AI is demonic.

As I look at this conversation from a big-picture perspective, it seems to be a microcosm of the larger problem with how Christians typically respond to new technologies. Predictably, the discourse on technology tends to fracture into two groups.

  1. Those who feel the technology (whatever it may be) is not only non-neutral, but is demonic and should be entirely rejected, and who even eschew the process of thinking critically about it to take a “just say no” approach.
  2. Those who feel the technology, whatever it might be, is merely a neutral tool that we can leverage.

Caught in the pincer movement of these two extremes, it’s easy for the church to become divided between naïve optimism and uncritical rejection. But although these extremes may seem like polar opposites, I want to argue that they are actually two sides of the same coin. The alternative to both is to think critically about the new technology, and to reflect on how our tools change how we relate to the world and each other.

Thinking critically about technology is not just a matter of focusing on how a technology can be used, as important as that is. Philosophers of technology have repeatedly warned us that the challenge with any new technology is that people typically become so focused on how it’s being used (i.e., the content coming through the new medium, or the uses for which it is employed) that they neglect to consider how the technology itself changes our view of the world, including how new technologies introduce collective assumptions into society in ways that are often structural, systemic, and precognitive. Our tools are not neutral but bring with them particular understandings of the world. For example, historians have noted how map technology, and later the automobile, changed how humans orient themselves to space and distance. Similarly, clock technology changed how humans think about time (the idea that time is something we can cut up into ever tinier pieces is an idea we owe to the clock). Or again, the technology of the steam-powered engine still influences how we think about human emotion via the popular idea that it is healthy to “vent” (something I discussed here). AI, especially as it enables augmented reality, extended reality, and virtual reality, will likely influence how we think about the world of space and time, and the role that the physical body plays, or does not play, in our identity as human beings.

So knowing all this, when we come to a technology like AI, the question is not whether it will change our way of perceiving the world and ourselves, but how. Yet this very question is in danger of being short-circuited through either a naïve optimism which treats AI as simply a neutral tool, or through an uncritical rejection that dismisses the entire field as demonic and evil.

We see a precedent for this with cell phones. Have you ever noticed that the families who have a total no-technology policy are often the same families who, once they decide to let their teenager get a smartphone, eventually let them operate with no restrictions or only minimal monitoring? This might seem like a paradox, but it makes sense once we understand that naïve technological optimism and uncritical rejection (i.e., “AI is satanic”) are two sides of the same coin. The alternative to both is critical engagement. And that means getting our hands dirty with the nitty-gritty of information literacy instruction.

Critical engagement of the sort I am recommending is hard. It means forestalling our tendency to think we know everything about the implications of a technology without first doing the serious work of research, analysis, and synthesis. It means developing a tolerance for complexity even when simplistic answers seem more emotionally satisfying. It means not dismissing out of hand the things we don’t understand. And above all, it means turning from a black and white approach to asking the more difficult philosophical questions about how a given technology may change our way of understanding the world and ourselves.

That type of engagement takes courage because it involves confronting and engaging with the things we don’t understand when it is easier to take a dismissive approach. As the Wizard of Oz told Dorothy, the Scarecrow, the Tin Man, and the Lion, “You’re running away from something you don’t understand, and just as surely as you run away from it, you run into something else just as bad.”
