The AI Apocalypse is Happening Right Now…but not in the way you think

Person of Interest was an American sci-fi crime drama created by Jonathan Nolan. It aired from 2011 to 2016.

I have to confess, I’m fascinated by artificial intelligence. My favorite TV show, Person of Interest, explores what happens when a machine develops human-like intelligence. I am currently watching through the entire series (for the first time) with my son Timothy.

Why I’m Fascinated by AI

There are a number of reasons for my fascination with AI. One reason is that you can tell a lot about a culture’s unconscious fears by looking at the anxieties that emerge in its stories and myths. In the early 19th century, when people were afraid of the increasing mechanization that followed the industrial revolution, stories like Mary Shelley’s Frankenstein tapped into these latent anxieties. As the 19th century progressed and the theories of Darwin took root in public consciousness, a number of stories began to emerge that gave voice to the fear that the demarcation between man and beast might be porous (Stevenson’s Strange Case of Dr Jekyll and Mr Hyde is probably the most well-known example). Accordingly, I find myself asking whether the current popularity of stories about AI (2001: A Space Odyssey, The Matrix, Person of Interest, etc.) might be giving voice to deep-seated and widespread fears that we have been ceding important aspects of our humanity to digital code.

I’m also fascinated by AI because of how it is changing the way we think of relationships. I won’t say anything about that here, but it is a theme I have explored in my articles “Marriage Between Humans and Robots,” “Customized Communication in a Virtual World,” and “Unbundled Reality and the Anti-Poetry of Pokémon Go.”

Recently I’ve found myself interested in artificial intelligence for another reason, and it has to do with the way we have been changing our environment so that our machines can become–or can appear to be–smarter than they are. Let me explain.

Two Types of AI

Within the discourse on AI, there has been a long-running tension between two perspectives. One perspective is that of the engineer. According to this narrative, AI enables machines to do what users want them to do: perform specific utility functions. An example of this type of AI would be a computer program that can detect anomalies in the stock market, or an algorithm that can rank websites for a search engine, or a computer that can play chess, and so on. This type of AI is what we can call “real AI” because it exists in the real world and is not a fantasy.

The other perspective is the imaginary type of AI, propounded by futurists and those waiting for the so-called “Singularity.” This type of AI involves a range of views, from those who imagine that one day computers might acquire consciousness to those who invest machines with a type of quasi-mystical significance. There are some pretty big names in Silicon Valley who advance various iterations of this fanciful perspective. This is the type of AI that is gaining such traction in our stories and myths, and for many people it has become synonymous with the very term AI.

In her book Artificial Unintelligence: How Computers Misunderstand the World, technology journalist Meredith Broussard relates a story showing the clash between these two perspectives (which she elsewhere describes in terms of “narrow AI” and “general AI”).

One excellent illustration of the confusion between real and imaginary AI happened to me at the NYC Media Lab’s annual symposium, a kind of science fair for grownups. I was giving a demo of an AI system I built. I had a table with a monitor and a laptop hooked up to show my demo; three feet away was another table with another demo by an art school undergraduate who had created a data visualization. Things became boring when the crowd died down, so we got to chatting.

‘What’s your project?’ he asked.

‘It’s an artificial intelligence tool to help journalists quickly and efficiently uncover new story ideas in campaign finance data,’ I said.

‘Wow, AI,’ he said. ‘Is it a real AI?’

‘Of course,’ I said. I was a little offended. I thought: Why would I spend my day demonstrating software at this table if I hadn’t made something that worked?

The student came over to my table and started looking closely at the laptop hooked up to the monitor. ‘How does it work?’ he asked. I gave him the three-sentence explanation… He looked confused and a little disappointed.

‘So, it’s not real AI?’ he asked.

‘Oh, it’s real,’ I said. ‘And it’s spectacular. But you know, don’t you, that there’s no simulated person inside the machine? Nothing like that exists. It’s computationally impossible.’

His face fell. ‘I thought that’s what AI meant,’ he said. …He looked depressed. I realized he’d been looking at the laptop because he thought there was something in there–a ‘real’ ghost in the machine.

AI and Archetypal Fears

Mr. Finch is a deeply conflicted tech genius who struggles with the tension between wanting to use technology to control the world and thus help people, and the forces of chaos unleashed by his well-intentioned control.

When I say that the futurist perspective on general AI is “imaginary,” I don’t mean to be dismissive. It is “imaginary” in the sense that a myth is imaginary while revealing deeper truths about us as human beings. (I discuss this mythic role of imagination in my earlier article ‘The Sacramental Imagination.’) The imaginary perspective on AI is tapping into a primal angst that goes back to the most primitive human communities. Human beings have always existed with a tension between the impulse to dominate the forces of material nature and a corresponding angst that our control of the world may release hostile forces that need to be pacified. This is one of the reasons that libations and sacrifices have always been associated with effective control of the world, especially when that control begins to break down, as it does during times of famine or flooding. In traditional societies, the role of the prophet or witch doctor helped to maintain this precarious balance between the human need to exert domination over the world and the human need to pacify the forces beyond or within the world that may exact recompense. Even in our desacralized and materialistic society, we retain a collective memory of this tension, so that even as we exploit the world with technology, we fear the coming recompense from forces outside our control. This fear finds expression in judgment narratives and doomsday scenarios, which include everything from angst about a coming global-warming catastrophe, to overpopulation scares, to the post-human singularity in which technology displaces anthropology (which, for practical purposes, does function as a kind of judgment narrative, as Josiah Trenham has shown).

The character of Root, played by Amy Acker, is a psychopathic hacker who finds a way to adjust Finch’s invention for her own chaotic ends.

One of the reasons I like Person of Interest so much is that it taps into these primordial themes. The archetypal tension between the desire to control the forces of chaos (which, by the way, is a central Biblical motif that goes back to mankind’s earliest vocation) and the fear that such control might unleash forces beyond our control is embodied in the character of Mr. Finch. Mr. Finch is a deeply conflicted tech genius who struggles with the tension between using technology to help people (via a self-learning surveillance machine he has created) and the forces of chaos that his well-intentioned impulses have unleashed. In Person of Interest, chaos is represented by the female character Root, a murderous psychopathic hacker who finds a way to adjust Finch’s invention for her own chaotic ends. If Finch is the archetype of the Apollonian male whose desire for order leads to paranoia, Root is the Dionysian female. In pursuit of a Kurzweilian utopia in which machines displace humans (whom she dismisses as simply “bad code”), Root works to undermine the paternalistic constraints Finch has programmed into his AI machine.



AI and Machine Learning

Back to the two types of AI.

Both types of AI (the real type vs. the imaginary type, or narrow AI vs. general AI) involve something called “machine learning.” The very term “machine learning” implies technological sentience, but this is based on a linguistic confusion. I quote again from Meredith Broussard’s book Artificial Unintelligence:

Meredith Broussard is a data journalist whose new book, Artificial Unintelligence, came out earlier this year.

As the term machine learning has spread from computer science circles into the mainstream, a number of issues have arisen from linguistic confusion. Machine learning (ML) implies that the computer has agency and is somehow sentient because it ‘learns,’ and learning is a term usually applied to sentient beings like people (or partially sentient beings like animals). However, computer scientists know that machine ‘learning’ is more akin to a metaphor in this case: it means that the machine can improve at its programmed, routine, automated tasks. It doesn’t mean that the machine acquires knowledge or wisdom or agency, despite what the term learning might imply. This type of linguistic confusion is at the root of many misconceptions about computers.

No matter how many cool chess moves a robot can learn, it will never learn enough moves to suddenly be capable of taking over the world.

How does machine learning work in practice? Well, let’s consider the case of a computer program that can win at chess. The reason the computer can win at chess is not because an engineer has programmed the computer to know how to respond to every possible move. Rather, the programmer has created an algorithm that includes within itself a mechanism for self-improvement, so that the program can continually update itself every time it encounters new information. This is why a computer chess program can lose, but can never lose in the same way twice. The idea of computer code being programmed to evolve may sound like the sci-fi type of AI, but it remains firmly tethered within the engineering perspective on machine learning. No matter how smart a self-learning algorithm might become at playing chess, it will never be smart enough to suddenly burst the banks of its automated utility function. No matter how many cool chess moves a computer can “learn,” it will never learn enough moves to suddenly be capable of taking over the world, or experiencing desire, or having agency, or creating utility functions for itself outside its programmed intentionality. We see this last point illustrated in the fact that the most advanced chess-playing computers cannot even beat a four-year-old at tic-tac-toe, unless of course a human programs a new utility function into the code.
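To make that concrete, here is a minimal sketch of “machine learning” in this narrow, engineering sense: a small program that improves at tic-tac-toe by nudging a value table toward the outcomes of its own games. This is my own illustration, not anything from Broussard or any chess engine, and every name and parameter in it is an assumption chosen for clarity. The point is simply that all the “learning” consists of is a numeric table being adjusted toward one fixed, programmed goal.

```python
import random
from collections import defaultdict

# The eight ways to win at tic-tac-toe (board cells are indexed 0-8).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, otherwise None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def position_after(board, move, player):
    """Key for the position that results from playing `move`."""
    nxt = board[:]
    nxt[move] = player
    return ''.join(nxt)

def play_one_game(values, epsilon=0.1, alpha=0.2):
    """Play one game against a random opponent, then nudge the learned value
    of every position the learner visited toward the final outcome.
    This update rule is the entire 'learning' mechanism."""
    board = [' '] * 9
    visited = []                      # positions reached by the learner ('X')
    player = 'X'
    while True:
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            reward = 0.5              # draw
            break
        if player == 'X':
            # Epsilon-greedy: usually pick the move leading to the position
            # with the highest learned value; occasionally explore at random.
            if random.random() < epsilon:
                move = random.choice(moves)
            else:
                move = max(moves,
                           key=lambda m: values[position_after(board, m, 'X')])
        else:
            move = random.choice(moves)   # the opponent plays randomly
        board[move] = player
        if player == 'X':
            visited.append(''.join(board))
        if winner(board) is not None:
            reward = 1.0 if winner(board) == 'X' else 0.0
            break
        player = 'O' if player == 'X' else 'X'
    for state in visited:
        values[state] += alpha * (reward - values[state])
    return reward

values = defaultdict(lambda: 0.5)     # learned value of each position for 'X'
wins = sum(play_one_game(values) == 1.0 for _ in range(20000))
print(f"games won during training: {wins} / 20000")
```

After a few thousand games the program plays noticeably better than chance, yet it will never do anything other than fill in that table: the utility function (“win this game”) was supplied entirely by the programmer.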

This isn’t to say that there couldn’t be situations where the real type of narrow AI might go terribly wrong. We could imagine that a machine designed to eliminate spam email might start killing human beings as the most effective way of eliminating spam messages. Such a scenario is unlikely, but if it did happen it would ultimately be traceable to user error (a mistake in the code), not to the digital code having acquired agency, and certainly not to any quasi-transcendent anthropomorphic qualities within the machine itself. In fact, it is precisely because machines will always remain stupid that we need to be careful when defining their utility functions, especially when the machine has access to firearms. Whenever we outsource decision-making to our machines (e.g., “how should the self-driving car respond when faced with a choice between crashing into a boy and his dog, or two elderly women?”), there are always ethical implications, and it is possible for human beings to make mistakes. In fact, human beings make so many mistakes with their machines that it’s easy to begin believing that these machines might develop hostile intentions.
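As a toy illustration of why such a failure would be a specification problem rather than an outbreak of machine agency, consider the sketch below. It is my own made-up example (the policies and numbers are invented): an optimizer handed the naive objective “minimize spam delivered” cheerfully selects a policy no user would want, and the fix lives in the objective the human wrote, not in the machine.

```python
# Invented example of utility-function misspecification; the policies and
# scores are made up for illustration, not drawn from any real system.

# Each candidate policy: (name, spam messages delivered, legitimate mail delivered)
policies = [
    ("deliver everything",     100, 100),
    ("filter known spammers",   20,  98),
    ("deliver nothing at all",   0,   0),
]

def naive_objective(policy):
    """Score only what the programmer literally asked for: minimize spam."""
    _, spam, _ = policy
    return -spam

def better_objective(policy):
    """Also count the legitimate mail the user actually cares about."""
    _, spam, legit = policy
    return legit - 2 * spam

print("naive objective picks:  ", max(policies, key=naive_objective)[0])
print("better objective picks: ", max(policies, key=better_objective)[0])
# naive objective picks:   deliver nothing at all
# better objective picks:  filter known spammers
```

The “deliver nothing at all” answer is perfectly correct by the code’s own lights; the mistake belongs to whoever defined the utility function.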

Enveloping the World for Our Machines

To sum up the ground I have covered so far: there has been a tension between two visions of AI. One perspective is the real perspective of the engineer, where AI is understood in a narrow sense as applying to the specific utility functions embedded in programs. The other perspective is the fanciful perspective of the futurist, which sees AI as becoming general, anticipating that computer code might one day begin creating utility functions for itself independent of human intentionality.

At this point, I have to pause and ask a penetrating question. What if this binary between the engineering perspective and the futurist perspective has been blinding us to the advance of a more sinister and subversive side of AI that neither perspective has the categories to adequately address?

That was a question I found myself asking after watching the video of Luciano Floridi’s 2013 talk for the Innovation Forum. Floridi is an Italian philosopher who has advanced influential theories in the metaphysics of information.

In this talk, Floridi gives a nod to this long-running tension between these two perspectives on AI, but then he makes a startling suggestion. What if our machines will always remain stupid, but we have gradually been changing our environment to make them seem smarter? What if our entire society is progressively being structured in such a way so that our machines can flourish? Floridi comments,

“Meanwhile [as the debate between the two types of AI played out], what was happening is that we were changing the environment to make sure our…stupid technology was able to work in the new environment. So instead of developing (because we could not) intelligent (real intelligent) applications, we changed the environment to make it fit the stupidity of our machines.”


To illustrate what he means by changing our environment to fit the stupidity of our machines, Floridi gives the example of a dishwasher. You can’t simply build a robot to stand at the sink and do dishes. Okay, you could, but it would be extremely expensive and the robot would do a very bad job, just as robots are very bad at folding laundry. A human will always do a better job. However, by creating a new environment for the dirty dishes (namely, the environment of the dishwasher), you enable the machine to get the job done effectively. A dishwasher is a three-dimensional space in which a “robot” (in the most general sense of the term) can accomplish a task that would have been impossible within a natural (read: human-customized) environment.

The same applies to the machines that create cars. Floridi points out that you don’t just release a robot down the street and say “build me a car.” Instead, you build an entire environment around the robot that enables it to have the right interactions. One way of talking about this type of environmental customization is to say that we have “enveloped” the world around our machines. By creating enveloped environments, we allow our stupid machines to appear smart and to flourish. As Floridi says, “You build a world around the machine. The stupid idea is to have the machine do it like we do.”

A dishwasher or a car factory is a micro-environment that enables machines to perform specific functions. But Floridi goes on to explain that gradually we have been enveloping our world, so that the macro-environment in which we exist is becoming customized for our machines to flourish. Through this process of machine customization (not customizing machines to our world, but customizing our world to machines), technology that is unintelligent begins to appear intelligent. This process of customizing the world for our machines leads to a state of affairs that Floridi calls the “infosphere.” Our world has been enveloped so that our machines can do what machines do best; consequently, wireless information becomes ubiquitous, and human beings can work with it. But wireless information is not just “everywhere” in a quantitative sense; the ubiquity of data is qualitative to the degree that it envelops how we perceive the world. As Floridi explains elsewhere, information and communication technologies are “promoting an informational interpretation of every aspect of our world and our lives in it.”

We have accommodated our environment to our machines to such an extent that in many parts of the urbanized world today a person can no longer flourish without having the right relationship to the “infosphere” around him. Our employability, our ability to borrow money, and even our social capital are largely contingent on how we stand in relation to data in the cloud. (I have never lain awake at night worrying about the weather like my ancestors did, but I have lain awake thinking about my relationship to data in the cloud.) The machines that store this data start to seem very important and smart to us, but only because we have enveloped our environment around them. As Floridi puts it in his book, “The digital online world is spilling over into the analogue-offline world and merging with it,” leading to “the informization of our ordinary environment.”

Floridi takes this in some problematic directions, as his anti-metaphysical metaphysics holds him back from drawing the right conclusions. In Floridi’s reading of our situation, human beings occupy the same ontological space as algorithms and information. Humans are simply informational organisms among many others. You, me, my computer—we’re all just informational organisms. Anthropology and technology exist on the same ontological plane. Floridi puts this into an elegant (and a little too tidy) retelling of human history in which the digital revolution is the next stage in a process of self-understanding that runs through Galileo, Darwin, and Freud.

I reject this. But I am intrigued by the idea that we are adapting our environment so that our machines can flourish and control us. What are the implications of living in an environment that has been customized for machines, and which has the effect of making our stupid machines appear intelligent?

The Real Problem with AI…that hardly anyone is talking about

We desperately want smart machines, and we desperately want to believe that our machines can rise to the level of human intelligence. I get that—it is the grownup version of what I felt as a child when I desperately wanted to believe my Teddy Bear could understand me. In fact, we so desperately want to believe there is a ghost in the machine that, when faced with the impossibility that our machines will ever rise to our level, we settle for second best and lower ourselves to their level (which, ironically, Floridi himself does when he defines human beings as simply one informational organism among many).

Could that be the real AI apocalypse that we should be worried about? Not machines taking over and acquiring agency, and not machines evolving their own intentionality, but human beings creating a world customized for the flourishing of machines? Did we cede power to our machines as soon as we began customizing our world so that they (these stupid machines) could appear far smarter, more necessary, and more indispensable than they actually are? Within machine-oriented infrastructures, new problems come to exist that never existed for our ancestors, and to solve these problems we have to capitulate to even deeper levels of machine-dependence. One example of this is the move toward digital identification, which seeks to solve a problem created by the unnatural condition of human beings living in the infosphere (i.e., an environment customized for machines). Microsoft journalist Jason Ward observes that “Participation in the modern economy, the ability to buy and sell, attain employment, healthcare, social services and more are virtually impossible without a digital identity.” Microsoft’s solution to this problem is to partner with governments to funnel the entire world’s population into infrastructures that allow each and every person to have a unique “digital identity.”

As our identities and relations become managed by and mediated through machines, our relationship to the natural world recedes into anachronism. This was impressed upon me when I read that a new edition of the Oxford Junior Dictionary is changing the words deemed important for modern childhood. In place of words that are relevant to a child’s relation to the natural world (specifically, acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture, and willow), the publishers decided to include words that are relevant to a child’s relationship to the electronic world (specifically, attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player, and voice-mail). Reflecting on these changes, Robert Macfarlane commented that “Children are now (and valuably) adept ecologists of the technoscape, with numerous terms for file types but few for different trees and creatures.”

We shouldn’t be too hard on the editors of the Oxford Junior Dictionary. They are simply reflecting the widespread drift towards a world customized for machines. The problem is that this onward march to customize human environments for machines is no more conducive to our flourishing than if we were to force people to live in a world built around the needs of elephants, or wolves, or dolphins. Yet it is “smart machines” (AI in the narrow/real sense) that are making possible this deeply anti-human state of affairs. This, I submit, is the AI invasion we should be worried about. Instead of being concerned about machines taking on human qualities, we should be concerned about humans taking on machine qualities. If you think I’m exaggerating, take a peep at the literature about people who want to grow up to become machines (for starters, see here and here). Even for those who don’t harbor post-human aspirations, life in the infosphere orients us to despise those characteristics that make us most human and which separate us from our machines: characteristics like altruism, wonder, questioning, inefficiency, empathy, and vulnerability.

Even Floridi, who celebrates these developments as ushering in a new epoch of self-understanding and who views these changes through the lens of technological determinism, offers an uncharacteristic nod to the drawbacks of the emerging infosphere:

“The risk we are running is that, by enveloping the world, our technologies might shape our physical and conceptual environments and constrain us to adjust to them because that is the best, or easiest, or indeed sometimes the only, way to make things work…. The deepest philosophical issue brought about by ICT’s [information and communication technologies] concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other.”

The process of customizing our world for our machines has been a recurring theme throughout the history of technology. One very clear example is the invention of the automobile. When the automobile was invented, it was widely believed that the new mode of transportation would save time for ordinary people, especially women. While the ease of travel did open up new opportunities, its promise as a time-saving mechanism never materialized. This is because the infrastructures of economics, townscapes, and social life simply adapted themselves to the invention and the expectations that came in its wake. (I have discussed this in my post ‘The Ironies of Household Technology.’) When we consider all the changes that came as a result of cars (the emergence of large department stores, zoning requirements, city planning, new expectations for buyers and sellers, and on and on), we can truly say that our lives are controlled by cars even if we don’t own one. There is nothing surprising about this, for throughout history humans have been accommodating their internal and external environments to their inventions in a symbiosis that causes the inventions to take on a new importance and indispensability.

In much the same way, we are now accommodating our internal and external environments to computers that produce, record, and store data. Instead of just making our computers user-friendly, we have made our world computer-friendly. The result is that data has taken on a new importance and indispensability. Yet we have to ask: are we sleep-walking into a world where our computers are controlling us? In customizing our environment to a data-dependent state of affairs, are we in danger of losing the things that are most important to our humanity?

To pose these problems in terms of AI, a final question that it might be helpful to ask is this (and here I am paraphrasing the last line of Nicholas Carr’s article ‘Is Google Making Us Stupid?’): as we come to rely on computers to mediate our understanding of the world, is our own intelligence in danger of flattening into artificial intelligence?

