In a recent article on my Substack, I observed that if a killer robot walked into a city and began turning humans into paperclips, we have a mental map for how to think about that kind of problem. Yet we are less prepared for diffuse and non-attributable system changes that, over time, may subtly problematize uniquely human forms of intelligence, virtue, and wisdom. There is some evidence, I argued, that in a data-centric world human thought, imagination, and insight are increasingly treated as mere surplus input, even liabilities that must be overcome. That is worse than a killer robot, because we don’t have a mental map for that kind of threat.
Here I am echoing warnings raised by Matthew Crawford in a 2023 First Things lecture titled “Antihumanism and the Post-Political Condition.” Crawford suggested that antihumanism manifests itself in the idea that humans are merely inferior versions of computers. In an environment increasingly customized for computers, humans become the weak link in the system, since they are not adapted to the type of clean inputs that automated systems require. He points to what happened when one of Google’s self-driving cars found itself at a four-way stop. Being a robot, the vehicle was unable to communicate with the other drivers. All the factors that go into negotiating a four-way stop—eye contact, social intelligence, slight movements of the car that signal a willingness to go—were invisible to the self-driving vehicle. Not knowing what to do, the car froze and broke down. But the most telling part of this incident is what happened afterwards. When the engineer in charge of the vehicle was asked whether he had learned anything from the incident, he said that humans need to stop being so idiotic. Crawford invites us to think about the implications of such a statement. When the engineer said humans need to become less idiotic, he meant they need to behave more like computers or, as Crawford put it, become “more legible to systems of control and better adaptive to the need of the system for clean inputs.”
In short, the human animal must be nudged towards compliance with the machine so as not to disrupt the type of inputs required by automated systems. In My Fair Lady, Professor Higgins famously declared, “Why can’t a woman be more like a man?” In the world of digital totalitarianism, the question becomes, “Why can’t a human be more like a computer?”
If these misanthropic assumptions become sedimented in our institutions and way of life, they would clearly represent a threat to human flourishing, yet it is a threat not easily recognized. We don’t have the same mental map for that type of threat as we do for Terminator-type scenarios.
From my Substack article:
When new inventions are introduced, most of the discourse centers on the utility value of the tool rather than its long-term impact on our understanding of the world. All too often we fall into the age-old trap of assuming our tools are simply neutral vehicles for getting things done. It is easy enough to understand why we become victims of this faulty thinking: the utility value of a new technology is usually immediate and obvious whereas the long-term implications often take years to realize and thus initially seem remote and abstract. Yet from the standpoint of history, these long-term implications exercise a more lasting and formative impact.
Knowing this, when we come to a technology like AI, the question is not whether it will change our way of perceiving the world and ourselves, but how. In grappling with this question, a good place to start might be to consider how AI has already begun subtly to change how we talk and frame our questions about the world. Consider the Google search engine, which is a rudimentary type of AI. We are used to asking questions of Google and have learned to eliminate unnecessary words or phrases that might impede the efficiency of the search. Right now, most of us can switch back and forth between how we speak to computers and how we speak to humans, but what if AI becomes so integrated into our lives that the way we speak to machines becomes normative? Or, to take it one step further, what if the way we think, and frame intelligence itself, comes to be modeled on the machine’s way of doing things?
Further Reading