Government by Bot

During the period known as “the Enlightenment,” many Europeans entertained a feverish enthusiasm for overturning existing canons of political thought in favor of theories and methods rooted in the axioms of human reason. In the new climate of opinion, anything inherited from the past was suspect; instead, society was to be rebuilt from the ground up according to universal philosophical laws, often believed to carry the certainty of arithmetical precision.

This new mood was reflected when French revolutionaries attempted to reconstruct their society along purely rationalist lines, with reforms that included administrative centralization, the abolition of regional customs, a ten-day week, and a new political system based ostensibly on social contract theory.

The British statesman Edmund Burke looked askance at the French experiment, asking, “Is every landmark of the country to be done away in favor of a geometrical and arithmetical constitution?”

Even after the disaster of the French Revolution and the various correctives offered by the 19th-century romantic movement, the rationalizing tendencies of the Enlightenment continued to exercise a magnetic pull over those in positions of government and social management throughout Europe. However, following the industrial revolution and new nihilistic trends in philosophy, social planners gradually ceased looking to philosophical laws as the touchstone of rationality and instead sought political certainty in the statistical movement of the 19th century.

In the early 20th century, while the fetish for politics-by-statistics continued apace, a growing expectation arose that utopia might come from the science and engineering quarter. This zeitgeist temporarily gave rise to an American political party, known as “The Technocracy Party,” that advocated handing all government over to technical experts (see my Salvo article, “Technocracy Creep: Technology and the Utopian Impulse”).

Now the world is being treated to another ideology from political rationalists eager to redesign society around methods promising geometrical and arithmetical precision. But this time the fetish centers on artificial intelligence. What the French revolutionaries, the British statistical movement, and the American technocracy party merely dreamed of achieving can now be realized in the true political rationalism of government by bot. Or so the narrative goes, but you can probably guess that I don’t share the utopian optimism.

The utopian optimism for government-by-bot was on full display last Tuesday when the technologist and venture capitalist Marc Andreessen published an article in The Free Press. Andreessen argued that AI will eventually offer god-like love and wisdom to every child, CEO, and government official.

Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love….

Every leader of people—CEO, government official, nonprofit president, athletic coach, teacher—will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

As creepy as it would be to have robots acting as surrogate Christ figures dispensing “infinite love” to children, it is perhaps even more dangerous to have robot mentors offering infinite wisdom to lawmakers.

What does Andreessen think will be the result of government officials having access to this type of intelligence augmentation? In a word, utopia. Again from his Free Press article:

Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and result in a new era of heightened material prosperity across the planet.

Andreessen even looks forward to a time when AI will help nations wage war, arguing that “military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.”

These dreams of heaven-on-earth come with a price tag: we must hand the management of the world over to intelligent machines and their handlers.

Andreessen is not alone in hoping for this future. Among political activists, think tanks, and various NGOs, there is a growing interest in leveraging data science to bring to policy the precision associated with mathematics. This type of hubris is animated by a naïve and almost childlike trust in the power of data crunching and artificial intelligence.

For example, the Center for Public Impact, a think-tank connected with the Bill and Melinda Gates Foundation, anticipates AI disrupting our political systems through better and more efficient decision-making mechanisms:

Effective, competent government will allow you, your children and your friends to lead happy, healthy and free lives…. It is not very often that a technology comes along which promises to upend many of our assumptions about what government can and cannot achieve… Artificial Intelligence is the ideal technology to support government in formulating policies. It is also ideally suited to deliver against those policies. And, in fact, if done properly policy and action will become indistinguishable… Our democratic institutions are often busy debating questions which – if we wanted – we could find an empirical answer to… Government will understand more about citizens than at any previous point in time. This gives us new ways to govern ourselves and new ways to achieve our collective goals.

The AI platform Turbine Labs offers tools for politicians to help guide their decisions. On their website they explain that

It is crucial that we equip political decision-makers with the right data and, in turn, that those decision-makers use that information to formulate the most comprehensive and inclusive policies.

Bot-driven politics is already a reality and is being felt at the highest levels of geopolitics. In August 2022, authors Henry Farrell, Abraham Newman, and Jeremy Wallace wrote a piece showing that overreliance on Big Data instills in dictators a false confidence, thus making the world more dangerous. But this is not just a temptation for dictators. In an article published in April 2023 in The American Mind, Robert C. Thornett recounts how today’s world powers, including the United States, are exchanging diplomacy and direct communication for the illusory security of data-crunching. He gives examples of how this has led to numerous international blunders.


To be sure, AI sometimes really can help inform policy as one among a range of information-gathering mechanisms. But the temptation is to begin treating pattern recognition as a replacement for traditional governance, diplomacy, and relationship-based intelligence. It isn’t hard to understand why we succumb to this temptation: after all, AI promises a cleaner, more scientific approach to governance than the complex and messy realm of real-world negotiation, prudential reasoning, and relationship-based intelligence gathering. But it doesn’t require a dystopian imagination to anticipate how these trends could go terribly wrong. Consider scenarios like the following:

  • The bot declares that a preemptive strike would be 70% more effective than diplomacy.
  • Artificial intelligence identifies that if organizations with more than 100 employees are legally required to limit the hiring of white males to 20 percent of the workforce, workplace harassment is likely to decrease.
  • Computers discover that giving tax breaks to families who attend church online rather than in person will lead to a reduction in carbon emissions.

AI and the Crisis of Political Legitimacy

In our current crisis of authority, in which there has been a breakdown in the kind of trust required for political legitimacy, it is tempting to look to data science, and even mechanized decision-making, as an alternative. For example, an article published in the European journal Data & Policy suggests that AI governance is the solution to the various crises of political legitimacy facing modern nations:

A lack of political legitimacy undermines the ability of the European Union (EU) to resolve major crises and threatens the stability of the system as a whole. By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence. In particular, artificial intelligence (AI) systems have the potential to increase political legitimacy.

It is, of course, entirely possible that America will never experience the kind of top-down governance by machine that these social reformers are agitating for. Given our unique social and political context, it is perhaps more likely that the corporate sector will be a primary driver in bringing technocracy to the United States. Indeed, there are already strong indications that corporate elites are interested in using surveillance capitalism to mainstream new social norms.

But whether technocracy comes through the direct rule of politicians or the de facto rulership of corporate elites working in tandem with lawmakers, machine-driven governance threatens to undermine the kind of legitimacy on which authority depends. Indeed, far from being a solution to the current problems, bot-governance will almost certainly perpetuate the crisis of legitimacy by making the rationale behind policy inscrutable.

In a democracy, rulers are accountable to the people, and this accountability means, among other things, that rulers can be called upon to explain their decisions. But what happens to democracy when policy comes to have the same inexplicability as computerized chess? In professional chess, commentators will often say things like, “the computer is telling us that such-and-such would have been a better move, but we’re not sure why.” The computer’s reasoning is opaque; it is what people often describe as a black box. That opacity doesn’t really matter in professional chess, but in governance, one of the criteria for rationality is explainability. As Matthew Crawford wrote in American Affairs,

Institutional power that fails to secure its own legitimacy becomes untenable. If that legitimacy cannot be grounded in our shared rationality, based on reasons that can be articulated, interrogated, and defended, it will surely be claimed on some other basis. What this will be is already coming into view, and it bears a striking resemblance to priestly divination: the inscrutable arcana of data science, by which a new clerisy peers into a hidden layer of reality that is revealed only by a self-taught AI program—the logic of which is beyond human knowing.

I think Crawford is correct. When used as a governing mechanism, the black-box character of AI outputs can cause people to feel shut out. It isn’t hard to see how this could deepen the helplessness people already feel when trying to correct mistakes in a credit report or recover a suspended account. In reality, the attempt to construct a society on rationalistic grounds only creates new forms of irrationality, savagery, and a type of political gnosticism. As I warned in 2022,

The new technocratic order not only has its own version of demonic possession, but boasts a caste of priest-kings through which the rule of our robot overlords is continually mediated. These rulers, who echo ancient god-kings in their aspirations, are the ones perceived to truly understand the inner workings of the digital ecosystem in which all of us now live and have our being. With data analysts as their soothsayers, these god-kings gain access to the esoteric knowledge inaccessible to the rest of us—knowledge through which they can control us. They then communicate this gnostic knowledge to the masses in vagaries that approach, but never quite achieve, full coherence.
