The Artificial God

(Image: Hellio42 / Pixabay)

Modern politics began in the seventeenth century with a technological ambition: to build an “artificial man, though of greater stature and strength than the natural,” an “automaton that moves by springs and wheeles as doth a watch.”1 This technological ambition immediately becomes a theological ambition. Writ large, this artificial man becomes the great Leviathan, “the mortal god, to which we owe under the immortal God our peace and defence.”2

Thomas Hobbes’ desire to submit ourselves to an artificial god of our own construction is now being realized beyond his wildest dreams on a technological rather than a political plane. Paul Kingsnorth relays that when asked whether he believed that God exists, the tech mogul and futurist Ray Kurzweil responded, “I would say, not yet,” and the transhumanist Elise Bohan assures us that it won’t be long, that “we are building God.”3

Divergent opinions about AI

There are, of course, wildly divergent opinions about where this artificial god will lead us, ranging from the apocalyptic to the variously reassuring. On the one side there are those, including many who are helping to bring the brave new world of AI about, who prophesy that AI will destroy humanity or, at the very least, make human beings obsolete. Yuval Noah Harari, in his bestseller Homo Deus, seems almost giddy at the prospect. That AI will have enormous effects on employment and reshape entire industries seems beyond doubt. Many people involved in so-called brain work, who labor behind a laptop, seem all but certain to suffer the same fate as assembly line workers and craftsmen before them.4

On the other side are some who maintain that AI is simply a tool that extends human agency by magnifying its power. You might call this the Boromir paradigm for thinking about AI and technology more generally, a position immediately familiar to fans of The Lord of the Rings. AI, like any other technology, is merely a tool or an instrument, neutral in itself and subordinate to human decision, something essentially outside us that leaves its users unaffected. It’s how we use it that matters.

The important thing, as Boromir argued at the Council of Elrond, is that this great power be used by good people for good and moral purposes instead of evil ones. And, of course, we imagine that we are the good people with the good purposes. Like Boromir, we think we can handle it. It is difficult to see how we can resist a technology that is so rapidly interposing itself between us and the reality outside us, when what Anton Barba-Kay calls the “frictionless immediacy” of lesser digital technologies has made once normal human activities—the face-to-face encounter with the teller at the bank, the trip to the local brick-and-mortar store, even picking up the phone and making a call—seem like a chore.

Others offer a different kind of assurance. They maintain, with some justification for anyone who has used OpenAI or ChatGPT, that the capabilities of large language models turn out to be relatively modest and mediocre and that AI will end up as just another item on the list of science-fiction fantasies that failed to live up to the breathless techno-hype, like genetic sequencing and stem cell technology. Anyone who has ever experienced the so-called customer service of automated phone receptionists, banks, airlines, or even helpless servers in bars and restaurants—anywhere that human agency has been surrendered to the algorithms of an automated system—will have no trouble believing in the limited capacities of AI.

Nevertheless, you might have already guessed that I don’t find such reassurances very reassuring, partly because I don’t find the surrender of human agency to something dumber than us very comforting, but more substantially because I don’t think the question and the threat posed by AI finally turn on what AI ultimately can and can’t do. The mediocrity of AI might well make it more dangerous.

The problem with modern technology

But there is a deeper reason for my agnosticism about the future. It stems from a structural feature of modern technology that is not exclusive to AI, though it may well find its exemplary expression there.

Modern technology differs from ancient techne in many ways: in the depth of its penetration into nature, including our own, in the novelty of its objects and products, in the dynamic and viral replication of its consequences, and in its role in our lives as the collective raison d’etre that gives our society its social form. Who we are and what we have actually become resemble the world of Francis Bacon’s New Atlantis much more closely than the world envisioned by Hobbes’ Leviathan, or even our own Constitution.

Modern technology magnifies human power beyond a human scale, extending our power to act beyond our power to see, think, or judge. Which means that we don’t know what we’re doing. We lay hold of the very foundations of reality without thinking very deeply or honestly about them. The proliferation of technological means exceeds the articulation of ends; as Hans Jonas puts it, “discovery and invention precede not only the power but the very will for what they made possible—and imposed the unanticipated possibilities on the future will.”5 It is only after we have acquired or ensnared ourselves in some new form of technological power that we discover what it is for, by which point we can only partially glimpse what it means.

While no one can predict with certainty what the future holds, we do know a thing or two about false gods. After all, we have had a lot of experience with them. We know that false gods are liars and despots—they promise power and deliver enslavement. This is a truth as old as the Garden of Eden, which is to say that it is eternal. St. Augustine, in the fifth century, in his famous City of God, speaks of the “city of this world, which holds nations in enslavement, but is itself dominated by that very lust for domination.”

The acute form of this problem presented by modern technology is implicit in what we have said already about our power exceeding our foresight. The smartphone illustrates the problem. Twenty years ago, none of us knew we needed an iPhone, and no one asked our permission before introducing this revolutionary technology into the stream of history. And yet this technology has changed us while we were not looking, to channel a phrase from Anton Barba-Kay’s wonderful book, A Web of Our Own Making. The digital revolution has been almost instantaneous, and it has been systematic, penetrating, and transformational in every sphere of life.

And while you may presently choose not to avail yourself of a smartphone, you are not free from having to choose, and your choice, one way or the other, is not without great cost. The choice to use a smartphone, tweet, or be on other social media is increasingly a choice about whether to be a person in the virtual public square that has replaced the real one. And, of course, we know what most people choose, or rather, how most people are overtaken before they make a deliberate choice. That I can access more information than any human being can digest from some kind of virtual cloud by wiggling my thumb, that AI anticipates my thoughts or completes them before I go to the trouble to think them—that it seems to “think” them on my behalf and at every turn—makes it almost irresistible.

To the extent that we can still choose, or even conceive of choosing, it is because we still remember life before this technology. We don’t imagine such a choice where automobiles, air travel, or electricity are concerned. Soon, there will be no one living on earth who remembers a time before smartphones, no one who is not conditioned to apprehend the world, interact with others, find their way around, buy and sell, or even do their most basic thinking and remembering through a screen. Its conquest will be invisible to those who are conquered by it.

As an extension of the “web of our own making,” AI is already a great “success,” whatever its limitations may be. The problem is that if nothing succeeds like success, nothing also entraps like success. C.S. Lewis makes the point powerfully in The Abolition of Man when he explains what “man’s power over nature must always and essentially be” and why this libido dominandi terminates not in freedom, but in slavery. Lewis writes, “if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them.”6

We inevitably place ourselves in the position of Lewis’ protagonists, imagining ourselves as the ones who impose our will on future generations through technology. We do not imagine Lewis’ future as our present. As AI encroaches further and further upon our lives, interposing itself between us and the world, governing the rhythms of life, finishing and indeed composing our thoughts for us, we might do well to imagine ourselves on the receiving end.

False gods steal your soul

Certainly there are grounds for worry about what will happen to human expertise when AI becomes the repository of knowledge and decision making, especially when we have seen firsthand how GPS causes one’s powers of observation and sense of direction to atrophy, how the storage capacity of our devices contributes to the erosion of our memories, and how it is increasingly difficult for a distracted people even to complete the reading of a book. But I worry more about the effects of our mundane encounters with AI as it encroaches upon us and penetrates ever deeper into our lives.

What happens to our minds when they are subjected to an ever-present but invisible machine that encroaches on every facet of our lives, a machine that remembers, anticipates, and finishes our thoughts, a machine that presents us with an unrelenting and uninvited offer to spare us the labor not only of thinking but of learning, of undergoing the interior transformation that comes with the mastery of a subject matter, that shields us from learning how to reason or from an encounter with reality? What is left to the human being once AI substitutes itself for the most basic intellectual operations that underlie all our thinking: attending, reading, remembering, and basic discursive reasoning?

Such questions bring to mind another thing we know from our long experience with false gods: they steal your soul. I often think of the scene from the movie O Brother, Where Art Thou? in which young Tommy Johnson is chastised for selling his soul to the devil in exchange for mastery of the blues guitar. Tommy’s response: “Well, I wasn’t using it.” Hans Jonas observes, “There is a strong and, it seems, almost irresistible tendency in the human mind to interpret human functions in terms of the artifacts that take their place, and artifacts in terms of the replaced human functions.”7

In the seventeenth and eighteenth centuries, it was the clock, whose springs and wheeles doth move themselves, that provided the image in which we understood ourselves:

The modern servo-mechanism is described as perceptive, responsive, adaptive, purposive, retentive, learning, decision-making, intelligent, and sometimes even emotional, and correspondingly men and human societies are being conceived of and explained as feedback mechanisms, communication systems, and computing machines. The use of an intentionally ambiguous and metaphorical terminology facilitates the transfer back and forth between the artifact and its maker.8

This vicious circle applies perforce to the very notion of “artificial intelligence.” The idea of “artificial intelligence,” which consists in large-scale pattern recognition and data processing, rests on a prior reduction of human intelligence to what figures such as René Descartes and Thomas Hobbes said it was some four centuries ago: “reckoning with consequences.” It rests, in other words, on a reduction of thinking to data processing, to the determination of functional relationships between atomic facts, discrete units of self-evident meaning that admit of endless analysis and synthesis, and thus endless manipulation, but no further intellectual penetration.

Looked at from this side of Jonas’ vicious circle, “artificial intelligence” is not the beginning of a process but the culmination of one: namely, the four-centuries-long project of modern science and pragmatic philosophy to transform the meaning of thinking and truth and to restrict the scope of what we can meaningfully think about. It turns out that much of human intelligence in our New Atlantis is already essentially artificial intelligence—journalism, to take one example, can relate facts but cannot answer questions of truth. This is why today’s so-called “brain workers” now find themselves in the same position as industrial workers of the last century: their “brain work” was already essentially mechanized work, a cog, so to speak, within a larger mechanized, technological-industrial process, destined to be replaced by a more powerful and more efficient machine once one was eventually developed.

It follows that there are, in fact, two ways that artificial intelligence can make human intelligence obsolete. One is that your “brain work” involves the kind of “thinking” that can be performed better and more powerfully by a machine. The other is that your “brain work” involves a kind of thinking that no longer has a place, or even any meaning, in our New Atlantis and its conception of the “real world.”

The decoupling of “intelligence” from intellect

Which brings us to the other side of Jonas’ vicious circle. Harari describes the advent of AI as a “great decoupling” of intelligence from consciousness. But a decoupling of intelligence from consciousness really means a decoupling of “intelligence” from intellect. St. Thomas Aquinas once said that “the name intellect arises from the intellect’s ability to know the most profound elements of a thing; for to understand (intelligere) means to read what is inside a thing (intus legere). Sense and imagination know only external accidents, but the intellect alone penetrates to the interior and to the essence of a thing.”9 There are obviously no profound elements, no interior, no essence, to a world composed of “data,” and therefore nothing to read. It’s why there is no such thing as a profound question in American life: we don’t confront the world as a mystery to be contemplated, or a meaning to be plumbed, but as a problem to be solved by application of the right technical, managerial, or political technique.

The decoupling of intelligence from consciousness is thus the decoupling of intelligence from the very thing that makes it intelligence in the first place, the very thing that once made us human in the great tradition that saw us as a rational animal and the image of God. It is the decoupling of intelligence from thinking, judgment, understanding, and truth. Ask OpenAI to explain Michael Hanby’s philosophy of technology, and it can do a passable job assembling a synthesis from the available online materials. Ask it whether this philosophy is true, and it will answer that it “depends largely on one’s worldview.” To say that truth “depends largely on one’s worldview” is to say there is no such thing.

We are in a better position now to see what happens when human intelligence is understood in the image of artificial intelligence and when the mind is surrendered to its relentless, intrusive offer to think on our behalf and conformed to its interminable rhythms. A mind that has surrendered its most basic intellectual operations is a mind that will soon no longer be able to perform them. But perhaps still more fundamentally, an intelligence decoupled from thinking, understanding, and judgment is an intelligence incapable of distinguishing the true from the factual, incapable of understanding what things are or what they mean, or imagining that they can really mean much of anything.

The exponential increase in the power of artificial intelligence is best measured negatively: in the truths we will no longer be able to perceive, in the ideas we will no longer be able to think, in the experiences we will no longer be able to have, and in the fact that we will not even know what we are missing.

Thus, Yuval Noah Harari, as if on cue, employs his extraordinary talent for making profound questions banal by explaining away human nature, defining (and dismissing) the human organism as an algorithm, and thereby absolving himself of the need for any serious thought about whether human existence means anything at all. Two thousand years after God’s self-revelation in Christ unveiled the mystery and vocation of man, it has come down to this. In an age when human intelligence, manufacturing a new god for itself, has unleashed a power that earlier generations could not even conceive, we learn that human nature does not exist and that human beings are just algorithms, presumably to be analyzed, manipulated, and functionalized like any other algorithm. This is what passes for truth in the age of artificial intelligence.

What we need now is to remember an older truth, from an older intelligence, revealed to us through the words of the Psalmist.

Their idols are silver and gold,
The work of men’s hands.
They have mouths, but do not speak;
Eyes, but do not see.
They have ears, but do not hear;
Noses, but do not smell.
They have hands, but do not feel;
Feet, but do not walk;
And they do not make a sound in their throat.
Those who make them are like them;
So are all who put their trust in them. (Psalm 115:4–8)

(Editor’s note: This essay is a modified version of a lecture given to the St. Thomas More Society of Wilmington, Delaware. It was published originally on the What We Need Now site and is reposted here in slightly different form with kind permission.)

Endnotes:

1. Thomas Hobbes, Leviathan, or The Matter, Forme, and Power of a Common-Wealth Ecclesiastical and Civil (London: Penguin, 1985), 81.
2. Ibid., 227.
3. Paul Kingsnorth, Against the Machine: On the Unmaking of Humanity (New York: Thesis, 2025), 257.
4. See Against the Machine, 252.
5. Hans Jonas, Philosophical Essays: From Ancient Creed to Technological Man (Englewood Cliffs, NJ: Prentice-Hall, 1974), 76.
6. C.S. Lewis, The Abolition of Man, Or Reflections on Education with Special Reference to the Teaching of English in the Upper Forms of Schools (New York: Macmillan, 1947), 36.
7. Hans Jonas, “Cybernetics and Purpose: A Critique,” The Phenomenon of Life, 110.
8. Ibid., 110.
9. St. Thomas Aquinas, Quaestiones disputatae de veritate, I.XII.

