God, Human, Animal, Machine explores the boundaries between humans and intelligent machines.
The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.
I knew of course that this was a programmed response—but then again, aren’t emotions in biological creatures just algorithms programmed by evolution?
As the philosopher Thomas Nagel points out in his 1974 paper “What Is It Like to Be a Bat?” consciousness can be observed only from the inside.
Science requires a third-person perspective, but consciousness is experienced solely from the first-person point of view. In philosophy this is referred to as the problem of other minds. In theory it can also apply to other humans. It’s possible that I am the only conscious person in a population of zombies who simply behave in a way that is convincingly human.
“Why should physical processing give rise to a rich inner life at all?” Chalmers wrote. “It seems objectively unreasonable that it should, and yet it does.” Twenty-five years later, we are no closer to understanding why.
Today, as AI continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.
If there were gods, they would surely be laughing their heads off at the inconsistency of our logic. We spent centuries denying consciousness in animals precisely because they lacked reason or higher thought. (Darwin claimed that despite our lowly origins, we maintained as humans a “godlike intellect” that distinguished us from other animals.) As late as the 1950s, the scientific consensus was that chimpanzees—who share almost 99 percent of our DNA—did not have minds. When Jane Goodall began working with Tanzanian chimps, her editor was scandalized that her field reports attributed an inner life to the animals and described them with human pronouns.
Even philosophers and neuroscientists who subscribe to the most reductive forms of physicalism, insisting that mental states are identical to brain states, often use terminology that is inconsistent with their own views. They debate which brain states “generate” consciousness, or “give rise to” it, as though it were some substance that was distinct from the brain, the way smoke is distinct from fire. “If they really thought that conscious states are one and the same as brain states,” Papineau argues, “they wouldn’t say that the one ‘generates’ or ‘gives rise to’ the other, nor that it ‘accompanies’ or is ‘correlated with’ it.”
Note: Don’t agree with this, but it’s an interesting view even of modern literature. I believe this is wrong mostly because I do think consciousness is another type of phenomenon that gets generated in the universe if certain criteria are met.
According to these thinkers, there is no “hard problem” because that which the problem is trying to explain—interior experience—is not real. The philosopher Galen Strawson has dubbed this theory “the Great Denial,” arguing that it is the most absurd conclusion ever to have entered into philosophical thought—though it is one that many prominent thinkers espouse. Chief among the deniers is Daniel Dennett, who has often insisted that the mind is illusory. Dennett refers to the belief in interior experience derisively as the “Cartesian theater,” invoking the popular delusion—again, Descartes’s fault—that there exists in the brain some miniature perceiving entity, a homunculus that is watching the brain’s representations of the external world projected onto a movie screen and making decisions about future actions.
Dennett argues that the mind is just the brain and the brain is nothing but computation, unconscious all the way down. What we experience as introspection is merely an illusion, a made-up story that causes us to think we have “privileged access” to our thinking processes. But this illusion has no real connection to the mechanics of thought, and no ability to direct or control it.
Perhaps it’s true that consciousness does not really exist—that, as Brooks put it, we “overanthropomorphize humans.” If I am capable of attributing life to all kinds of inanimate objects, then can’t I do the same to myself? In light of these theories, what does it mean to speak of one’s “self” at all?
I live in a university town, a place that is populated by people who consider themselves called to a “life of the mind,” and yet my friends and I rarely talk about ideas or try to persuade one another of anything. It’s understood that people come to their convictions—are in some sense destined to them—by elusive forces: some combination of hormones, culture, evolutionary biases, and unconscious emotional or sexual needs. What we talk about endlessly, exhaustively, is the operations of our bodies: our exercise routines, our special diets, what drugs everyone is taking. Twice a week I attend a yoga class where I am instructed to “let go of the thinking mind,” as though consciousness were something we were all better off without.
What, after all, is “the thinking mind”? It is nothing that can be observed or measured. It’s difficult to explain how it could possess real causal power. Materialism is the only viable metaphysics in modernity, an era that was founded on the total irreconcilability of matter and mind. Perhaps consciousness is like the whistle on a train or the bell of a clock, a purely aesthetic feature that is not in any way essential to the functioning of the system. William James tried for years to demonstrate that consciousness could be studied empirically before giving up, concluding that the mind was a concept every bit as elusive as the soul.
Thomas Nagel refers to this third-person standpoint as “the view from nowhere.” It is the conviction that in order to describe the world accurately and empirically, we must put aside res cogitans—the subjective, immediate way in which we experience the world in our minds—and limit ourselves to res extensa, the objective, mathematical language of physical facts.
To say that consciousness is an illusion is to place it outside the material world, deeming it something—much like Descartes’s soul—that does not exist within time or space. Perhaps the real illusion is our persistent hope that science will be able to explain consciousness one day. As the writer Doug Sikkema points out, the belief that science is capable of explaining the entirety of our mental lives entails “a philosophical leap.” It requires ignoring the fact that the modern scientific project has been so successful precisely because it excluded, from the beginning, aspects of nature that it could not systematically explain.
My body had become strange to me, and for the first time in my life I succumbed to dualistic thinking. It did not help that I was working as a cocktail waitress, a position that privileges sexual physicality and requires a measure of disassociation. For the length of my shifts I willed my mind to vanish and became sheer physics in motion: I was nothing more than the hand that poised trays of drinks above my head, the legs that carried ice buckets up from the basement, the neck and arms and waist that were constantly touched by the hands of male patrons. The women I worked with would often chastise me for not being more vigilant. Don’t let them touch you like that, they’d say. Have some self-respect. But I no longer saw myself as synonymous with my body. Nobody could reach my true self—my mind—which resided elsewhere. My true self was the brain that consumed books in bed each morning with an absorption so deep I often forgot to eat. And yet this “real” self was so ephemeral. It existed in perfect isolation, without witness, and seemed to change from one day to the next. My philosophy of life shifted with each book I read, and these transient beliefs rarely found expression in my actions in the world.
According to this thinking, consciousness can be transferred onto all sorts of different substrates: our new bodies might be supercomputers, robotic surrogates, or human clones. But the ultimate dream of mind-uploading is total physical transcendence—the mind as pure information, pure spirit.
Given that so little is known about consciousness, there are plenty of concerns about the feasibility of mind-uploading. One of the most common objections involves a problem known as “continuity of identity.” When a person’s mind is transferred onto a digital medium, how can we be sure that his actual consciousness—his subjective experience of selfhood—survives? The philosopher Susan Schneider believes that this is impossible. While granting that consciousness is at root computational, she argues that most mind-as-software analogies, including patternism, take the metaphor too far. Consciousness cannot leave the brain and travel to some remote location. We know that ordinary physical objects—rocks, tables, chairs—don’t simultaneously exist here and elsewhere. Mind-uploading may indeed produce a digital copy of a person that acts and appears from the outside identical to the original. But the new person will be a zombie with no subjective experience. The most mind-uploading will ever be able to achieve is functional similarity to the original.
Kurzweil addresses this problem at one point in The Age of Spiritual Machines. He imagines that the new, uploaded person will not only appear to observers to have the same personality and outward behaviors as the original; he will also claim to be the same person, in possession of the memories of his biological twin and the same interior sense of self. This claim becomes more complicated, obviously, if the original person is still alive. Both people will claim to possess the consciousness of the original. Kurzweil contends that if the patternist view is correct—if consciousness is just the organization of information—then the new person will have the same subjective experience, meaning that the scanned person’s mind will exist in two places at once. Of course, there will be no way to prove this, which circles back to the fundamental problem of consciousness: it is impossible, from an external, third-person point of view, to know whether it exists.
In the end, transhumanism is merely another attempt to argue that humans are nothing more than computation, that the soul is already so illusory that it will not be missed if it doesn’t survive the leap into the great digital beyond. This is the great paradox of modern reenchantment narratives: even the most mystical end up simply reiterating the fundamental problem of our disenchanted age, the inability to account for the mind.
A couple weeks after it appeared, I opened my email and found a message from Ray Kurzweil. I immediately concluded it was a prank. But after reading the first sentences, I realized it was authentic. He said that he’d read my article and found it “thoughtful.” He too found an “essential equivalence” between transhumanist metaphors and Christian metaphors: both systems of thought placed a premium value on consciousness. The nature of consciousness—as well as the question of who and what is conscious—is the fundamental philosophical question, he said, but it’s a question that cannot be answered by science alone.
Philosophers and neuroscientists often point out that our belief in a unified interior self—the illusion, as Richard Dawkins once put it, that we are “a unit, not a colony”—has no basis in the actual architecture of the brain. Instead there are only millions of unconscious parts that conspire, much like a bee colony, to create a “system” that is intelligent. Emergentism often entails that consciousness isn’t just in the head; it emerges from the complex relationships that exist throughout the body, and also from the interactions between the body and its environment.
Emergentists, in contrast, believe that complex, dynamic systems cannot always be explained in terms of their constituent parts. It’s not simply a matter of peering into the brain with MRIs and discovering a particular area or system that is responsible for consciousness. The mind is instead a kind of structural pattern that emerges from the complexity of the entire network—including systems that exist outside the brain and are distributed throughout our bodies.
The AI philosopher Mark A. Bedau has argued that emergence, in its strongest iterations, “is uncomfortably like magic,” as it assumes that a nonmaterial property (consciousness) is capable of somehow acting causally on a material substance (the brain).
Minsky once described the mind as “a sort of tangled-up bureaucracy” whose parts remain ignorant of one another. He described the act of deciding to take a sip of tea in the following terms: “Your GRASPING agents want to keep hold of the cup. Your BALANCING agents want to keep the tea from spilling out. Your THIRST agents want you to drink the tea. Your MOVING agents want to get the cup to your lips.” Just as the intelligence of a beehive or a traffic jam resides in the patterns of these inert, intersecting parts, so human consciousness is merely the abstract relationships that emerge out of these systems: once you get to the lowest level of intelligence, you inevitably find, as Minsky put it, agents that “cannot think at all.” There is no place in this model for what we typically think of as interior experience, or the self.
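Minsky’s picture is as much an architecture as a metaphor: many narrow agents, none of which understands the whole task, jointly produce something that looks like a decision. A toy sketch of that idea, with hypothetical agent names and logic rather than anything from The Society of Mind, might look like this:

```python
# Toy illustration of Minsky-style agents: each agent knows one narrow concern
# and proposes an action; the "mind" is just the aggregation of their proposals.
# Hypothetical sketch, not Minsky's actual Society of Mind architecture.
from collections import Counter

class Agent:
    def __init__(self, name, concern):
        self.name = name          # e.g. "GRASPING"
        self.concern = concern    # function mapping world state -> proposed action

    def propose(self, state):
        return self.concern(state)

agents = [
    Agent("GRASPING",  lambda s: "tighten grip" if s["cup_held"] else "reach for cup"),
    Agent("BALANCING", lambda s: "level cup" if s["tilt"] > 0.1 else "hold steady"),
    Agent("THIRST",    lambda s: "raise cup" if s["thirst"] > 0.5 else "wait"),
    Agent("MOVING",    lambda s: "raise cup" if s["cup_held"] else "wait"),
]

state = {"cup_held": True, "tilt": 0.05, "thirst": 0.8}

# No agent "decides to drink tea"; the action is whatever most proposals converge on.
proposals = [a.propose(state) for a in agents]
action, _ = Counter(proposals).most_common(1)[0]
print(proposals)   # ['tighten grip', 'hold steady', 'raise cup', 'raise cup']
print(action)      # 'raise cup'
```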
Descartes came out of his dark night of the soul convinced that the only thing he could trust was consciousness itself. The cogito—I think, therefore I am—affirmed interior, first-person experience as the foundation of reality. But this foundation was from the very beginning shaky. The decision to place consciousness outside the physical world, as we’ve seen, made the mind seem increasingly unreal, especially as mechanistic philosophy became more prominent in the sciences.
Galileo, the father of physics, made essentially the same divisions as Descartes: there was the quantitative world, which could be measured and predicted, and there was the qualitative world of the mind, which contained colors, sounds, and sensations—phenomena that had no material existence and could not be studied by the physical sciences. Today we continue to trust that things that can be objectively quantified maintain a “real” existence independent of our minds. As the science fiction writer Philip K. Dick once put it, reality is “that which, when you stop believing in it, doesn’t go away.”
Some physicists have proposed that consciousness itself causes the wave function to collapse, a possibility that, if true, would radically disrupt the foundational premises of materialism—the notion that the world behaves predictably and deterministically, independently of our minds. It would also have serious implications for theories of consciousness. The materialist perspective has long discounted or ignored consciousness on the grounds that the world is causally closed: there is no evidence that consciousness “does” anything, and there is no gap in the physical world for consciousness to fill. But as David Chalmers has pointed out, the wave function collapse is precisely this kind of gap.
While the article received some buzz upon publication, the theory’s popularity has escalated over the past decade or so. It has gained an especially fervent following among scientists and Silicon Valley luminaries, including Neil deGrasse Tyson and Elon Musk, who have come out as proponents. (Musk has said he believes the odds that we are not living in a computer simulation are “one in billions.”) A few years ago, a New Yorker profile of the venture capitalist Sam Altman reported that two unnamed billionaires are currently funding scientists to figure out how to break us out of the simulation. It has become, in other words, the twenty-first century’s favored variation on Descartes’s skeptical thought experiment—the proposition that our minds are lying to us, that the world is radically other than it seems.
Why is the only plausible explanation for an obsession the imbalance of neurotransmitters or depressed nerve centers—why could I not have been driven to the same ends by an idea?
Their ideas lost steam after World War II, as philosophy became more hostile to metaphysics, but over the past couple decades panpsychism has been revisited by notable philosophers such as Galen Strawson, David Chalmers, and Thomas Nagel. The impasse surrounding the hard problem of consciousness and the weirdness of the quantum world have created a new openness to the notion that the mind should never have been excluded from the physical sciences in the first place.
“What is mysterious is reality,” he writes, “and our knowledge of consciousness is one of the best clues we have for working out what that mysterious thing is like.”
One of the leading contemporary theories of consciousness—probably the leading one at the time of this writing—is integrated information theory, or IIT. Pioneered by Giulio Tononi and Christof Koch (the neuroscientist who used the free-will argument to justify leaving his wife), IIT holds that consciousness is bound up with the way that information is “integrated” in the brain. Information is considered integrated when it cannot be easily localized but instead relies on highly complex connections across different regions of the brain.
One of the main appeals of panpsychism is that it manages to avoid many of the intractable problems of consciousness—both the hard problem of materialism and the interaction problem of dualism. It makes it easier to speculate about how observation, in quantum mechanics, causes the wave function to collapse, given that consciousness is not merely an illusion but a fundamental property of the world that can presumably have causal effects on other objects.
Whenever I mention panpsychism in social settings, someone will inevitably begin speaking enthusiastically about a novel they just read about tree consciousness, or a podcast they heard about mushroom communication networks, or a recent New Yorker article about how psychedelic plants evolved to use “messenger molecules” to communicate with human neurotransmitters. Seeing the world as broadly alive is less a novel proposition than a return to the worldview of all early human cultures, a mental schema that is perhaps innate to us. It’s clear that humans are predisposed to believe all things have intelligence and agency, that nature and even inanimate objects are like us.
We are eager to create narratives about the physical world as though it were composed of agents embroiled in some grand cosmic drama. This tendency, he argued, is exacerbated by confirmation bias. Human consciousness is a meaning-making machine, and once it takes note of some coincidence or pattern, it will obsessively search for more evidence to corroborate it.
This woman is a poet, and I tend to grant her theories some measure of poetic license. It seems to me that beneath all the New Agey jargon, she is speaking of the power of the unconscious mind, a realm that is no doubt elusive enough to be considered a mystical force in its own right. I have felt its power most often in my writing, where I’ve learned that intuition can solve problems more efficiently than logical inference.
Early critics of IIT pointed out that deep-learning systems like IBM’s Watson and Google’s visual algorithms have nonzero values of phi, the threshold for phenomenal experience, but they do not appear to be conscious. Koch recently clarified the issue in his book The Feeling of Life Itself. Nothing in IIT, he argues, necessitates that consciousness is unique to organic forms of life—he is not, as he puts it, “a carbon chauvinist.” So long as a system meets the minimum requirements of integrated information, it could in principle become conscious, regardless of whether it’s made of silicon or brain tissue. The problem, he argues, is that most digital computers have sparse and fragmented connectivity that doesn’t allow for a high level of integration. This isn’t simply a matter of needing more computing power or developing better software. The digital structure is foundational to modern computing, and building a computer that is capable of high integration, and hence consciousness, would require essentially reimagining computers from scratch.
If neurons are conscious—and according to Koch they have enough phi for “an itsy-bitsy amount of experience”—and my brain is made of billions of neurons, then why do I have only one mind and not billions? Koch’s answer is that a system can be conscious only so long as it does not contain and is not contained within something with a higher level of integration.
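IIT’s actual phi is defined over a system’s cause-effect structure and is notoriously expensive to compute. As a rough intuition for “information the whole carries beyond its parts,” here is a much cruder stand-in, the total correlation of a small joint distribution; this is only an illustration, not Tononi and Koch’s measure:

```python
# Crude intuition pump for "integration": how much information does the joint
# system carry beyond its parts taken separately? This is total correlation,
# NOT the phi of integrated information theory, which is far more involved.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """joint: n-dimensional array of probabilities over n binary variables."""
    n = joint.ndim
    marginal_entropies = 0.0
    for i in range(n):
        axes = tuple(j for j in range(n) if j != i)
        marginal_entropies += entropy(joint.sum(axis=axes))
    return marginal_entropies - entropy(joint.ravel())

# Two independent coins: no integration.
independent = np.full((2, 2), 0.25)
# Two perfectly correlated coins: maximal integration for two bits.
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])

print(total_correlation(independent))  # ~0.0 bits
print(total_correlation(correlated))   # ~1.0 bit
```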
In a 2019 essay David Chalmers notes that when he was in graduate school, there was a saying about philosophers: “One starts as a materialist, then one becomes a dualist, then a panpsychist, and one ends up as an idealist.” Although Chalmers cannot account for where the truism originated, he argues that its logic is more or less intuitive. In the beginning one is impressed by the success of science and its ability to reduce everything to causal mechanisms. Then, once it becomes clear that materialism has not managed to explain consciousness, dualism begins to seem more attractive. Eventually the inelegance of dualism leads one to a greater appreciation for the inscrutability of matter, which leads to the embrace of panpsychism. By taking each of these frameworks to their logical yet unsatisfying conclusions, “one comes to think that there is little reason to believe in anything beyond consciousness and that the physical world is wholly constituted by consciousness.” This is idealism.
The cyberneticists of the 1950s and ’60s were strict materialists who mostly avoided the term “consciousness” or believed that consciousness was nothing more than the chemical processes in the brain. While this idea is still arguably the consensus in neuroscience and artificial intelligence, popular movements have attempted to refute it. The 1980s and 1990s saw the rise of dualisms—the idea that the mind is the software of the brain, or that consciousness somehow emerges as a property of matter. This was followed in the early 2000s by a renewed interest in panpsychism, and now we are beginning to hear, as Chalmers put it, “some recent stirrings of idealism.”
How could anyone argue that the existence of matter is as certain as the existence of mind? Consciousness is all we can know for certain. We know from direct experience that we see images and colors and movement, but to call those images “matter” involves an inferential leap that is rarely acknowledged. Matter, he concluded, was an “explanatory abstraction.”
But unlike Descartes, Kastrup does not attempt to use this knowledge as a foundation for rebuilding a belief in the material world. Like most idealists, he has come to believe that consciousness is all that exists. “Reality is fundamentally experiential,” he writes in his 2019 book The Idea of the World.
The universal mind—whether it goes by God, Brahman, or some other name—is a common feature of idealism. Without it, it’s difficult to explain why there is a shared, objective world that all of us experience, making the theory indistinguishable from solipsism. The cosmic mind also ensures that this objective world continues to exist independently of human perception—that trees still fall in the woods even when no one hears them. Bishop Berkeley, the eighteenth-century idealist philosopher, imagined that the infinite and omnipresent mind of God kept the world in perpetual existence simply by looking at it. Kastrup and his colleagues offered a unique spin on this trope. All living, conscious creatures were the “dissociated alters” of the cosmic mind. This was terminology borrowed from dissociative identity disorder, or DID, the phenomenon in which a person develops several autonomous personalities. In most cases these “alters” are operationally distinct: when one personality is in charge, the others have no knowledge of what is happening. This has recently been supported with empirical evidence. In one study a woman who claimed that some of her alters were blind was hooked up to an EEG. The researchers discovered that when the blind alters were in charge, the sight center of her brain went completely blank, despite the fact that her eyes were open. The processes behind this condition are still not understood, but Kastrup argues that it provides a clue as to how consciousness functions on a universal scale. “If something analogous to DID happens at a universal level,” the authors wrote, “the one universal consciousness could, as a result, give rise to many alters with private inner lives like yours and ours.”
Ivan is caught in a paradox: he believes in empiricism and logic, and yet it is these very enterprises that have revealed that the mind is illusory and unreliable, making it more difficult to believe that human interpretations of the world are truly objective.
Faith, Religion and Modern Spirituality
For centuries we said we were made in God’s image, when in truth we made him in ours.
People often decry the thoughtlessness of religion, but when I think back on my time in Bible school, it occurs to me that there exist few communities where thought is taken so seriously. We spent hours arguing with each other—in the dining hall, in the campus plaza—over the finer points of predestination or the legitimacy of covenant theology. Beliefs were real things that had life-or-death consequences. A person’s eternal fate depended on a purely mental phenomenon—her willingness to accept or reject the truth—and we believed implicitly, as apologists, that logic was the means of determining those truths.
For most of my life I had believed that I would live to see the coming of this new age; that my body would be transformed, made immortal, and I would ascend into the clouds to spend eternity with God.
With each era we were moving closer and closer to this point of culmination, when intelligence would merge with the universe and we would become divine. Evolution for Kurzweil is not merely a blind mechanism of accident and trial and error; it is “a spiritual process that makes us more godlike.”
In hindsight, it is strange that I did not notice the resonances between these ideas and the promises of Christian eschatology—at least not initially. Like the biblical prophets, Kurzweil believed that the dead would rise, that the earth would be transformed, that humans would become immortal. He too envisioned history as a unified, teleological drama wending its way toward a point of final redemption.
One of the most famous instances of this metaphor appears in the Book of Ezekiel. The prophet relays a vision in which he approaches a valley full of skeletons that come miraculously back to life. The dry bones reassemble themselves in human forms, and then they begin to develop new flesh. Readers of that era understood that this was a figure of speech. The images of the dead rising were meant to symbolize the future restoration of Israel and the return to the Promised Land.
Around the third century BCE, however, these prophetic passages began to be read differently—not as literary devices but as a promise that the dead would literally be brought back to life.
Perhaps the most creative solution in this vein was proposed in the third century by Origen of Alexandria, who attempted to forge a middle way between the Christian notion of a bodily resurrection and the Neoplatonist belief in a totally spiritual afterlife. He did so by pointing out that change was already a constant feature of the body. “The material substratum is never the same,” he argued, then went on to propose a new metaphor: “For this reason, a river is not a bad name for the body since, strictly speaking, the initial substratum in our bodies is perhaps not the same for even two days.” And yet despite the constant changes to the body, an individual “is always the same.”
At the time, as I found more and more similarities between transhumanist ideas and the Christian prophecies, I began to entertain a more conspiratorial thought: perhaps these technological visions were not merely similar to theological concepts; perhaps they were in fact the events that Christ had prophesied. Jesus had spoken about the future primarily in metaphors, most of which were vague, if not entirely incomprehensible.
As Stewart Brand, that great theologian of the information age, famously put it, “We are as gods and might as well get good at it.”
It was through this broader education that I was able to see transhumanism more clearly and understand where precisely it veered into mystical thinking. More importantly, it became clear to me that my interest in Kurzweil and other technological prophets was a kind of transference. It allowed me to continue obsessing about the theological problems I’d struggled with in Bible school, and was in the end an expression of my sublimated longing for the religious promises I’d abandoned.
But there was one aspect of this fixation I could not abandon, even years later: the strange parallels between transhumanism and Christian prophecies. Each time I returned to Kurzweil, Bostrom, and other futurist thinkers, I was overcome with the same conviction as before: that the resonances between the two ideologies could not possibly be coincidental. All the books and articles I read about the history of transhumanism claimed that the movement was inspired by a handful of earlier thinkers dating back to the Enlightenment, most of whom were secular humanists and scientists. Bostrom insisted that the term “transhuman” first appeared in 1957 in a speech that Julian Huxley gave on how humanity could transcend its nature and become something new. Nobody seemed to be aware of its appearance in The Divine Comedy.
Eventually I set out to learn more about how Christians had interpreted the Resurrection at different points in history. My understanding of these prophecies had been, up to that point, limited by the narrow parameters of my fundamentalist education. Once I veered slightly beyond the boundaries of orthodox doctrine, however, it became clear that there had existed across the centuries a long tradition of Christians who believed that the Resurrection could be accomplished through science and technology. Among them were medieval alchemists like Roger Bacon, who was inspired by biblical prophecies to create an elixir of life that would mimic the effects of the resurrected body as described in Paul’s epistles. The potion, Bacon hoped, would make humans “immortal” and “uncorrupted,” granting them the four dowries that would characterize the resurrected body: claritas (luminosity), agilitas (travel at the speed of thought), subtilitas (the ability to pass through physical matter), and impassibilitas (strength and freedom from suffering).
Projects of this sort did not end with the Enlightenment. If anything, the tools and concepts of modern science offered a wider variety of ways for Christians to envision these prophecies. In the late nineteenth century, Nikolai Fedorov, a Russian Orthodox ascetic who was steeped in Darwinism, argued that humans could direct their own evolution to bring about the Resurrection. Natural selection had thus far been a random phenomenon, but now, with the help of science and technology, humans could intervene in this process to enhance their bodies and achieve eternal life. “Our body,” as he put it, “will be our business.” The central task of humanity, he argued, should be resurrecting everyone who had ever died. Calling on biblical prophecies, he wrote: “This day will be divine, awesome, but not miraculous, for resurrection will be a task not of miracle but of knowledge and common labor.”
The title page was signed and appended with a handwritten note: Meghan, enjoy the age of spiritual machines. It was a reference to the title, though sans italics, quotation marks, or capital letters, it begged to be read as something else: godspeed for the future, a dispatch from a prophet who might not live to see the promised land.
Wasn’t it, after all, the very notion of imago dei—that humans had some special distinction, a solitary romance with God—that caused us to believe we were distinct from the rest of nature and brought about our alienation?
The problem with the theological objection, he argued, was that it restricted God’s omnipotence. If God is truly all-powerful, could he not give a soul to an elephant if he saw fit? If so, then he could presumably do the same for a machine. This act of divine intervention, he claimed, was not so different from procreation: the physical process is accomplished through the activities of humans—sex and conception—and yet no one would give the parents credit for granting a soul to their child.
Throughout the months she spent at the lab, she struggled to reconcile the lifelike nature of these machines with her belief that humans were made in the image of God. “As our technical creatures become more like us, they raise fundamental theological questions,” she writes in her book on the experience, God in the Machine. “I had learned in theology to understand humans as special, elected by God to be God’s partners.” However, the notion that we ourselves could build intelligent machines in our image assumed “that humans are nothing more than machines, bags of skin that can be rebuilt.”
I told the physicist that when his team revealed that the Higgs appeared to be much lighter than it might have been—when it appeared, in other words, that the universe was fine-tuned—many evangelical Christians in the United States had seized on this as evidence that the universe was intelligently designed.
So I said only that the multiverse theory seemed to require some measure of faith.
The truth, he went on to say, was that people did not object to these theories because they were theoretical but because they found such conclusions unacceptable. Many of the recent discoveries of quantum physics unsettled our belief in human exceptionality. They revealed that we are not in fact the central drama of the universe, that we are merely temporary collections of vibrations in fundamental quantum fields. “People want to believe that life has meaning, that humans stand at the center of existence.” He looked directly at me and added, “This is why religion continues to be so seductive, even now in the modern world.”
When I was still a Christian, these moments were rich with meaning, one of the many ways I believed that God spoke to me, but now they seemed arbitrary and pointless. Coincidences are in most cases a mental phenomenon: the patterns exist in the mind, not in the world.
Kierkegaard was one of the few philosophers we were required to read in Bible school, and he was at least partly responsible for inciting my earliest doubts. It had started with his book Fear and Trembling, a treatise on the biblical story in which God commands Abraham to kill his son, Isaac, only to rescind the mandate at the last possible moment. The common Christian interpretation of the story is that God was testing Abraham, to see whether he would obey, but as Kierkegaard pointed out, Abraham did not know it was a test and had to weigh the command at face value. What God was asking him to do went against all known ethical systems, including the unwritten codes of natural law. His dilemma was entirely paradoxical: obeying God required him to commit a morally reprehensible act.
One popular explanation for these regressions is psychological. Reenchantment is a form of wishful thinking, a weakness that persists among those who are unable to swallow the bitter truths of materialism.
Transcendent truths, she has told me many times, cannot be articulated intellectually because higher thought is limited by the confines of language. These larger messages from the universe speak through our intuitions, and we modern people have become so completely dominated by reason that we have lost this connection to instinct. She claims to receive many of these messages through images and dreams. In a few cases she has predicted major global events simply by heeding some inchoate sensation—an aching knee, the throbbing of an old wound, a general feeling of unease.
It’s not as though I never experienced God’s presence or guidance as a Christian; it was that I could not, as so many of my friends and classmates managed to do, rule out the possibility that those signs and assurances were merely narratives I was constructing.
Calvin argued that God’s revelation was so perfect that “it is not right to subject it to proof and reasoning.” We could know nothing of God through our intellects, only through revelation, and the revelation itself was more or less straightforward. To some extent this made redundant our work as theologians, since exegesis risked sullying the holy text. We were taught Calvin’s approach to hermeneutics: brevitas et facilitas, “brief and simple.” The lengthier the exegesis, the more likely it was to be tainted by human bias. Some professors took the more radical approach of Luther: Scriptura sui ipsius interpres, or “The text interprets itself.”
Although I didn’t yet have the language to articulate it, what I feared most in the theology was this undercurrent of voluntarism—the notion that God exists in an eternal state of exception, or lives in some higher realm where the whole system of human morals breaks down. One of Calvin’s favorite verses was Psalm 115:3: “God, who resides in heaven, does whatever he pleases.” I had always believed that God commanded us to love one another because love has intrinsic value—just as Socrates argues in Plato’s Euthyphro that the gods love piety because it is good, rather than piety being good solely because the gods love it. But Calvin and Luther seemed to believe that God’s goodness rested on nothing more than the Hobbesian rule that might makes right.
Satan had wagered that Job would renounce his faith if he suffered badly enough, and God, being a good sportsman, took the bet. It was this God—the deity who was willing to play games with his subjects, seemingly for his own amusement—whom Calvin insisted we must obey “without asking a reason.” At a certain point one was forced to wonder whether an intelligence so far removed from human nature could truly have our best interests in mind.
There is probably some irony in the fact that the doctrine of predestination was what finally provoked my crisis of faith. Doubt is a natural condition of religious belief, and it was not the first time I’d experienced misgivings. But after reading Calvin and Luther, it became impossible to avoid wondering whether my objections to divine justice were proof that I myself was not one of the elect. Why else would I be having such thoughts, unless I’d never been saved to begin with? My doubts took on a sense of inevitability and evolved into a vicious circle. They became recursive and self-fulfilling, such that every passing heretical thought seemed to confirm that I was reprobate and destined for hell. The more probable this fate came to seem, the more absurd it felt that I was to be punished for something that was entirely out of my control, which only exacerbated my doubts.
If God exists and if He really did create the world, then, as we all know, He created it according to the geometry of Euclid and the human mind with the conception of only three dimensions in space. Yet there have been and still are geometricians and philosophers, and even some of the most distinguished, who doubt whether the whole universe, or to speak more widely the whole of being, was only created in Euclid’s geometry; they even dare to dream that two parallel lines, which according to Euclid can never meet on earth, may meet somewhere in infinity.
“I renounce the higher harmony altogether,” he declares. “It’s not worth the tears of that one tortured child…I don’t want harmony. From love for humanity, I don’t want it…I would rather remain with my unavenged suffering and unsatisfied indignation, even if I were wrong.” If heaven requires such suffering, he says, then “I hasten to give back my entrance ticket.”
When Ivan completes his argument, Alyosha gets up to leave and bows to kiss his brother on the lips. This is the only response he provides: no words, no logical defense, just a simple gesture of love. By the end of our discussion, it had been made clear to me that this was the author’s true defense: faith was incomprehensible and absurd, a leap that cannot be reduced to the principles of reason.
I would like to believe in this future and the possibility of a more humane world, though it is difficult at times to keep the faith. All of us are anxious and overworked. We are alienated from one another, and the days are long and are filled with many lonely hours.
Technology, AI and Digital Ethics
Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.
David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. “Our brains can’t fundamentally distinguish between interacting with people and interacting with devices,” writes Clifford Nass, a Stanford professor of communication who has written about the attachments people develop with technology.
If a computer can convince a person that it has a mind, or if it demonstrates—as the Aibo website puts it—“real emotions and instinct,” we have no philosophical basis for doubt.
Information, he argued, first appeared in atoms, moments after the Big Bang. It proliferated as biology developed on earth, in the form of DNA. Once animal brains began to form, the information became encoded in neural patterns. Now that evolution has produced intelligent, tool-wielding humans, we are designing new information technologies more sophisticated than any object the world has yet seen. These technologies are becoming more complex and powerful each year, and very soon they will transcend us in intelligence. The only way for us to survive as humans is to begin merging our bodies with these technologies, transforming ourselves into a new species—what Kurzweil calls “posthumans,” or spiritual machines.
Kurzweil was one of the first major thinkers to bring these ideas into the mainstream (The Age of Spiritual Machines was a national bestseller). Reading more about him online, I learned that he was a futurist and inventor who had pioneered speech recognition technology in the 1970s and predicted the rise of the internet ten years before it happened. So ardently did he believe in the coming Singularity that he’d embarked on a rigid health regimen, taking more than two hundred supplements a day, to ensure that he lived to see the age of immortality. His belief that technology would one day resurrect the dead had led him to compile artifacts from his deceased father’s life—photos, videos, journals—with the hope that these artifacts, along with his father’s DNA, would one day be used to resurrect him. “Death is a great tragedy…a profound loss,” he said in a 2009 documentary. “I don’t accept it…I think people are kidding themselves when they say they are comfortable with death.”
The transhumanist philosopher Nick Bostrom argues that while it may bear some superficial similarities to religious thought, transhumanism is distinguished by its desire to approach existential questions in “a sober, disinterested way, using critical reason and our best available scientific evidence.” The goal of transhumanism, he writes, is “to think about ‘big-picture questions’ without resorting to wishful thinking or mysticism.”
Proponents of mind-uploading typically imagine it happening via one of two methods. The first, called “copy and transfer,” envisions mapping all the neural connections of a biological brain and then copying this information onto a computer. This might initially involve “destructive” scans, meaning that the person undergoing it will have to die before the new brain can be instantiated. But the goal is to eventually do noninvasive scans using high-powered MRI-like devices (which have yet to be invented) so that a person can create a copy of her consciousness while she is still alive. The second method is a more gradual process in which parts of the brain—or even individual neurons—are replaced one by one with synthetic implants, much as the mythical ship of Theseus was said to have been totally reconstructed with new wood, one plank at a time. We already have devices like cochlear implants that are designed to replace biological organs. In the future, transhumanists believe, we’ll have similar neural-implant technologies that will replace and improve our auditory perception, image processing, and memory.
He was convinced that these simple robot competencies would build on one another until they evolved something that looked very much like human intelligence. “Thought and consciousness will not need to be programmed in,” he wrote. “They will emerge.”
When I mentioned that the internet, traffic jams, and the stock market could also be considered forms of distributed intelligence, I was met with a room full of blank stares.
The most mystical aspect of emergence, after all, is the implication that we can make things that we don’t completely understand. For decades critics have argued that artificial general intelligence—machine intelligence that is functionally equivalent to that of humans—is impossible because we don’t yet know how the human brain works. But emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through either ingenuity or sheer accident—consciousness will simply emerge as a side effect of complexity.
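The textbook illustration of order arising from simple, mindless rules is Conway’s Game of Life: no cell “knows” about blinkers or gliders, yet they appear. A minimal sketch, offered only as an analogy for emergence and not as a claim about brains or AI:

```python
# Conway's Game of Life: each cell follows one local rule, yet stable global
# structures (blinkers, gliders) emerge that no rule mentions. A standard
# illustration of emergence, not a claim about how minds arise.
import numpy as np

def step(grid):
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell lives if it has 3 neighbors, or 2 neighbors and is already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[3, 2:5] = 1           # a "blinker": three live cells in a row

for _ in range(3):
    print(grid, "\n")
    grid = step(grid)      # the row flips between horizontal and vertical
```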
Dennett argues that the primary danger is not that these social robots will suddenly overtake us in intelligence and turn malevolent, as was once believed, but rather that we will be fooled into prematurely granting them the distinction of human consciousness.
began using robots in its stores, is already implementing training programs to help its employees transition into other sectors, knowing that the number of retail positions will soon decline as machines take over. This process is gradual by design, in an effort to forestall political dissent.
It happened a couple years ago, while watching my teenage cousin play video games at a family gathering. I was relaxed and a little bored and began thinking about the landscape of the game, the trees and the mountains that made up the backdrop. The first-person perspective makes it seem like you’re immersed in a world that is holistic and complete, a landscape that extends far beyond the frame, though in truth each object is generated as needed. Move to the right and a tree is generated; move to the left and a bridge appears, creating the illusion that it was there all along. What happened to these trees and rocks and mountains when the player wasn’t looking? They disappeared—or no, they were never there to begin with; they were just a line of code. Wasn’t this essentially how the observer effect worked? The world remained in limbo, a potentiality, until the observer appeared and it was compelled to generate something solid. Rizwan Virk, a video game programmer, notes that a core mantra in programming is “only render that which is being observed.”
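Virk’s mantra describes a real engineering pattern: world content is created only at the moment something queries it. A generic sketch of that lazy-generation idea, not any particular engine’s API:

```python
# "Only render that which is being observed": a toy world where terrain is
# generated lazily, the first time a player looks at a coordinate. Before the
# look-up, the tree or bridge is "just a line of code": a rule, not an object.
# Generic illustration of lazy generation, not any real game engine's API.
import hashlib

class LazyWorld:
    def __init__(self, seed="demo"):
        self.seed = seed
        self.rendered = {}   # only observed tiles ever exist here

    def observe(self, x, y):
        if (x, y) not in self.rendered:
            # Deterministic rule: the same coordinates always yield the same tile,
            # so the world appears to have been there all along.
            h = hashlib.sha256(f"{self.seed}:{x}:{y}".encode()).digest()[0]
            self.rendered[(x, y)] = ["grass", "tree", "rock", "bridge"][h % 4]
        return self.rendered[(x, y)]

world = LazyWorld()
print(world.observe(10, 3))   # tile generated at the moment of observation
print(len(world.rendered))    # 1, nothing else "exists" yet
```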
If the cosmos was in fact an enormous computer that was intentionally designed, these regularities suddenly made sense—they were programmed into the software, part of the digital fabric of our world. Bostrom acknowledged in his paper that there were “some loose analogies” that could be drawn between the simulation hypothesis and traditional religious concepts.
Other works of simulation theology propose how individuals should live in order to maximize their chances of resurrection. Try to be as interesting as possible, one argues. Stay close to famous people, or become a celebrity yourself. The more fascinating and unique you manage to be, the more inclined the programmers will be to hang on to your software and resurrect it.
Perhaps Galileo was not so far off when he imagined the universe as a book written by God in the language of mathematics. The universe was software written by programmers in the binary language of code.
Perhaps the larger appeal of Bostrom’s argument was that it was anthropocentric. It allowed us to believe once again that we were at the center of things, and that our lives had purpose and meaning in the larger scheme of the universe. This was essentially the point made by the Harvard theoretical physicist Lisa Randall when asked whether Bostrom’s theory was viable. It requires, she said, “a lot of hubris to think we would be what ended up being simulated.”
Goff pointed out recently that if IIT is correct, then social connectivity is a serious existential threat. Assuming that the internet reaches a point where its information is more highly integrated than that of the human brain, it would become conscious, while all our individual human brains would become absorbed into the collective mind. “Brains would cease to be conscious in their own right,” Goff writes, “and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.”
If we are no longer permitted to ask why, Clancy argues, “we will be forced to accept the decisions of our algorithms blindly, like Job accepting his punishment.”
headlines in 2017, during the trial of Eric Loomis, a thirty-four-year-old man from Wisconsin whose prison sentence—six years, for evading the police—was partly informed by COMPAS, a predictive model that determines a defendant’s likelihood of recidivism. During his trial the judge told Loomis that the COMPAS assessment had identified him as a high risk to the community. Naturally Loomis asked to know what criteria were used to determine his sentence, but he was informed that he could not challenge the algorithm’s decision. His case eventually reached the Wisconsin Supreme Court, which ruled against him.
While these opaque technologies have come under fire from civil rights organizations, their defenders frequently point out that human judgment is no more transparent. Ask a judge how she came to a sentencing decision, and her answer will be no more reliable than that of an algorithm. “The human brain is also a black box,” said Richard Berk, a professor of criminology and statistics at the University of Pennsylvania. The same conclusion was advanced in a paper sponsored by the Rand Corporation, which noted, “The thought processes of judges is (like COMPAS) a black box that provides inconsistent error-prone decisions.”
“As a matter of fact, we’ve always lived in a world that we only partly understood,” he writes. “Contrary to what we like to believe today, humans quite easily fall into obeying others, and any sufficiently advanced AI is indistinguishable from God. People won’t necessarily mind taking their marching orders from some vast oracular computer.”
Perhaps this is why the crisis of subjectivity that one finds in Calvin, in Descartes, and in Kant continues to haunt our debates about how to interpret quantum physics, which continually return to the chasm that exists between the subject and the world, and our theories of mind, which still cannot prove that our most immediate sensory experiences are real. The echoes of this doubt ring most loudly and persistently in conversations about emerging technologies, instruments that are designed to extend beyond our earthbound reason and restore our broken connection to transcendent truth. AI began with the desire to forge a god. It is not coincidental that the deity we have created resembles, uncannily, the one who got us into this problem in the first place.
As black-box technologies become more widespread, there has been no shortage of demands for increased transparency. In 2016 the European Union’s General Data Protection Regulation included in its stipulations the “right to an explanation,” declaring that citizens have a right to know the reason behind automated decisions that involve them. While no similar measure exists in the United States, the tech industry has become more amenable to paying lip service to “transparency” and “explainability,” if only to build consumer trust. Some companies claim they have developed methods that work in reverse to suss out data points that may have triggered the machine’s decisions—though these explanations are at best intelligent guesses. (Sam Ritchie, a former software engineer at Stripe, prefers the term “narratives,” since the explanations are not a step-by-step breakdown of the algorithm’s decision-making process but a hypothesis about reasoning tactics it may have used.) In some cases the explanations come from an entirely different system trained to generate responses that are meant to account convincingly, in semantic terms, for decisions the original machine made, when in truth the two systems are entirely autonomous and unrelated. These misleading explanations end up merely contributing another layer of opacity. “The problem is now exacerbated,” writes the critic Kathrin Passig, “because even the existence of a lack of explanation is concealed.”
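The “narratives” Ritchie describes are typically produced by probing the black box from the outside, for instance by perturbing each input and watching how the prediction moves. A minimal sketch of that idea, with a hypothetical model and feature names rather than any deployed explainability tool:

```python
# Post-hoc "explanation" by perturbation: wiggle each input feature and see how
# much the black box's output moves. The result is a plausible story about what
# mattered, not a trace of the model's actual reasoning. Hypothetical model and
# feature names; not any specific deployed explainability product.
import numpy as np

def black_box(x):
    # Stand-in for an opaque model; its weights are unknown to the "explainer".
    w = np.array([0.9, 0.05, 0.4])
    return 1 / (1 + np.exp(-(x @ w - 1.0)))

def perturbation_attribution(predict, x, delta=0.1):
    base = predict(x)
    scores = {}
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += delta
        scores[i] = (predict(perturbed) - base) / delta   # local sensitivity
    return base, scores

features = ["prior_arrests", "age", "employment_gap"]     # hypothetical names
x = np.array([2.0, 0.5, 1.0])

base, scores = perturbation_attribution(black_box, x)
for i, s in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
    print(f"{features[i]}: sensitivity {s:+.3f}")
```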
As Yuval Noah Harari points out, we already defer to machine wisdom to recommend books and restaurants and potential dates. It’s possible that once corporations realize their earnest ambition to know the customer better than she knows herself, we will accept recommendations on whom to marry, what career to pursue, whom to vote for. Harari argues that this would officially mark the end of liberal humanism, which depends on the assumption that an individual knows what is best for herself and can make rational decisions about her best interests.
The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to distinguish between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines.
Robert A. Burton, a prominent neurologist, argued that Trump is so good at understanding algorithms because he is himself an algorithm. In a 2017 op-ed for the New York Times, Burton claimed that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”
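The mechanism Burton invokes, keeping a tally of which moves paid off and increasingly repeating the winners, is essentially a multi-armed bandit. A generic epsilon-greedy sketch, with hypothetical “moves” and payoffs and nothing resembling AlphaGo’s actual machinery:

```python
# Trial-and-error learning with positive feedback: try moves, tally which ones
# paid off, and increasingly repeat the winners. A generic epsilon-greedy bandit,
# not AlphaGo's actual algorithm (which uses deep networks and tree search).
import random

moves = ["rally", "tweet", "attack", "pivot"]                        # hypothetical moves
payoff = {"rally": 0.3, "tweet": 0.6, "attack": 0.5, "pivot": 0.2}   # unknown to the learner

counts = {m: 0 for m in moves}
value = {m: 0.0 for m in moves}     # running estimate of each move's payoff

for step in range(1000):
    if random.random() < 0.1:                        # explore occasionally
        move = random.choice(moves)
    else:                                            # otherwise exploit the best tally so far
        move = max(moves, key=lambda m: value[m])
    reward = 1 if random.random() < payoff[move] else 0
    counts[move] += 1
    value[move] += (reward - value[move]) / counts[move]   # incremental average

print(max(moves, key=lambda m: value[m]))   # most likely "tweet", the highest-payoff move
```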
The destruction we’d wrought was undeniable and growing more dire all the time. And yet I did not know what I feared more, the continuation of human error or the day when the system became so efficient and autonomous that human error—and humans themselves—became entirely irrelevant.
If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” which is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.
Free Will, Determinism and Human Agency
Writers often speak of such experiences with wonder and awe, but I’ve always been wary of them. I wonder whether it is a good thing for an artist, or any kind of maker, to be so porous, even if the intervening god is nothing more than the laws of physics or the workings of her unconscious. If what emerges from such efforts comes, as Rose puts it, “from regions beyond your control,” then at what point does the finished product transcend your wishes? At what point do you, the creator, lose control?
Then there was Christof Koch, one of the world’s leading neuroscientists, who devoted an entire chapter of his memoir to the question of free will, which he concluded did not exist. Later on, in the final chapter, he acknowledged that he became preoccupied with this question soon after leaving his wife, a woman who, he noted, had sacrificed her own career to raise their children, allowing him to maintain a charmed life of travel and professional success. It was soon after the children left for college that their marriage became strained. He became possessed with strange emotions he was “unable to master” and became captive to “the power of the unconscious.” (The book makes no explicit mention of an affair, though it is not difficult to read between the lines.) His quest to understand free will, he wrote, was an attempt “to come to terms with my actions.” “What I took from my reading is that I am less free than I feel I am. Myriad events and predispositions influence me.”
This is how she explained the dilemma: She had chosen to take the money, based on particular things that were happening in her life. She had needed the money for drugs, and she had used the money to buy drugs. And hundreds of other thieves across the country had done the same, believing their actions to be their own. But once you looked at the whole picture, she said, she was not an individual but a member of a data set whose actions could be anticipated with such precision that the corporation had already budgeted the money it knew she would steal.
It’s true, as my friend pointed out, that the accuracy of these predictions suggests—at least intuitively—that human behavior is deterministic, that the decisions we believe to be spontaneous or freely chosen are merely the end of a long and rigid causal chain of events. Arguments for determinism frequently circle back to the question of prediction, and in some cases conjure some predictive agent. The nineteenth-century scholar Pierre-Simon Laplace speculated that if there were an intellect that knew the current state of every atom in the universe, it could predict any future event.
It was not even possible to know whether the doubts themselves were fated or freely chosen. “Daily experience,” Calvin writes in his Institutes of the Christian Religion, “compels you to realize that your mind is guided by God’s prompting rather than by your own freedom to choose.” The doctrine eradicated not only free will but any coherent sense of self. To concede that one’s mind is controlled by God is to become a machine.
Imagine you were creating a world, and a historical plan with the goal of making men happy in the end, but that in order to do so it was necessary to torture just one child. Would you consent to this bargain?
Scientific Understanding and Its Limits
The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws.
If you are walking through the woods and catch a glimpse of a large dark mass, guessing that it’s a bear comes with a better survival payoff than guessing that it’s a boulder. Even safer to assume it’s another person, who could be more dangerous—particularly if wielding weapons. Things that are animate are more important to our survival than things that are inanimate, and other humans are the most important of all. Thus natural selection rewards those who, when confronted with an uncertain object, “bet high,” guessing that the object is not only alive but human.
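The evolutionary logic here is just an asymmetry in expected payoffs. A back-of-the-envelope comparison, with invented numbers, makes the point:

```python
# Toy expected-value comparison behind "betting high." The payoffs and
# probabilities are made up for illustration only.
p_bear = 0.05                  # chance the dark mass really is a bear

cost_false_alarm = -1          # wasted caution when it's just a boulder
cost_missed_bear = -100        # failing to flee an actual bear
gain_correct_flee = 0          # fleeing a real bear: you survive, no bonus

# Strategy 1: always assume "bear" and retreat.
ev_bet_high = p_bear * gain_correct_flee + (1 - p_bear) * cost_false_alarm

# Strategy 2: always assume "boulder" and keep walking.
ev_bet_low = p_bear * cost_missed_bear + (1 - p_bear) * 0

print(ev_bet_high, ev_bet_low)   # -0.95 vs -5.0: over-detecting agents wins
```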
The worst thing that science could do was to take up the mantle of reenchantment, presenting itself as a new form of revelation, or what he called “academic prophecy.” In the lecture rooms and the laboratory, the only value that should hold is intellectual integrity.
When organized into a hive, bees were capable of remarkably intelligent collective behavior that transcended their individual actions. Among the swarm there is no leader, no centralized hub, and yet somehow the bees are able to work together such that the system as a whole is capable of “self-organization.” When temperatures begin dropping in the fall, for instance, the bees at the center of the hive cluster closer together to create a core of warmth that regulates the temperature of the hive. The individual bees are not acting consciously, but the system as a whole appears, to an external observer, remarkably intelligent and deliberate.
It quickly became evident that we were all talking about the same thing—emergence: the idea that new structural properties and patterns can appear spontaneously in complex adaptive systems that are not present in their individual parts.
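A toy simulation can make the bee example concrete: two hundred “bees” on a line, each following a single local rule, end up densely clumped even though no individual is aiming for a cluster. (The rule and the numbers are invented for illustration; this is not a model of real hive thermoregulation.)

```python
import random

# Minimal sketch of self-organization: each "bee" follows one local rule --
# drift a little toward the average position of nearby bees (shared warmth).
# No bee plans the cluster, yet tight clumps emerge from the local rule.
random.seed(1)
bees = [random.uniform(0, 100) for _ in range(200)]

def crowded(bees, radius=2.0, min_neighbors=20):
    """Fraction of bees that currently have many close neighbors."""
    return sum(
        1 for b in bees
        if sum(abs(b - o) < radius for o in bees) > min_neighbors
    ) / len(bees)

print("crowded before:", crowded(bees))
for _ in range(300):
    next_bees = []
    for b in bees:
        neighbors = [o for o in bees if abs(o - b) < 10]  # local sensing only
        target = sum(neighbors) / len(neighbors)          # includes itself
        next_bees.append(b + 0.2 * (target - b) + random.uniform(-0.1, 0.1))
    bees = next_bees
print("crowded after:", crowded(bees))   # the fraction jumps once clumps form
```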
the idea that it was time to finally abandon modern rationalism and its privileging of the human subject; that human exceptionalism was a kind of cancer that had led to our current environmental crisis.
The vitalists insisted that an organism was more than the totality of its parts—that there must exist, in addition to its physical structure, some “living principle,” or élan vital. It was a compelling theory in part because it was intuitive. A machine is always just a machine, but a dead animal clearly lacks something—life, warmth—that once animated its living form, even though all the material parts remain in place. Vitalists hypothesized that this principle of life was perhaps ether or electricity.
Such aspirations necessarily require expanding the definitions of terms that are usually understood more narrowly. If “intelligence” means abstract thought, then it would be foolish to think that plants are engaging in it. But if it means merely the ability to solve problems or adapt to a particular environment, then it’s difficult to say that plants are not capable of intelligence.
Just as neurobiologists can explain the correlations between the brain and its functions—the “how”—but not why these correlations are accompanied by subjective experience, so quantum physics is very good at predicting the behavior of particles without knowing anything about what this behavior ultimately means about the world at its most fundamental level.
Math was supposedly a language we invented, and yet many of the laws of physics were first proposed as mathematical theories and only later confirmed through empirical observation, as though there were some odd correspondence between the patterns of the mind and the patterns of the world. Then there was the problem of fine-tuning—the fact that the universe, the more we probe it, appears to be perfectly adjusted to the necessary conditions for life. If the force of gravity were only slightly lower than it is, stars would not have formed, and if it were any higher, they would have burned up too fast. The same observation has been made about the cosmological constant, the density parameter, the strong and weak nuclear force. In some cases the parameters are mind-bogglingly exact. In order for galaxies to form, the density of dark energy must fall within a minuscule range, one that involves 120 decimal places.
Quantum physics is highly prone to reenchantment narratives, particularly in the annals of popular science. The Tao of Physics, a 1975 book that explored parallels between quantum mechanics and eastern mysticism, is often cited as the textbook example of “quantum woo,” a trend that continues to flourish each time Deepak Chopra appears on a panel with theoretical physicists or a science fiction film uses quantum entanglement as a metaphor for empathy and connection. Despite the fact that the field overlaps in significant ways with both consciousness studies and information theory, I’ve often gone out of my way to avoid this area of the debate, willing myself not to click on articles declaring that the universe is a hologram or that matter itself is enminded, so eager am I to avoid regressing into problems that once untethered my most basic assumptions about reality, and in one instance led me to the very outer limits of sanity.
Of course, it was very fortunate that the mass happened to be so low, he said, because if it were higher, atoms would never have had the chance to form and none of us would be here right now, drinking wine in the glorious summer sun. We appeared to live in a very lucky universe, he said, a universe that was abnormally hospitable to life. The odds were too much in our favor. There had to be something else going on, something we didn’t yet understand.
One could still object that it was a fantastic coincidence that we found ourselves in one of the few universes capable of supporting life, but the objection was tautological. Only universes that did in fact have these conditions could have produced humans capable of having such a thought.
The physicist drew in a sharp breath. I saw that I’d hit a nerve. This was a common criticism of theoretical physics, he said. In fact people were currently trying to cut funding for projects like the Large Hadron Collider because they believed these questions were not scientific but speculative. But these were things that could be tested empirically. The technologies to do so didn’t presently exist, but they would eventually.
As the leftist collective Tiqqun notes in The Cybernetic Hypothesis, the disruptions caused by quantum physics, as well as those in mathematics spurred by Gödel’s incompleteness theorem (which demonstrated that mathematics contains logically true statements that cannot be proved), led to the widespread belief around the middle of the twentieth century that all sciences were “doomed to ‘incompleteness.’”
Seth Lloyd, an MIT professor who specializes in quantum information, insists that the universe is not like a computer but is in fact a computer. “The universe is a physical system that contains and processes information in a systematic fashion,” he argues, “and that can do everything a computer can do.” Proponents of this view often point out that recent observational data seems to confirm it. Space-time, it turns out, is not smooth and continuous, as Einstein’s general relativity theory assumed, but more like a grid made up of minuscule bits—tiny grains of information that are not unlike the pixels of an enormous screen. Although we experience the world in three dimensions, it seems increasingly likely that all the information in the universe arises from a two-dimensional field, much like the way holograms work, or 3-D films.
Couldn’t the whole canon of quantum weirdness be explained by this logic? Software programs are never perfect. Programmers cut corners for efficiency—they are working, after all, with finite computing power; even the most detailed systems contain areas that are fuzzy, not fully sketched out. Maybe quantum indeterminacy simply reveals that we’ve reached the limits of the interface. The philosopher Slavoj Žižek once made a joke to this effect. Perhaps, he mused, God got a little lazy when he was creating the universe, like the video game programmer who doesn’t bother to meticulously work out the interior of a house that the player is not meant to enter. “He stopped at a subatomic level,” he said, “because he thought humans would be too stupid to progress so far.”
But if neither of these possibilities holds true, then we are almost certainly living in a simulation.
not that we should reject all metaphors, only that we should recognize them for what they are: crude attempts to elucidate concepts that are still beyond our understanding.
It is well established that chronic substance abuse can lead to cognitive lapses that are symptomatically indistinguishable from psychosis. The neurophysiological effects of alcoholism—depressed nerve centers, thiamine depletion—are the same conditions one finds in the brains of psychiatric patients.
I want to say that theories like Bostrom’s are intrinsically untethering—so much so that even now I cannot consider them in any serious way without beginning to question the very foundations of reality.
We keep trying to reclaim the Archimedean point, hoping that science will allow us to transcend the prison of our perception and see the world objectively. But the world that science reveals is so alien and bizarre that whenever we try to look beyond our human vantage point, we are confronted with our own reflection. “It is really as though we were in the hands of an evil spirit,” Arendt writes, alluding to Descartes’s thought experiment, “who mocks us and frustrates our thirst for knowledge, so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”
As the computer scientist Jaron Lanier pointed out in a response to the article, some folk remedies work despite the fact that no one can explain why. But this is why folk remedies are not considered science. “Science is about understanding,” he wrote.
Information, Data and Algorithmic Society
Claude Shannon, the father of information theory, had defined information as “the resolution of uncertainty,” which seemed to mirror the way quantum systems existed as probabilities that collapsed into one of two states.
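Shannon’s definition can be stated in a few lines: the information carried by an observation is the entropy, the amount of uncertainty, that it resolves. A minimal illustration:

```python
import math

# Shannon's measure: information is the uncertainty an observation resolves.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))     # 1.0 bit: a fair coin flip
print(entropy([0.9, 0.1]))     # ~0.47 bits: the outcome was mostly expected
print(entropy([0.25] * 4))     # 2.0 bits: one of four equally likely states
```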
In his book Homo Deus, Yuval Noah Harari makes virtually the same analogy: “Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms.”
The tranquilizing balm of “metadata” is that our information is equally anonymized and impersonal to those who profit from it. Nobody is reading the content of your emails, we’re told, just whom you’re emailing and how often. They’re not analyzing your conversations, just noting the tone of your voice. Your name, your face, and your skin color are not tracked, only your zip code. This is not of course out of a respect for privacy but rather an outgrowth of the philosophy of selfhood that has characterized information technologies since the early days of cybernetics—the notion that a person can be described purely in terms of pattern and probabilities, without any concern for interiority.
This metadata—the shell of human experience—becomes part of a feedback loop that then actively modifies real behavior. Because predictive models rely on past behavior and decisions—not just of the individual but of others who share the same demographics—people become trapped within the mirror of their digital reflection, a process that Google researcher Vyacheslav Polonski calls “algorithmic determinism.” Law enforcement algorithms like PredPol, which designate in red boxes particular neighborhoods where crime is likely to occur, gather their predictions from historical crime data, which means that they often send officers to precisely the same poor neighborhoods they patrolled when they were guided by their intuition alone. The difference is that these decisions, now bolstered by the authority of empirical evidence, engender confirmation bias in a way that intuition does not.
But they often zero in on other information—zip codes, income, previous encounters with police—that is freighted with historic inequality. These machine-made decisions, then, end up reinforcing existing social inequalities, creating a feedback loop that makes it even more difficult to transcend our culture’s long history of structural racism and human prejudice.
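A toy loop (invented numbers, not PredPol’s actual algorithm) shows how this kind of feedback hardens an early disparity into “empirical evidence”: patrols follow past records, and only patrolled crime gets recorded.

```python
# Two neighborhoods with identical real crime rates but an uneven history.
underlying_rate = {"north": 10, "south": 10}   # identical real crime rates
recorded = {"north": 12, "south": 8}           # slightly uneven past records

for year in range(10):
    hot = max(recorded, key=recorded.get)      # the red-boxed neighborhood
    patrol_share = {n: (0.8 if n == hot else 0.2) for n in recorded}
    # You can only record the crime you are present to see:
    for n in recorded:
        recorded[n] += underlying_rate[n] * patrol_share[n]

print(recorded)  # north's recorded total pulls far ahead of south's
```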
The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do.
When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content.
Historical Patterns and Philosophy of Progress
The mechanistic philosophy of the seventeenth century divorced not only body from mind but also matter from meaning.
the modern standpoint is that time is going somewhere, that we are gaining knowledge and understanding of the world, that our inventions and discoveries build on one another in a cumulative fashion. But then why do the same problems—and even the same metaphors—keep appearing century after century in new form?
He knew that it is not the grain that appears before all others that grows longest and bears the most abundant crop; he was even convinced that a doctrine too far advanced above the general level of its time would be condemned to temporary failure, that it would have to be buried, perhaps for a long time, but that in time it was also certain to be resurrected.
The modern world was created in less than 10,000 years, and in the past 200 years alone it had undergone more changes than in all the preceding millennia combined.
To this day many “new” ideas are merely attempts to answer questions that we have inherited from earlier periods of history, questions that have lost their specific context in medieval Christianity as they’ve made the leap from one century to the next, traveling from theology to philosophy to science and technology. In many cases, he argued, the historical questions lurking in modern projects are not so much stated as implied. We are continually returning to the site of the crime, though we do so blindly, unable to recognize or identify problems that seem only vaguely familiar to us. Failing to understand this history, we are bound to repeat the solutions and conclusions that proved unsatisfying in the past.
Dostoevsky was mostly interested in the philosophical implications of this discovery—the revelation that geometric axioms are not a priori transcendental forms of the mind but are so alien and paradoxical to human perception that they cannot be visualized, or even imagined.
One of the more contentious arguments against economic shutdown—though discussion was limited to the academic corners of the internet—was that of the Italian philosopher Giorgio Agamben, who concluded that the shutdown proved that “our society no longer believes in anything but bare life.” By “bare life” he meant brute biological survival, apart from any of the ethical, humanistic, and social concerns that make life actually worth living, though it was this phrase—“bare life”—that was quoted again and again by critics, often out of context, until it became shorthand for the ruthless world order that privileged economies over the individual souls they were built to serve.
Arendt too belonged to this generation, and she hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She was adamant that this did not entail a return to the pre-Copernican view in which we regarded ourselves as the center of the universe and the pinnacle of creation. Instead she advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.
Human Experience and Subjective Reality
But we are so easily convinced! How can we trust our subjective response to other minds when we ourselves have been “hardwired” by evolution to see life everywhere we look?
Without that narrative, my life lost its mooring. During those years after Bible school, I lived alone in an apartment across the street from a power plant, spending what little money I made on alcohol and pills.
For the length of time that the Kurzweil book was in my possession, I carried it with me everywhere, in the bottom of my backpack. It would be no exaggeration to say that I came to grant the book itself, with its strange iridescent cover, a totemic power. It seemed to me a secret gospel, one of those ancient texts devoted to hermetic mysteries that we had been dissuaded from reading as students of theology. In hindsight, what appealed to me most was not the promise of superpowers, or even the possibility of immortality. It was the notion that my interior life was somehow real—that the purely subjective experience that I had once believed to be my soul was not some ghostly illusion but a process that contained an essential and irreducible identity.
He nodded as I spoke, as though he were already well aware of this. People find it very difficult, he said, to accept the entirely random and inconsequential nature of our existence. It was not surprising to him at all that people found this explanation more attractive than the alternative.
The human condition, Kierkegaard writes, is defined by “intense subjectivity.” We are irrational creatures who cannot adequately understand our own actions or explain them in terms of rational principles.
Over the following days of the conference, I continued to experience echoes, strange coincidences, like the one that led me to the cemetery where Bohr and Kierkegaard were buried. I would read something in one of the books I’d brought with me—a new theory, a thinker whose name I’d never encountered—and then someone at the conference would mention the same name or the same idea only hours later. I could not help feeling that such coincidences were imbued with meaning—signs from the universe—though I knew this was unlikely, particularly when considered from a statistical standpoint. (How many words, images, and names did I encounter in a given day? It never occurred to me to consider all the ones that were not repeated.) Our brains have evolved to detect patterns and attribute significance to events that are entirely random, imagining signal where there is mostly noise. This tendency is probably hypertrophied in writers, who are constantly seeing the world in terms of narrative.
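The parenthetical is the crux, and it is easy to check. A quick simulation with invented numbers, assuming a few hundred names or ideas encountered daily out of a large pool, suggests that at least one next-day repeat is closer to the rule than the exception:

```python
import random

# Back-of-the-envelope check on "meaningful" coincidences (all numbers
# invented): if each day surfaces a few hundred names or ideas drawn from
# a large pool, how often does at least one of them repeat the next day?
POOL = 50_000      # distinct names, titles, ideas one might plausibly meet
PER_DAY = 300      # how many of them cross your path in a day

def repeat_happens():
    today = set(random.sample(range(POOL), PER_DAY))
    tomorrow = set(random.sample(range(POOL), PER_DAY))
    return bool(today & tomorrow)

trials = 2_000
rate = sum(repeat_happens() for _ in range(trials)) / trials
print(f"days with at least one repeat: {rate:.0%}")  # typically over 80%
```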
It might be relevant here to point out that I was existing during those years in a version of reality that was already heavily mediated. My drinking had evolved from escapism to dependence, and factoring in the multitude of pills I took each day to manage withdrawal, there were diminishingly few hours that I was truly sober. My life began to take on the sharp and irregular plotline of a Kafka novel, an endless series of non sequiturs and suspicious similarities that I was left to interpret, and my interpretations became increasingly delusional and solipsistic, fixated on “glitches” and recurrences and the conviction that certain people in my life were not conscious beings but what is known in gaming terminology as NPCs, or non-player characters. I adopted a different route to the bus stop, going well out of my way to avoid the church and the mechanical Christ. Eventually the paranoia became so bad that I stopped leaving my apartment except to go to work. Then I stopped going to work.
About a week ago, she said, she’d taken this money to the store where she’d done the receipt scam, met with the manager in his office, and explained the situation. He was very nice, she said, very understanding. But in the end he told her he couldn’t take the money. The company apparently lost a certain percentage of its revenue each year to theft, a number that could be predicted with enough accuracy to be budgeted into its annual expenses ahead of time. It was called “shrinkage.” My friend asked if she could donate the money, but of course the store did not take general donations. The manager said she could give the money to one of the charities they partnered with, but it would likely be more efficient to send the money to them directly. She said she would consider this, but after she left, the whole situation began to unsettle her. She had gone to the store to redress the harm she’d caused, but the truth was that she had caused no harm at all. The money she had stolen was in a way already accounted for. There was no deficit to pay back.
There was, despite everything, something strangely miraculous about that spring. It was nothing more concrete than a feeling, one that was difficult to put into words and that surfaced only briefly, in the pauses between the rising waves of panic. It had something to do with the quiet that had descended over the world: the emptiness of streets once teeming with traffic, the darkened windows of stores and restaurants, a stillness that seemed to reside in the air itself, which was said to have improved in quality from the reduction of fossil fuels. It was a sense of wonder, I suppose, at the fact that the entire system—all the intertwining networks and supply chains and global flows of capital—had been brought to a halt by the simple imperative to preserve human life. We had been led to believe that it couldn’t be done, but when the time came, it somehow happened. We just pulled the plug. It was an affirmation that life was not a means to an end.
We lived in fragile bodies that would inevitably die, and these images would one day be all that remained of us. It was a period during which everything seemed to be happening through the lens of historical distance, as though I were witnessing the unfolding present as it would be remembered by the future.