The Big Picture explores life's deepest questions through the lens of modern science.
The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.
Physics is, by far, the simplest science. It doesn’t seem that way, because we know so much about it, and the required knowledge often seems esoteric and technical. But it is blessed by this amazing feature: we can very often make ludicrous simplifications—frictionless surfaces, perfectly spherical bodies—ignoring all manner of ancillary effects, and nevertheless get results that are unreasonably good. For most interesting problems in other sciences, from biology to psychology to economics, if you modeled one tiny aspect of a system while pretending all the others didn’t exist, you would just end up getting nonsense. (Which doesn’t stop people from trying.)
He realized that there was a simple answer to the question “What determines what will happen next?” And the answer is “The state of the universe right now.”
The world, according to classical physics, is not fundamentally teleological. What happens next is not influenced by any future goals or final causes toward which it might be working. Nor is it fundamentally historical; to know the future—in principle—requires only precise knowledge of the present moment, not any additional knowledge of the past. Indeed, the entirety of both the past and future history is utterly determined by the present.
Laplace’s Demon is a thought experiment, not one we’re going to reproduce in the lab. Realistically, there never will be and never can be an intelligence vast and knowledgeable enough to predict the future of the universe from its present state. If you sit down and think about what such a computer would have to be like, you eventually realize it would essentially need to be as big and powerful as the universe itself. To simulate the entire universe with good accuracy, you basically have to be the universe. So our concern here isn’t one of practical engineering; it’s not going to happen.
We know that the quantum state of a system, left alone, evolves in a perfectly deterministic fashion, free even of the rare but annoying examples of non-determinism that we can find in classical mechanics. But when we observe a system, it seems to behave randomly, rather than deterministically.
“Because of the laws of physics and the prior configuration of the universe” isn’t a good answer. Now we’re trying to figure out why the fundamental fabric of reality is one way rather than some other way. The secret here is to accept that such questions may or may not have answers. We have every right to ask them, but we have no right at all to demand an answer that will satisfy us. We have to be open to the possibility that they are brute facts, and that’s just how things are.
There are many possible arrangements of the atoms that give us exactly the same macroscopic appearance. The observable features provide a coarse-graining of the precise state of the system. Given that, Boltzmann suggested that we could identify the entropy of a system with the number of different states that would be macroscopically indistinguishable from the state it is actually in.
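To make Boltzmann's counting idea concrete, here is a tiny toy calculation of my own (not something from the book): treat "how many of N labeled particles sit in the left half of a box" as the macroscopic observable, and count how many microscopic arrangements produce each value.

```python
import math
from itertools import product

# Toy illustration of Boltzmann's counting (my own example, not the book's):
# the "macrostate" is how many of N labeled particles sit in the left half of
# a box; the entropy is the log of the number of microscopic arrangements
# that look the same macroscopically.
N = 10  # kept tiny so we can enumerate every microstate

counts = {}
for arrangement in product("LR", repeat=N):   # every possible microstate
    n_left = arrangement.count("L")           # the macroscopic observable
    counts[n_left] = counts.get(n_left, 0) + 1

for n_left, w in sorted(counts.items()):
    print(f"{n_left:2d} particles on the left: W = {w:4d}, S = ln W = {math.log(w):.2f}")
# The evenly mixed macrostates are realized by far more arrangements,
# so they carry the highest entropy.
```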
When the entropy of a system is as high as it can get, we say that the system is in equilibrium. In equilibrium, time has no arrow.
What Boltzmann successfully explained is why, given the entropy of the universe today, it’s very likely to be higher-entropy tomorrow. The problem is that, because the underlying rules of Newtonian mechanics don’t distinguish between past and future, precisely the same analysis should predict that the entropy was higher yesterday, as well. Nobody thinks the entropy actually was higher in the past, so we have to add something to our picture.
When we think about cause and effect, by contrast, we single out certain events as uniquely responsible for events that come afterward, as “making them happen.” That’s not quite how the laws of physics work; events simply are arranged in a certain order, with no special responsibility attributed to one over any of the others. We can’t pick out one moment, or a particular aspect of any one moment, and identify it as “the cause.” Different moments in time in the history of the universe follow each other, according to some pattern, but no one moment causes any other.
There is an old joke about an experimental result being “confirmed by theory,” in contrast to the conventional view that theories are confirmed or ruled out by experiments. There is a kernel of Bayesian truth to the witticism: a startling claim is more likely to be believed if there is a compelling theoretical explanation ready to hand.
The simulation argument is a little different. Is it possible that you, and everything you’ve ever experienced, are simply a simulation being conducted by a higher level of intelligent being? Sure, it’s possible. It’s not even, strictly speaking, a skeptical hypothesis: there is still a real world, presumably structured according to laws of nature. It’s just one to which we don’t have direct access. If our concern is to understand the rules of the world we do experience, the right attitude is: so what? Even if our world has been constructed by higher-level beings rather than constituting the entirety of reality, by hypothesis it’s all we have access to, and it’s an appropriate subject of study and attempted understanding.
There are several different questions here, which are related to one another but logically distinct. Are the most fine-grained (microscopic, comprehensive) stories the most interesting or important ones? As a research program, is the best way to understand macroscopic phenomena to first understand microscopic phenomena, and then derive the emergent description? Is there something we learn by studying the emergent level that we could not understand by studying the microscopic level, even if we were as smart as Laplace’s Demon? Is behavior at the macroscopic level incompatible—literally inconsistent with—how we would expect the system to behave if we knew only the microscopic rules?
There are probably more particles yet to be found. They just won’t be relevant to our everyday world. The fact that we haven’t yet found such particles tells us a great deal about what properties they must have; that’s the power of quantum field theory. Any particle that we haven’t yet detected must have one of the following features:
- It could be so very weakly interacting with ordinary matter that it is almost never produced; or
- It could be extremely massive, so that it takes collisions at energies even higher than what our best accelerators can achieve in order to make it; or
- It could be extremely short-lived, so that it gets made but then almost immediately decays away into other particles.
Effective theories are extremely useful in a wide variety of situations. When we talked about describing the air as a gas rather than as a collection of molecules, we were really using an effective theory, since the motions of the individual molecules didn’t concern us. Think about the Earth moving around the sun. The Earth contains approximately 10⁵⁰ different atoms. It should be nearly impossible to describe how something so enormously complex moves through space—how could we conceivably keep track of all of those atoms? The answer is that we don’t have to: we have to keep track of only the single quantity we are interested in, the location of the Earth’s center of mass. Whenever we talk about the motion of big macroscopic objects, we’re almost always implicitly using an effective theory of their center-of-mass motion.
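A small numerical sketch of that last point, using a toy system of my own with made-up masses and spring constants: three mutually interacting particles jiggle in a complicated way, but their center of mass follows the one-point effective description exactly, because the internal forces cancel.

```python
import numpy as np

# Toy check (my own example, not the book's): particles with internal spring
# forces plus uniform gravity. The "effective theory" tracks only one point.
np.random.seed(0)
m = np.array([1.0, 2.0, 3.0])             # masses
x = np.random.randn(3, 2)                 # positions in 2D
v = np.random.randn(3, 2)                 # velocities
g = np.array([0.0, -9.8])                 # uniform external gravity
k, dt, steps = 50.0, 1e-3, 2000

# the effective theory: one point, one external force, no internal structure
x_cm = (m[:, None] * x).sum(axis=0) / m.sum()
v_cm = (m[:, None] * v).sum(axis=0) / m.sum()

for _ in range(steps):
    f = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            if i != j:
                f[i] += -k * (x[i] - x[j])    # internal forces: cancel overall
    v += (f / m[:, None] + g) * dt
    x += v * dt
    v_cm += g * dt                            # the effective point ignores the atoms
    x_cm += v_cm * dt

true_cm = (m[:, None] * x).sum(axis=0) / m.sum()
print(np.allclose(true_cm, x_cm))             # True: the microscopic details never mattered
```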
This simple example highlights important aspects of how effective theories work. For one thing, notice that the actual entities we’re talking about—the ontology of the theory—can be completely different in the effective theory from that of a more comprehensive microscopic theory. The microscopic theory has quarks; the effective theory has protons and neutrons. It’s an example of emergence: the vocabulary we use to talk about fluids is completely different from that of molecules, even though they can both refer to the same physical system.
Finally, there is the manifest loophole that describing the world in terms of physics alone might not be good enough. There might be more to reality than the physical world. We’ll leave serious discussion of that possibility for chapter 41.
Likewise, even after another hundred or thousand years of scientific progress, we will still believe in the Core Theory, with its fields and their interactions. Hopefully by then we’ll be in possession of an even deeper level of understanding, but the Core Theory will never go away. That’s the power of effective theories.
The progress of modern physics and cosmology has sent a fairly unequivocal message: there’s nothing wrong with the universe existing without any external help. Why it exists the particular way it does, rather than some other way, is worth exploring.
In recent years it has been championed by theologian William Lane Craig, who puts it in the form of a syllogism:
1. Whatever begins to exist has a cause.
2. The Universe begins to exist.
3. Therefore, the Universe had a cause.
As we’ve seen, the second premise of the argument may or may not be correct; we simply don’t know, as our current scientific understanding isn’t up to the task. The first premise is false. Talking about “causes” is not the right vocabulary to use when thinking about how the universe works at a deep level. We need to be asking ourselves not whether the universe had a cause but whether having a first moment in time is compatible with the laws of nature.
To the question of whether the universe could possibly exist all by itself, without any external help, science offers an unequivocal answer: sure it could. We don’t yet know the final laws of physics, but there’s nothing we know about how such laws work that suggests the universe needs any help to exist. For questions like this, however, the scientific answer doesn’t always satisfy everyone. “Okay,” they might say, “we understand that there can be a physical theory that describes a self-contained universe, without any external agent bringing it about or sustaining it. But that doesn’t explain why it actually does exist. For that, we have to look outside science.”
According to this line of thought, it doesn’t matter if physicists can cook up self-contained theories in which the cosmos has a first moment of time; those theories must necessarily be incomplete, since they violate this cherished principle. This is perhaps the most egregious example of begging the question in the history of the universe. We are asking whether the universe could come into existence without anything causing it. The response is “No, because nothing comes into existence without being caused.” How do we know that? It can’t be because we have never seen it happen; the universe is different from the various things inside the universe that we have actually experienced in our lives. And it can’t be because we can’t imagine it happening, or because it’s impossible to construct sensible models in which it happens, since both the imagining and the construction of models have manifestly happened.
Our job, in other words, is to move from the first question, “Can the universe simply exist?” (yes, it can) to the second, harder one: “What is the best explanation for the existence of the universe?” The answer is certainly “We don’t know.”
The same goes for phenomena such as astrology. The only fields that could possibly reach from another planet to Earth are gravity and electromagnetism. Gravity, again, is simply too weak to have any effect; the gravitational force caused by Mars on objects on Earth is comparable to that of a single person standing nearby. For electromagnetism the situation is even clearer; any electromagnetic signals from other planets are swamped by more mundane sources.
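The claim about Mars is easy to check with rough numbers (my estimates, not figures quoted in the book):

```python
# Back-of-the-envelope comparison: gravitational acceleration from Mars at
# closest approach versus from an 80 kg person standing a meter away.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_mars = 6.4e23      # kg
d_mars = 5.6e10      # m, roughly Mars at its closest approach to Earth
m_person = 80.0      # kg
d_person = 1.0       # m

a_mars = G * m_mars / d_mars**2
a_person = G * m_person / d_person**2
print(f"Mars:   {a_mars:.1e} m/s^2")     # about 1e-8 m/s^2
print(f"Person: {a_person:.1e} m/s^2")   # about 5e-9 m/s^2, the same ballpark
```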
If the world we see in our experiments is just a tiny part of a much bigger reality, the rest of reality must somehow act upon the world we do see; otherwise it doesn’t matter very much. And if it does act upon us, that implies a necessary alteration in the laws of physics as we understand them. Not only do we have no strong evidence in favor of such alterations; we don’t even have any good proposals for what form they could possibly take.
If there are many ways to rearrange the particles in a system without changing its basic appearance, it’s high-entropy; if there are a relatively small number, it’s low-entropy. The Past Hypothesis says that our observable universe started in a very low-entropy state. From there, the second law is easy to see: as time goes on, the universe goes from being low-entropy to high-entropy, simply because there are more ways that entropy can be high.
Asking that our understanding of human life be compatible with what we know about the underlying physics places some interesting constraints on what life is and how it operates. Knowing the particles and forces of which we are made allows us to conclude with very high confidence that individual lives are finite in scope; our best cosmological theories, while much less certain than the Core Theory, suggest that “life” as a broader concept is also finite. The universe seems likely to reach a state of thermal equilibrium. At that point it won’t be possible for anything living to survive; life relies on increasing entropy, and in equilibrium there’s no more entropy left to generate.
For every one visible photon it receives from the sun, the Earth radiates approximately twenty infrared photons back into space, with approximately one-twentieth of the energy each. The Earth gives back the same amount of energy as it gets, but we increase the entropy of the solar radiation by twenty times before returning it to the universe.
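The factor of twenty comes from the ratio of temperatures; a typical photon's energy scales with the temperature of the body that emitted it. A quick sketch with assumed round numbers:

```python
# Assumed round numbers (mine, not the book's): converting sunlight into
# Earth-temperature infrared multiplies the photon count, and roughly the
# entropy, by T_sun / T_earth, while the total energy returned stays the same.
T_sun = 5800.0     # K, solar surface temperature
T_earth = 290.0    # K, Earth's effective emission temperature
ratio = T_sun / T_earth
print(f"Each visible photon comes back as roughly {ratio:.0f} infrared photons,")
print(f"each carrying about 1/{ratio:.0f} of the energy.")
```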
This idea is sometimes labeled the anthropic principle, and the very mention of it tends to inflame passionate debate between its supporters and detractors. That’s too bad, because the basic concept is very simple, and practically indisputable. If we live in a world where conditions are very different from place to place, then there is a strong selection effect on what we will actually observe about that world: we will only ever find ourselves in a part of the world that allows for us to exist. There are several planets in the solar system, for example, and some of them are much larger than Earth. But nobody thinks it is weird or finely tuned that Earth is where we live; it’s the spot that is most hospitable to life. That’s the anthropic principle in action.
If you’ve spent much time swimming or diving, you know that you can’t see as far underwater as you can in air. The attenuation length—the distance past which light is mostly absorbed by the medium you are looking through—is tens of meters through clear water, while in air it’s practically infinite. (We have no trouble seeing the moon, or distant objects on our horizon.)
Evolution and the Origins of Life
Complex structures can form, not despite the growth of entropy but because entropy is growing. Living organisms can maintain their structural integrity, not despite the second law but because of it.
Schrödinger’s picture of living organisms maintaining their structural integrity by using up free energy is impressively manifested in real-world biology. The sun sends us free energy, in the form of relatively high-energy visible-light photons. These are captured by plants and single-celled organisms that use photosynthesis to create ATP for themselves, as well as sugars and other edible compounds, which in turn store free energy that can be used by animals. This free energy is used to maintain order within the organism, as well as allowing it to move and think and react, all of the things that living beings do that distinguish them from nonliving things. The solar energy we started with is gradually degraded along the way, turning into disordered energy in the form of heat. That energy is ultimately radiated back to the universe as relatively low-energy infrared photons. Long live the second law of thermodynamics.
But in a famous experiment in 1952, Stanley Miller and Harold Urey took a flask full of some simple gases—hydrogen (H₂), water (H₂O), ammonia (NH₃), and methane (CH₄)—and zapped it with sparks. The idea was that these compounds may have been present in the atmosphere of the ancient Earth, and the sparks would simulate the effects of lightning. With a fairly simple setup, and after running for just a week without any special tinkering, Miller and Urey found that their experiment had produced a number of different amino acids, organic compounds that play a crucial role in the chemistry of life. Today we don’t think that Miller and Urey were correctly modeling conditions on the early Earth. Their experiment nevertheless demonstrated a crucial biochemical fact: it’s not that hard to make amino acids. To make life, the next step would be to assemble proteins, which do the heavy lifting in terms of biological function—they move things around inside the body, catalyze useful reactions, and help cells communicate with one another.
Crystals can grow by adding new atoms, and can then divide by the simple expedient of breaking in two. Each of the offspring will have inherited the structure of its parent crystal. That’s still not life, though we’re getting closer. While the basic crystalline structure can be inherited, variations in that structure—random mutations—cannot. Variations are certainly possible; real crystals are often riddled with impurities, or suffer from defects where the structure doesn’t follow the dominant pattern. But there’s no way to pass down knowledge of these variations to subsequent generations. What we want is a configuration that is crystal-like (in that there is a fixed structure that can be reproduced) but more elaborate than a simple repeating pattern.
There is no reason to think that we won’t be able to figure out how life started. No serious scientist working on the origin of life, even those who are personally religious, points to some particular process and says, “Here is the step where we need to invoke the presence of a nonphysical life-force, or some element of supernatural intervention.” There is a strong conviction that understanding abiogenesis is a matter of solving puzzles within the known laws of nature, not calling for help from outside of them.
The chance that higher life forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junkyard might assemble a Boeing 747 from the materials therein.
His basic setup was—and is, as the experiment is still ongoing—a simple one. He started with twelve flasks containing growth medium: a liquid with a specific mixture of chemicals, including a bit of glucose to provide energy. He then introduced a population of identical E. coli bacteria into each of them. Every day, each flask goes from a few million to a few hundred million cells. One percent of the surviving bacteria are extracted and moved to new flasks with the same growth medium as before. The remaining bacteria are mostly disposed of, although every so often a sample is frozen for future examination, creating an experimental “fossil record.” (Unlike human beings, live bacteria can easily be frozen and revived at a later date using current technology.) The total population growth amounts to about six and a half generations in a day; the limiting resource is nutrition, not time (it takes less than an hour for a cell to divide). As of late 2015, this added up to more than 60,000 generations of bacteria—enough for some interesting evolutionary wrinkles to develop. Confined to this extremely specific and stable environment, the evolved bacteria are by now quite well adapted to their surroundings. They are now over twice the size of the individuals in the original population, and they reproduce more rapidly than before. They have become very good at metabolizing glucose, while generally decaying in their ability to thrive in more diverse nutrient environments. Most impressively, there have been qualitative as well as quantitative changes in the E. coli. Among the ingredients in the initial growth medium was citrate, an acid made of carbon, hydrogen, and oxygen. The original bacteria had no ability to use this compound. But around generation 31,000, Lenski and his collaborators noticed that the population in one particular flask had grown larger than the others. Looking more closely, they realized that some of the bacteria in that flask had developed the ability to metabolize citrate, rather than just glucose.
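The generation arithmetic is easy to reproduce with illustrative round numbers: a hundredfold daily growth is about log2(100) ≈ 6.6 doublings, and 60,000 generations at that pace is roughly a quarter century of daily transfers.

```python
import math

# The arithmetic behind those numbers (illustrative round values, not data).
start, end = 5e6, 5e8                    # "a few million" to "a few hundred million" cells
gens_per_day = math.log2(end / start)    # each generation doubles the population
print(f"{gens_per_day:.1f} generations per day")        # ~6.6, i.e. "about six and a half"

total_generations = 60000
years = total_generations / gens_per_day / 365
print(f"roughly {years:.0f} years of daily transfers")  # ~25 years of running the experiment
```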
We sometimes think of natural selection as “survival of the fittest.” But even before evolution in Darwin’s sense officially kicked in, there was a competition of sorts going on for the available free energy.
Computer scientists have recently shown that a simplified model of evolution (allowing for mixing via sexual reproduction, but not for mutations) is mathematically equivalent to an algorithm devised by game theorists years ago, known as multiplicative weight updates. Good ideas tend to show up in a variety of places.
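For readers who haven't met it, here is a minimal generic version of multiplicative weight updates (a textbook-style sketch, not the specific evolutionary model analyzed in that work): each "expert" carries a weight that is multiplied up or down according to how well it performs.

```python
import random

# Generic multiplicative weight updates: good experts quickly dominate the mix.
def multiplicative_weights(experts, rounds, payoff, eta=0.1):
    weights = {e: 1.0 for e in experts}
    for t in range(rounds):
        for e in experts:
            weights[e] *= (1 + eta * payoff(e, t))   # payoff in [-1, 1]
    total = sum(weights.values())
    return {e: w / total for e, w in weights.items()}

# Toy usage: expert "B" is right 70% of the time, expert "A" only 40%.
random.seed(1)
accuracy = {"A": 0.4, "B": 0.7}
payoff = lambda e, t: 1 if random.random() < accuracy[e] else -1
print(multiplicative_weights(["A", "B"], rounds=200, payoff=payoff))
# "B" ends up holding nearly all of the weight.
```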
The search procedure employed by evolution is so efficient that real human computer programmers often use an analogous process to develop their own strategies. This is a technique known as genetic algorithms. As with genomes, we can imagine the set of all possible algorithms of a certain length, at least within a fixed computer language. There will be a large number of them, and in principle we want to know which one is the best at solving some specified problem.
Yes, it can. Evolution easily found much better solutions than design. After only 250 generations, the computer was doing as well as the benchmark strategy, and after 1,000 generations, it had reached almost 97 percent of a perfect score.
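To give a flavor of how genetic algorithms work, here is a bare-bones sketch on a toy problem of my own choosing (not the strategy-evolution setup described above): selection, crossover, and mutation acting on bit strings scored against a fixed target.

```python
import random

# Minimal genetic algorithm on a toy problem: evolve bit strings toward a target.
random.seed(0)
TARGET = [1] * 40
fitness = lambda genome: sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=100, generations=60, mutation_rate=0.01):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]            # selection: keep the fittest 20%
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TARGET))          # crossover: splice two parents
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                     # mutation: rare random flips
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print(f"best fitness after 60 generations: {fitness(best)} / {len(TARGET)}")
```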
An irreducibly complex system, in Behe’s definition, is one whose functioning involves a number of interacting parts, with the property that every one of the parts is necessary for the system to function. The idea is that certain systems are made of parts that are so intimately interconnected that they can’t arise gradually; they must have come together all at once. That’s not something we would expect from evolution. The problem is that the property of irreducible complexity isn’t readily measurable. To illustrate the concept, Behe mentions an ordinary mousetrap, with a spring mechanism and a release lever and so forth. Remove any one of the parts, he argues, and the mousetrap is useless; it must have been designed, rather than incrementally put together through small changes that were individually beneficial.
Irreducible complexity reflects a deep concern that many people have about evolution: the particular organisms we find in our biosphere are just too designed-looking to possibly have arisen through “random chance plus selection.”
So what about option 4, which avoids any particular evolutionary storytelling? It’s a true statement, but not a useful one in this context. From the poetic-naturalism perspective, natural selection provides a successful way of talking about emergent properties of the biological world. We don’t need to use a vocabulary of evolution and adaptation to correctly describe what happens, but doing so gives us important and useful knowledge.
Perhaps the most popular way of attempting to reconcile evolution with divine intervention is to take advantage of the probabilistic nature of quantum mechanics. A classical world, so the reasoning goes, would be perfectly deterministic from start to finish, and there would be no way for God to influence the evolution of life without straightforwardly violating the laws of physics. But quantum mechanics only predicts probabilities. In this view, God can simply choose certain quantum-mechanical outcomes to become real, without actually violating physical law; he is merely bringing physical reality into line with one of the many possibilities inherent in quantum dynamics. Along similar lines, Plantinga has suggested that quantum mechanics can help explain a number of cases of divine action, from miraculous healing to turning water into wine and parting the Red Sea. True, all of these seemingly miraculous occurrences would be allowed under the rules of quantum mechanics; they would simply be very unlikely. Very, extremely, outrageously unlikely. If we populated every planet circling every star in the universe with scientists, and let them do experiments continuously for many times the current age of the observable universe, it would be extraordinarily improbable that even one of them would witness a single drop of water changing into wine. But it’s possible.
But of course it can possibly happen, if God exists; God can do whatever he wants, no matter what the laws of physics may be. What theistic evolutionists are actually doing is using quantum indeterminacy as a fig leaf: it’s not that God is allowed to act in the world, it’s that they are allowed to imagine him acting in a way such that nobody would notice, leaving no fingerprints. It is unclear why God would place such a high value on acting in ways that human beings can’t notice. This approach reduces theism to the case of the angel steering the moon, which we considered in chapter 10. You can’t disprove the theory by any possible experiment, since it is designed precisely to be indistinguishable from ordinary physical evolution. But it doesn’t gain you anything either. It makes the most sense to place our credence in the idea that the divine influences simply aren’t there.
Fred Hoyle, the astronomical gadfly who liked to cast doubt on the Big Bang and the origin of life, wrote a science-fiction novel called The Black Cloud, in which the Earth is menaced by an immense, living, intelligent cloud of interstellar gas. Robert Forward, another scientist with a science-fictional bent, wrote Dragon’s Egg, about microscopic life-forms that live on the surface of a neutron star. Perhaps a trillion trillion years from now, long after the last star has winked out, the dark galaxy will be populated by diaphanous beings floating in the low-intensity light given off by radiating black holes, with the analogue of heartbeats that last a million years. Any one possibility seems remote, but we know of a number of physical systems that naturally develop complex behavior as entropy increases over time; it’s not at all hard to imagine that life could develop in unexpected places.
Almost 400 million years ago, a plucky little fish climbed onto land and decided to hang out rather than returning to the sea. Its descendants evolved into the species Tiktaalik roseae, fossils of which were first discovered in 2004 in the Canadian Arctic. If you were ever looking for a missing link between two major evolutionary stages, Tiktaalik is it; these adorable creatures represent a transitional form between water-based and land-based animal life.
If you’re a fish, you move through the water at a meter or two per second, and you see some tens of meters in front of you. Every few seconds you are entering a new perceptual environment. As something new looms into your view, you have only a very brief amount of time in which to evaluate how to react to it. Is it friendly, fearsome, or foodlike? Under those conditions, there is enormous evolutionary pressure to think fast. See something, respond almost immediately. A fish brain is going to be optimized to do just that. Quick reaction, not leisurely contemplation, is the name of the game.
Our ability to imagine the future is incredibly detailed and rich, but it’s not hard to imagine how it might have evolved gradually over the span of many generations.
The most important thing about life is that it occurs out of equilibrium, driven by the second law. To stay alive, we have to continually move, process information, and interact with our environment.
Biologists Robert Sapolsky and Lisa Share studied a group of Kenyan baboons who fed off the garbage from a nearby tourist lodge. The clan was dominated by high-status males, and females and lesser males would often go hungry. Then at one point, the clan ate infected meat from the garbage dump, which led to the deaths of most of the dominant males. Afterward, the “personality” of the troop completely changed: individuals were less aggressive, more likely to groom one another, and more egalitarian. This behavior persisted as long as the study continued, for over a decade.
Consciousness and the Mind
And even people who agree that there is only one kind of thing, and that the world is purely physical, might diverge when it comes to asking which aspects of that world are “real” versus “illusory.” (Are colors real? Is consciousness? Is morality?)
Are we sure that a unified physical reality could naturally give rise to life as we know it? Are we sure it is sufficient to describe consciousness, perhaps the most perplexing aspect of our manifest world?
We shouldn’t overestimate people’s rationality or willingness to look at new evidence as objectively as possible. For better or for worse, planets eventually develop highly sophisticated defense mechanisms. When you realize that you are holding two beliefs that are in conflict with each other, psychologists refer to the resulting discomfort as cognitive dissonance. It’s a sign that there is something not completely structurally sound about your planet of belief. Unfortunately, human beings are extremely good at maintaining the basic makeup of their planets, even under very extreme circumstances.
This theory was originally developed not for individual cells but as a way of thinking about how brains interact with the outside world. Our brains construct models of their surroundings, with the goal of not being surprised very often by new information. That process is precisely Bayesian reasoning—subconsciously, the brain carries with it a set of possible things that could happen next, and updates the likelihood of each of them as new data comes in. It is interesting that the same mathematical framework might apply to systems on the level of individual cells. Keeping the cell membrane intact and robust turns out to be a kind of Bayesian reasoning. As Friston puts it: The internal states (and their blanket) will appear to engage in active Bayesian inference. In other words, they will appear to model—and act on—their world to preserve their functional and structural integrity, leading to homeostasis [preserving stable internal conditions] and a simple form of autopoiesis [maintaining structure through self-regulation].
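Stripped of Friston's free-energy formalism, the updating being described is ordinary Bayes' rule. A minimal sketch, with hypotheses and numbers invented purely for illustration:

```python
# Reweight a set of hypotheses about what happens next as a new observation
# arrives (invented toy numbers, not anything from the book or from Friston).
def bayes_update(priors, likelihoods, observation):
    """priors: P(hypothesis); likelihoods: P(observation | hypothesis)."""
    unnormalized = {h: priors[h] * likelihoods[h][observation] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

priors = {"door will open": 0.5, "door will stay shut": 0.5}
likelihoods = {
    "door will open":      {"hear footsteps": 0.8, "hear nothing": 0.2},
    "door will stay shut": {"hear footsteps": 0.1, "hear nothing": 0.9},
}
print(bayes_update(priors, likelihoods, "hear footsteps"))
# {'door will open': 0.888..., 'door will stay shut': 0.111...}
```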
Any theist worth their salt could, admittedly, come up with a number of reasons why God would choose to associate immaterial souls with complex self-sustaining chemical reactions, at least for a time. Likewise, if we lived in a universe where life was not associated with matter in such a way, it wouldn’t be hard to come up with justifications for that. This is the problem with theories that are not well defined.
If there is any one aspect of reality that causes people to doubt a purely physical and naturalist conception of the world, it’s the existence of consciousness. And it can be hard to persuade the skeptics, since even the most optimistic neuroscientist doesn’t claim to have a complete and comprehensive theory of consciousness. Rather, what we have is an expectation that when we do achieve such an understanding, it will be one that is completely compatible with the basic tenets of the Core Theory—part of physical reality, not apart from it.
We could bring the Inside Out model closer to reality with two modifications. First, the various “modules” that contribute to our thought processes don’t map directly onto emotions. (Neither do they have charming personalities or colorful anthropomorphic bodies.) They are unconscious processes of various sorts—the kind of mental functions that could have naturally arisen over the course of biological evolution, well before the explicit development of consciousness. Second, while there is no dictator in the mind, there does seem to be a kind of prime minister of the parliament, a seat of cognition where the inputs from many modules are sewn together into a continuum of consciousness.
Daniel Kahneman, a psychologist who won the Nobel Prize in Economics for his work on decision making, has popularized dividing how we think into two modes of thought, dubbed System 1 and System 2. (The terms were originally introduced by Keith Stanovich and Richard West.) System 1 includes all the various modules churning away below the surface of our conscious awareness. It is automatic, “fast,” intuitive thinking, driven by unconscious reactions and heuristics—rough-and-ready strategies shaped by prior experience. When you manage to make your coffee in the morning or drive from home to work without really paying attention to what you are doing, it’s System 1 that is in charge. System 2 is our conscious, “slow,” rational mode of thinking. It demands attention; when you’re concentrating on a hard math problem, that’s System 2’s job. As we go through the day, the vast majority of work being done in our brain belongs to System 1, despite our natural tendency to give credit to our self-aware System 2. Kahneman compares System 2 to “a supporting character who believes herself to be the lead actor and often has little idea of what’s going on.”
There’s a lot going on beneath the deceptively simple idea of “making plans.” We have to have the ability to conceive of times in the future, not merely the present moment. We need to be able to represent the actions of both ourselves and the rest of the world in our mental pictures. We must reliably predict future actions and their likely responses. Finally, we must be able to do this for multiple scenarios simultaneously, and eventually compare and choose between them. The ability to plan ahead seems so basic that we take it for granted, but it’s quite a marvelous capacity of the human mind.
The “now” of your conscious perception is not the same as the current moment in which you are living. Though we sometimes think of consciousness as a unified essence guiding our thoughts and behavior, in fact it is stitched together out of inputs from different parts of the brain as well as our sensory perceptions. That stitching takes time. If you use one hand to touch your nose, and the other to touch one of your feet, you experience them as simultaneous, even though it takes longer for the nerve impulses to travel to your brain from your feet than from your nose. Your brain waits until all of the relevant inputs have been assembled, and only then presents them to you as your conscious perceptions. Typically, what you experience as “now” corresponds to what was actually happening some tens or hundreds of milliseconds in the past.
Recent work in neuroscience has lent credence to this idea. Researchers have been able to use functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans to pinpoint regions in the brain that are active while subjects are conducting various mental tasks. Interestingly, the tasks of “remember yourself in a particular situation in the past” and “imagine yourself in a particular hypothetical situation in the future” are seen to engage a very similar set of subsystems in the brain. Episodic memory and imagination engage the same neural machinery. Memories of past experiences, it turns out, are not like a video or film recording of an event, with individual sounds and images stored for each moment. What’s stored is more like a script. When we remember a past event, the brain pulls out the script and puts on a little performance of the sights and sounds and smells. Part of the brain stores the script, while others are responsible for the stage settings and props. This helps explain why memories can be completely false, yet utterly vivid and real-seeming to us—the brain can put on a convincing show from an incorrect script just as well as an accurate one. It also helps explain how our chronesthetic ability to imagine future events might have developed through natural selection.
While a capacity for mental time travel is important for some aspects of consciousness, it certainly isn’t the whole story. Kent Cochrane was an amnesiac, famous in the psychology literature as the patient “K. C.” When he was thirty years old, K. C. suffered a serious motorcycle accident. He survived, but during surgery he lost parts of his brain, including the hippocampus, and his medial temporal lobes were severely damaged. Afterward, he retained his semantic memory but completely lost his episodic memory. His ability to form new memories was almost completely absent, much like the character of Leonard Shelby in the movie Memento. K. C. knew that he owned a particular car, but had no recollection of ever driving in it. His basic mental capacities were intact, and he had no trouble carrying on a conversation. He just couldn’t remember anything he had ever seen or done. There’s little question that K. C. was “conscious” in some sense. He was awake, aware, and knew who he was. But consistent with the connection between memory and imagination, K. C. was completely unable to contemplate his own future. When asked about what would happen tomorrow or even later that day, he would simply report that it was blank. His personality underwent a significant change after the accident. He had, in some sense, become a different person.
There is some evidence that episodic memory doesn’t develop in children until they are about four years old, around the time they also seem to develop the capacity for modeling the mental states of other people. At younger ages, for example, children can learn new things, but they have trouble associating new knowledge with any particular event; when quizzed about something they just learned, they will claim that they have always known it.
Four hundred million years is a long time. The evolution of consciousness as we now know it took many steps. Chimpanzees can think and execute a plan, such as building a structure in order to get to a banana that is out of reach. That’s a kind of imaginative thought, though certainly not the whole story.
Impressively (or disturbingly, depending on your perspective), they have also been able to remove memories from mice by weakening specific synapses, and even artificially implanting false memories by directly stimulating individual nerve cells with electrodes. Memories are physical things, located in your brain.
Results like this are of much more than academic interest: doctors have long sought a way of telling whether a patient under anesthesia or suffering from brain damage was truly unconscious, or merely unable to move and communicate with the outside world.
As neurophysiologist Dante Chialvo put it, “A brain that is not critical is a brain that does exactly the same thing every minute, or, in the other extreme, is so chaotic that it does a completely random thing, no matter what the circumstances. That is the brain of an idiot.”
Consider what’s known as the Capgras delusion. Patients suffering from this syndrome have damage to the part of the brain that connects two other parts: the temporal cortex, associated with recognizing other people, and the limbic system, which is in charge of feelings and emotions. A person who develops Capgras delusion will be able to recognize people they know, but will no longer feel whatever emotional connection they used to have with them. (It is the flip side of prosopagnosia, which involves a loss of the ability to recognize people.)
There is structure in a computer architecture as well, both hardware and software, but it seems unlikely that the kind of structure a computer has would hit upon self-awareness essentially by accident. And what if it did? How would we know that a computer was actually “thinking,” as opposed to mindlessly pushing numbers around? (Is there a difference?)
The argument from consciousness seemed, to Turing, to ultimately be solipsistic: you could never know that anyone was conscious unless you actually were that person. How do you know that everyone else in the world is actually conscious at all, other than by how they behave? Turing was anticipating the idea of a philosophical zombie—someone who looks and acts just like a regular person but has no inner experience, or qualia.
Someone might think: “I know that I’m conscious, and other people are basically like me, so they’re probably conscious as well. Computers, however, are not like me, so I can be more skeptical.” I don’t think this is the right attitude, but it’s a logically consistent one. The question then becomes, are computers really so different? Is the kind of thinking done in my brain really qualitatively distinct from what happens inside a computer? Heinlein’s protagonist didn’t think so: “Can’t see it matters whether paths are protein or platinum.”
Imagine that we take one neuron in your brain, and study what it does until we have it absolutely figured out. We know precisely what signals it will send out in response to any conceivable signals that might be coming in. Then, without making any other changes to you, we remove that neuron and replace it with an artificial machine that behaves in precisely the same way, as far as inputs and outputs are concerned. A “neuristor,” as in Heinlein’s self-aware computer, Mike. But unlike Mike, you are almost entirely made of your ordinary biological cells, except for this one replacement neuristor. Are you still conscious? Most people would answer yes, a person with one neuron replaced by an equivalently behaving neuristor is still conscious. So what if we replace two neurons? Or a few hundred million? By hypothesis, all of your external actions will be unaltered—at least, if the world is wholly physical and your brain isn’t affected by interactions with any immaterial soul substance that
A form of multiple realizability must be true at some level. Like the Ship of Theseus, most of the individual atoms and many of the cells in any human body are replaced by equivalent copies each year. Not every one—the atoms in your tooth enamel are thought to be essentially permanent, for example. But who “you” are is defined by the pattern that your atoms form and the actions that they collectively take, not their specific identities as individual particles. It seems reasonable that consciousness would have the same property. And if we are creating a definition of consciousness, surely “how the system behaves over time” has to play a crucial role. If any element of consciousness is absolutely necessary, it should be the ability to have thoughts. That unmistakably involves evolution through time. The presence of consciousness also implies something about apprehending the outside world and interacting with it appropriately. A system that simply sits still, maintaining the same configuration at every moment of time, cannot be thought of as conscious, no matter how complex it may be or whatever it may represent.
A complete video and audio recording of the life of a human being wouldn’t be “conscious,” even if it precisely captured everything that person had done to date, because the recording wouldn’t be able to extrapolate that behavior into the future. We couldn’t ask it questions or interact with it. Many of the computer programs that have attempted to pass cut-rate versions of the Turing test have been souped-up chat bots—simple systems that can spit out preprogrammed sentences to a variety of possible questions. It is easy to fool them, not only because they don’t have the kind of detailed contextual knowledge of the outside world that any normal person would have, but because they don’t have memories even of the conversation they have been having, much less ways to integrate such memories into the rest of the discussion. In order to do so, they would have to have inner mental states that depended on their entire histories in an integrated way, as well as the ability to conjure up hypothetical future situations, all along distinguishing the past from the future, themselves from their environment, and reality from imagination. As Turing suggested, a program that was really good enough to convincingly sustain human-level interactions would have to be actually thinking.
Consider the color red. It is a useful concept, one that can apparently be recognized universally and objectively, at least by sighted people who are not prevented from seeing red by color blindness. The operational instruction “stop when the light is red” can be understood without ambiguity. But there is the famous lurking question: do you and I see the same thing when we see something red? That’s the question of phenomenal consciousness—what is it like to experience redness? The word qualia (plural of “quale,” which is pronounced KWAH-lay) is sometimes used to denote the subjective experience of the way something seems to us. “Red” is a color, a physically objective wavelength of light or appropriate combination thereof; but “the experience of the redness of red” is one of the qualia we would like to account for in a complete understanding of consciousness.
Australian philosopher David Chalmers has famously emphasized the difference between what he calls the Easy Problems and the Hard Problem of consciousness. The Easy Problems are manifold—explaining the difference between being awake and asleep, how we sense and store and integrate information, how we can recall the past and predict the future. The Hard Problem is explaining qualia, the subjective character of experience. It can be thought of as those aspects of consciousness that are irreducibly first-person; what we personally feel, not how we act and respond as seen by the rest of the world. The Easy Problems are about functioning; the Hard Problem is about experiencing.
Mary is a brilliant scientist who has been brought up under certain bizarre circumstances. She lives in a room that she has never left, and that room is completely devoid of color. Everything in the room is black, white, or some shade of gray. Her own skin is painted white, and all of her clothes are black. Curiously, given her environment, Mary grows up to become a specialist in the science of color. She has access to all of the equipment she would want, as well as to the entirety of the scientific literature on the subject of color. All of the color illustrations have been reduced to grayscale. Eventually, Mary knows everything there is to know about color, from a physical point of view. She knows about the physics of light, and about the neuroscience of how the eye transmits signals to the brain. She’s read up on art history, color theory, and the agricultural expertise involved in growing a perfect red tomato. She’s just never seen the color red. Jackson asks, what happens when Mary decides to leave her room and actually sees colors for the first time? In particular, does she learn anything new? He claims she does: “What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.”
As far as you can tell by talking to them, all of your friends and loved ones are secretly zombies. And they can’t be sure you’re not a zombie. Perhaps they have suspicions.
As long as zombies are conceivable or logically possible, Chalmers argues, then we know that consciousness is not purely physical, regardless of whether zombies could exist in our world. Because then we would know that consciousness can’t simply be attributed to what matter is doing: the same behavior of matter could happen with or without conscious experience.
The suggestion that consciousness pervades the universe, and is a part of every piece of matter, goes by the name of panpsychism. It’s an old idea, going back arguably as far as Thales and Plato in ancient Greece, as well as in certain Buddhist traditions.
A good Bayesian can therefore conclude that the zombie-photon world is the one we actually live in. We simply don’t gain anything by attributing the features of consciousness to individual particles. Doing so is not a useful way of talking about the world; it buys us no new insight or predictive power. All it does is add a layer of metaphysical complication onto a description that is already perfectly successful.
It Pays to Listen.
There Is No Natural Way to Be.
Free Will and Human Agency
Our need to justify our own beliefs can end up having a dramatic influence on what those beliefs actually are. Social psychologists Carol Tavris and Elliot Aronson talk about the “Pyramid of Choice.” Imagine two people with nearly identical beliefs, each confronted with a decision to make. One chooses one way, and the other goes in the other direction, though initially it was a close call either way. Afterward, inevitably, they work to convince themselves that the choice they made was the right one. They each justify what they did, and begin to think there wasn’t much of a choice at all. By the end of the process, these two people who started out almost the same have ended up on opposite ends of a particular spectrum of belief—and often defending their position with exceptionally fervent devotion. “It’s the people who almost decide to live in glass houses who throw the first stones,” as Tavris and Aronson put it.
Once we see how mental states can exert physical effects, it’s irresistible to ask, “Who is in charge of those mental states?” Am I, my emergent self, actually making choices? Or am I simply a puppet, pulled and pushed as my atoms jostle amongst themselves according to the laws of physics? Do I, at the end of the day, have free will? There’s a sense in which you do have free will. There’s also a sense in which you don’t. Which sense is the “right” one is an issue you’re welcome to decide for yourself (if you think you have the ability to make decisions).
If information is conserved through time, the entire future of the universe is already written, even if we don’t know it yet. Quantum mechanics predicts our future in terms of probabilities rather than certainties, but those probabilities themselves are absolutely fixed by the state of the universe right now. A quantum version of Laplace’s Demon could say with confidence what the probability of every future history will be, and no amount of human volition would be able to change it. There is no room for human choice, so there is no such thing as free will. We are just material objects who obey the laws of nature.
One popular definition of free will is “the ability to have acted differently.” In a world governed by impersonal laws, one can argue that there is no such ability. Given the quantum state of the elementary particles that make up me and my environment, the future is governed by the laws of physics. But in the real world, we are not given that quantum state. We have incomplete information; we know about the rough configuration of our bodies and we have some idea of our mental states. Given only that incomplete information—the information we actually have—it’s completely conceivable that we could have acted differently. This is the point at which free-will doubters will object that the stance we’ve defended here isn’t really free will at all. All we’ve done is redefine the notion to mean something completely different, presumably because we are too cowardly to face up to the desolate reality of an impersonal cosmos. I have no problem with the desolate reality of an impersonal cosmos. But it’s important to explore the most accurate and useful ways of talking about the world, on all relevant levels.
In a famous experiment in the 1980s, physiologist Benjamin Libet measured brain activity in subjects as they decided to move their hands. The volunteers were also observing a clock, and could report precisely when they made their decisions. Libet’s results seemed to indicate that there was a telltale pulse of brain activity before the subjects became consciously aware of their decision. To put it dramatically: part of their brain had seemingly made the decision before the people themselves became aware of it.
Where the issue becomes more than merely academic is when we confront the notions of blame and responsibility. Much of our legal system, and much of the way we navigate the waters of our social environment, hinges on the idea that individuals are largely responsible for their actions. At extreme levels of free-will denial, the idea of “responsibility” is as problematic as that of human choice. How can we assign credit or blame if people don’t choose their own actions? And if we can’t do that, what is the role of punishment or reward?
What matters here is not the extent to which this particular patient actually lost control over his choices, but the fact that such loss is possible. What that does to our notions of personal responsibility is a pressing real-world question, not an academic abstraction.
To the extent that neuroscience becomes better and better at predicting what we will do without reference to our personal volition, it will be less and less appropriate to treat people as freely acting agents. Predestination will become part of our real world.
The source of these values isn’t the outside world; it’s inside us. We’re part of the world, but we’ve seen that the best way to talk about ourselves is as thinking, purposeful agents who can make choices. One of those choices, unavoidably, is what kind of life we want to live.
Our ability to think has given us enormous leverage over the world around us. We won’t be able to stave off the heat death of the universe, but we can alter bodies, transform our planet, and someday spread life through the galaxy. It’s up to us to make wise choices and shape the world to be a better place.
Knowledge and Understanding
Laplace’s Demon
The poet Muriel Rukeyser once wrote, “The universe is made of stories, not of atoms.” The world is what exists and what happens, but we gain enormous insight by talking about it—telling its story—in different ways.
The poetic aspect comes to the fore when we start talking about that world. It can also be summarized in three points:
1. There are many ways of talking about the world.
2. All good ways of talking must be consistent with one another and with the world.
3. Our purposes in the moment determine the best way of talking.
Principle of Sufficient Reason: For any true fact, there is a reason why it is so, and why something else is not so instead.
Abduction is a type of reasoning that can be contrasted with deduction and induction. With deduction, we start with some axioms whose truth we do not question, and derive rigorously necessary conclusions from them. With induction, we start with some examples we know about, and generalize to a wider context—rigorously, if we have some reason for believing that such a generalization is always correct, but often we don’t quite have that guarantee. With abduction, by contrast, we take all of our background knowledge about how the world works, and perhaps some preference for simple explanations over complex ones (Occam’s razor), and decide what possible explanation provides the best account of all the facts we have.
The laws themselves, as we’ve discussed, make no reference to “reasons” or “causes.” They are simply patterns that connect what happens at different places and times. Nevertheless, the concept of a “reason why” something is true is a very useful one in our daily lives.
Philosophers refer to this as modal reasoning—thinking not only about what does happen but about what could happen in possible worlds.
The question being addressed by Bayes and his subsequent followers is simple to state, yet forbidding in its scope: How well do we know what we think we know? If we want to tackle big-picture questions about the ultimate nature of reality and our place within it, it will be helpful to think about the best way of moving toward reliability in our understanding.
Among the small but passionate community of probability-theory aficionados, fierce debates rage over What Probability Really Is. In one camp are the frequentists, who think that “probability” is just shorthand for “how frequently something would happen in an infinite number of trials.” If you say that a flipped coin has a 50 percent chance of coming up heads, a frequentist will explain that what you really mean is that an infinite number of coin flips will give equal numbers of heads and tails. In another camp are the Bayesians, for whom probabilities are simply expressions of your states of belief in cases of ignorance or uncertainty. For a Bayesian, saying there is a 50 percent chance of the coin coming up heads is merely to state that you have zero reason to favor one outcome over another. If you were offered a bet on the outcome of the coin flip, you would be indifferent to choosing heads or tails. The Bayesian will then helpfully explain that this is the only thing you could possibly mean by such a statement, since we never observe infinite numbers of trials, and we often speak about probabilities for things that happen only once, like elections or sporting events. The frequentist would then object that the Bayesian is introducing an unnecessary element of subjectivity and personal ignorance into what should be an objective conversation about how the world behaves, and they would be off.
Often—in fact all the time, if we’re being careful—we don’t hold our beliefs with 100 percent conviction. I believe the sun will rise in the east tomorrow, but I’m not absolutely certain of it. The Earth could be hit by a speeding black hole and completely destroyed. What we actually have are degrees of belief, which professional statisticians refer to as credences. If you think there’s a 1 in 4 chance it will rain tomorrow, your credence that it will rain is 25 percent. Every single belief we have has some credence attached to it, even if we don’t articulate it explicitly. Sometimes credences are just like probabilities, as when we say we have a credence of 50 percent that a fair coin will end up heads. Other times they simply reflect a lack of complete knowledge on our part. If a friend tells you that they really tried to call on your birthday but they were stuck somewhere with no phone service, there’s really no probability involved; it’s true or it isn’t. But you don’t know which is the case, so the best you can do is assign some credence to each possibility.
Say you’re playing poker with a friend. The game is five-card draw, so you each start with five cards, then choose to discard and replace a certain number of them. You can’t see their cards, so to begin, you have no idea what they have, other than knowing they don’t have any of the specific cards in your own hand. You’re not completely ignorant, however; you have some idea that some hands are more likely than others. A starting hand of one pair, or no pairs at all, is relatively likely; getting dealt a flush (five cards of the same suit) right off the bat is quite rare. Running the numbers, a random five-card hand will be “nothing” about 50 percent of the time, one pair about 42 percent of the time, and a flush less than 0.2 percent of the time, not to mention the other possibilities. These starting chances are known as your prior credences. They are the credences you have in mind to start, prior to learning anything new. But then something happens: your friend discards a certain number of cards, and draws an equal number of replacements. That’s new information, and you can use it to update your credences. Let’s say they choose to draw just one card. What does that tell us about their hand? It’s unlikely that they have just one pair; if they did, they would probably have drawn three new cards, keeping the pair and hoping to improve. Drawing a single card suggests something more like two pair, or four cards to a flush or a straight.
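As a rough check on those numbers, here is a minimal Monte Carlo sketch in Python (my illustration, not anything from the book) that deals random five-card hands and tallies how often the coarse categories mentioned above turn up. The classification is deliberately crude; straights, for instance, get lumped in with "nothing," so the figures are approximate.

```python
import random
from collections import Counter

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [rank + suit for rank in RANKS for suit in SUITS]

def classify(hand):
    """Coarse classification into the categories quoted above.
    Straights are counted as "nothing" for simplicity, which inflates
    that figure by a fraction of a percent."""
    rank_counts = sorted(Counter(card[0] for card in hand).values(), reverse=True)
    if len({card[1] for card in hand}) == 1:
        return "flush"
    if rank_counts == [2, 1, 1, 1]:
        return "one pair"
    if rank_counts == [1, 1, 1, 1, 1]:
        return "nothing"
    return "other"  # two pair, three of a kind, full house, and so on

def estimate_priors(trials=200_000):
    """Deal random five-card hands and tally the category frequencies."""
    counts = Counter(classify(random.sample(DECK, 5)) for _ in range(trials))
    return {category: n / trials for category, n in counts.items()}

if __name__ == "__main__":
    for category, p in sorted(estimate_priors().items(), key=lambda kv: -kv[1]):
        print(f"{category:>8}: {p:.3%}")
```

Running it typically prints something close to 50 percent for "nothing," 42 percent for one pair, and about 0.2 percent for a flush, matching the prior credences quoted above.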
Prior credences are a starting point for further analysis, and it’s hard to say that any particular priors are “correct” or “incorrect.” There are, needless to say, some useful rules of thumb. Perhaps the most obvious is that simple theories should be given larger priors than complicated ones. That doesn’t mean that simpler theories are always correct; but if a simple theory is wrong, we will learn that by collecting data. As Albert Einstein put it: “The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”
Any of the various skeptical scenarios about the existence of external reality, and our knowledge thereof, might very well be true. But at the same time, that doesn’t mean we should attach high credence to them. The problem is that it is never useful to believe them. That’s what Wittgenstein means by “making sense.”
Radical skepticism is less useful to us; it gives us no way to go through life. All of our purported knowledge, and all of our goals and aspirations, might very well be tricks being played on us. But what then? We cannot actually act on such a belief, since any act we might think is reasonable would have been suggested to us by that annoying demon. Whereas, if we take the world roughly at face value, we have a way of moving forward. There are things we want to do, questions we want to answer, and strategies for making them happen. We have every right to give high credence to views of the world that are productive and fruitful, in preference to those that would leave us paralyzed with ennui.
What rescues our beliefs from being completely arbitrary is that one of the beliefs in a typical planet of belief is something like “true statements correspond to actual elements of the real world.” If we believe that, and have some reliable data, and are sufficiently honest with ourselves, we can hope to construct belief systems that not only are coherent but also agree with those of other people and with external reality. At the very least, we can hold that up as a goal.
It’s worth highlighting two important cognitive biases that we can look to avoid as we put together our own planets. One is our tendency to give higher credences to propositions that we want to be true. This can show up at a very personal level, as what’s known as self-serving bias: when something good happens, we think it’s because we are talented and deserving, while bad things are attributed to unfortunate luck or uncontrollable external circumstances. At a broader level, we naturally gravitate toward theories of the world that somehow flatter ourselves, make us feel important, or provide us with comfort.
The other bias is our preference for preserving our planet of belief, rather than changing it around. This can also show up in many ways. Confirmation bias is our tendency to latch on to and highlight any information that confirms beliefs we already have, while disregarding evidence that may throw our beliefs into question. This tendency is so strong that it leads to the backfire effect—show someone evidence that contradicts what they believe, and studies show that they will usually come away holding their initial belief even more strongly. We cherish our beliefs, and work hard to protect them against outside threats.
We’re faced with the problem that the beliefs we choose to adopt are shaped as much, if not more, by the beliefs we already have than by correspondence with external reality. How can we guard ourselves against self-reinforcing irrationality? There is no perfect remedy, but there is a strategy. Knowing that cognitive biases exist, we can take that fact into account when doing our Bayesian inference. Do you want something to be true? That should count against it in your assignment of credences, not for it.
Consider the set of all the prime numbers: {2, 3, 5, 7, 11, 13 . . . }. Suppose that there is a largest prime, p. Then there are only a finite number of primes. Now consider the number X that we obtain by multiplying together all of the primes from our list, exactly once each, and adding 1 to the result. Then X is clearly larger than any of the primes in our list. But it is not divisible by any of them, since dividing by any of them yields a remainder 1. Therefore either X itself must be prime, or it must be divisible by a prime number larger than any in our list. In either case there must be a prime larger than p, which is a contradiction. Therefore there is no largest prime.
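The construction in that argument is concrete enough to run. The following Python sketch (an illustration, not something from the book) multiplies a finite list of primes, adds 1, and then factors the result; whatever prime factor it finds cannot have been on the original list, since dividing by any listed prime leaves a remainder of 1.

```python
def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, found by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime_from(primes):
    """Euclid's construction: multiply the listed primes, add 1, and factor.
    The prime factor found cannot be on the list, because dividing X by any
    listed prime leaves a remainder of 1."""
    x = 1
    for p in primes:
        x *= p
    x += 1
    return smallest_prime_factor(x)

# Starting from {2, 3, 5, 7, 11, 13}: X = 30031 = 59 * 509, so we get 59,
# a prime that was not on the original list.
print(new_prime_from([2, 3, 5, 7, 11, 13]))  # -> 59
```

Note that X itself need not be prime (30,031 is not); the point is only that any prime factor of X is new, which is exactly the dichotomy the proof relies on.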
The truths of math and logic would be true in any possible world; the things science teaches us are true about our world, but could have been false in some other one. Most of the interesting things it is possible to know are not things we could ever hope to “prove,” in the strong sense.
We should always imagine that there is some nonzero likelihood for absolutely any observation in absolutely any theory.
The resolution is to admit that some credences are so small that they’re not worth taking seriously. It makes sense to act as if we know those possibilities to be false. So we take “I believe x” not to mean “I can prove x is the case,” but rather “I feel it would be counterproductive to spend any substantial amount of time and effort doubting x.” We can accumulate so much evidence in favor of a theory that maintaining skepticism about it goes from being “prudent caution” to being “crackpottery.” We should always be open to changing our beliefs in the face of new evidence, but the evidence required might need to be so overwhelmingly strong that it’s not worth the effort to seek it out.
While math is lumped together with science in many school curricula—and while they certainly enjoy a close and mutually beneficial relationship—at heart they are completely different endeavors. Math is all about proving things, but the things that math proves are not true facts about the actual world. They are the implications of various assumptions. A mathematical demonstration shows that given a particular set of assumptions (such as the axioms of Euclidean geometry or of number theory), certain statements inevitably follow (such as the angles inside a triangle adding up to 180 degrees, or there being no largest prime number). In this sense, logic and mathematics can be thought of as different aspects of the same underlying strategy. In logic, as in math, we start with axioms and derive results that inevitably follow from them. Though we casually speak of “logic” as a single set of results, it is actually a procedure for inferring conclusions from axioms. There are different possible sets of axioms from which one can draw logical conclusions, just as there are different sets of axioms one could use in geometry or number theory.
The statements we can prove based on explicitly stated axioms are known as theorems. But “theorem” doesn’t imply “something that is true”; it only means “something that definitely follows from the stated axioms.” For the conclusion of the theorem to be “true,” we would also require that the axioms themselves be true. That’s not always the case; Euclidean geometry is a marvelous edifice of mathematical results, and certainly useful in many real-world situations, but Einstein helped us see that the actual geometry of the world obeys a more general set of axioms, invented by Bernhard Riemann in the nineteenth century.
We can think of the difference between math and science in terms of possible worlds. Math is concerned with truths that would hold in any possible world: given these axioms, these theorems will follow. Science is all about discovering the actual world in which we live.
A related route to rationalism is based on the belief that the world has an underlying sensible or logical order, and from this order we can discern a priori principles that simply have to be true, without any need to check up on them by collecting data. Examples might include “for every effect there is a cause,” or “nothing comes from nothing.” One motivation for this view is our ability to abstract from individual things we see in the world to universal regularities that are obeyed more widely. If we were thinking deductively, like a mathematician or logician, we would say that no collection of particular facts suffices to derive a general principle, since the very next fact might contradict the principle. And yet we seem to do that all the time. This has prompted people like Gottfried Wilhelm Leibniz to suggest that we must secretly be relying on a kind of built-in intuition about how things work.
It’s not because physics is so hard—it’s because we understand so much about it that it seems that way.
One form would be racial segregation within cities, but the basic idea would work for a variety of differences, from linguistic communities to boys and girls choosing seats in an elementary school classroom. Schelling asked us to imagine a square grid with two different kinds of symbols, X’s and O’s, as well as a few empty spaces. Suppose that the X’s and O’s aren’t completely intolerant of each other, but they get a little uncomfortable if they feel surrounded by symbols of the opposite type. If a symbol is unhappy—if an X has too many O neighbors, for example—it will get up and move to a randomly selected empty space. That happens over and over again, until everybody is happy.
[Figure: spontaneous segregation in the Schelling model. Initial condition on the left, final condition on the right.]
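A minimal implementation of the dynamics just described might look like the sketch below. The grid size, the fraction of empty cells, and the intolerance threshold are all assumptions chosen for illustration; the text does not specify them.

```python
import random

SIZE = 20             # the grid is SIZE x SIZE (the text just says "a square grid")
EMPTY_FRACTION = 0.1  # fraction of empty cells (an assumption)
INTOLERANCE = 0.34    # unhappy if more than about a third of neighbors differ (an assumption)

def make_grid():
    """Random initial grid of 'X', 'O', and None (empty)."""
    weights = [(1 - EMPTY_FRACTION) / 2, (1 - EMPTY_FRACTION) / 2, EMPTY_FRACTION]
    cells = random.choices(["X", "O", None], weights=weights, k=SIZE * SIZE)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, i, j):
    """An occupied cell is unhappy if too many of its occupied neighbors differ from it."""
    me = grid[i][j]
    if me is None:
        return False
    neighbors = [grid[x][y]
                 for x in range(max(0, i - 1), min(SIZE, i + 2))
                 for y in range(max(0, j - 1), min(SIZE, j + 2))
                 if (x, y) != (i, j) and grid[x][y] is not None]
    return bool(neighbors) and sum(n != me for n in neighbors) / len(neighbors) > INTOLERANCE

def step(grid):
    """Move every unhappy symbol to a randomly chosen empty space; report whether anyone moved."""
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE) if unhappy(grid, i, j)]
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] is None]
    random.shuffle(movers)
    moved = False
    for (i, j) in movers:
        if not empties:
            break
        x, y = empties.pop(random.randrange(len(empties)))
        grid[x][y], grid[i][j] = grid[i][j], None
        empties.append((i, j))
        moved = True
    return moved

if __name__ == "__main__":
    grid = make_grid()
    for _ in range(200):  # iterate until everyone is happy (or give up)
        if not step(grid):
            break
    for row in grid:
        print("".join(cell or "." for cell in row))
```

Run from a well-mixed starting grid, the final printout typically shows clear clusters of X’s and O’s, even though no individual symbol wants segregation; each one merely wants not to be badly outnumbered.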
For the traveling-salesman problem, the number of possible routes grows roughly as the factorial of the number of cities involved. The factorial of a number n is equal to 1 times 2 times 3 times 4 . . . times (n – 1) times n. For twenty-seven cities, that’s about 10^28 routes to search through. At a rate of a billion routes per second, that search would take you longer than the age of the observable universe.
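The arithmetic is easy to reproduce. A few lines of Python (an illustration, not from the book) compute 27! and compare the search time, at a billion routes per second, with the age of the universe; the factorial count is a rough upper bound, since fixing the starting city and direction of travel would only shave off a constant factor.

```python
from math import factorial

routes = factorial(27)                  # ~1.1e28 orderings of 27 cities
seconds = routes / 1e9                  # at a billion routes per second
years = seconds / 3.15e7                # roughly 3.15e7 seconds in a year
age_of_universe = 1.38e10               # years, roughly

print(f"{routes:.3e} routes")                                                # 1.089e+28
print(f"{years:.2e} years of searching")                                     # ~3.5e+11
print(f"{years / age_of_universe:.0f}x the age of the observable universe")  # ~25x
```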
After a genetic algorithm has evolved, we can go back and watch what it does, trying to figure out what made it so effective. This tricky bit of reverse-engineering is increasingly a real-world challenge. Many useful computer programs operate according to genetically constructed algorithms that no human programmer actually understands, which is a scary thought.
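For readers who have never seen one, here is a toy genetic algorithm in Python (purely illustrative; it is not any particular real-world system). A population of bit strings is repeatedly selected, recombined, and mutated until it drifts toward an arbitrary target. Real evolved programs are enormously more complicated, which is exactly why reverse-engineering them is hard.

```python
import random

TARGET = [1] * 32                  # toy goal: a bit string of all 1s (purely illustrative)
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 60, 0.02

def fitness(bits):
    """How many positions match the target."""
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:POP_SIZE // 2]   # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"generation {generation}: best fitness {fitness(best)} / {len(TARGET)}")
```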
Our theories are inevitably influenced by what we already know about the world. To get a more fair view of what theism would naturally predict, we can simply look at what it did predict, before we made modern astronomical observations. The answer is: nothing like what we actually observe. Prescientific cosmologies tended to resemble the Hebrew conception illustrated in chapter 6, with Earth and humanity sitting at a special place in the cosmos. Nobody was able to use the idea of God to predict a vast space with hundreds of billions of stars and galaxies, scattered almost uniformly through the observable universe. Perhaps the closest was Giordano Bruno, who talked about an infinite cosmos among his many other heresies. He was burned at the stake.
At the risk of dramatic oversimplification, the gist of the Incompleteness Theorem is that within any consistent mathematical formal system—a set of axioms, and rules for deriving consequences from them—there will be statements that are true but cannot be proven within that system.
The unavoidable reality of our incomplete knowledge is responsible for why we find it useful to talk about the future using a language of choice and causation.
David Hume, the eighteenth-century Scottish thinker whom we’ve encountered before as a forefather of poetic naturalism, is widely regarded as a central figure of the Enlightenment.
There are many such ways, but let’s focus in on one of the simplest: the logical syllogism, paradigm of deductive reasoning. Syllogisms look like this: Socrates is a living creature. All living creatures obey the laws of physics. Therefore, Socrates obeys the laws of physics. This is just one example of the general form, which can be expressed as: X is true. If X is true, then Y is true. Therefore, Y is true. Syllogisms are not the only kind of logical argument—they’re just a particularly simple form that will suffice to make our point. The first two statements in a syllogism are the premises of the argument, while the third statement is the conclusion. An argument is said to be valid if the conclusion follows logically from the premises. In contrast, an argument is said to be sound if the conclusion follows from the premises and the premises themselves are true—a much higher standard to achieve.
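The difference between validity and soundness can be checked mechanically for a form this simple. The Python sketch below (my illustration, not the book's) enumerates the truth table and confirms that whenever both premises hold, the conclusion holds too; what it cannot tell us is whether the premises are true of the world, which is what soundness adds.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid(premises, conclusion, num_vars=2):
    """A form is valid if the conclusion is true in every row of the
    truth table where all the premises are true."""
    return all(conclusion(*values)
               for values in product([True, False], repeat=num_vars)
               if all(premise(*values) for premise in premises))

# The form from the text: X is true; if X is true, then Y is true; therefore, Y is true.
print(is_valid(premises=[lambda x, y: x,
                         lambda x, y: implies(x, y)],
               conclusion=lambda x, y: y))  # True: the form is valid

# Soundness is a further question: are the premises actually true of the world?
# No truth table can settle that; it requires knowing the facts.
```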
Reality Guides Us.
Ethics and Morality
The real lives of people whose self-conceptions do not match those that society would like them to have can be extremely challenging, and their obstacles are highly personal. No amount of academic theorizing is going to solve those problems with a simple gesture. But if we insist on talking about such situations on the basis of outdated ontologies, chances are high that we’ll end up doing more harm than good.
That’s the problem with attempting to derive ought from is: it’s logically impossible. If someone tells you they have derived ought from is, it’s like someone telling you that they’ve added together two even numbers and obtained an odd number. You don’t have to check their math to know that they’ve made a mistake.
None of this is to say that we can’t address “ought” issues using the tools of reason and rationality. There is an entire form of logical thought called instrumental rationality, devoted to answering questions of the form “Given that we want to attain a certain goal, how do we go about doing it?” The trick is deciding what we want our goal to be.
It’s tempting to say, “Everyone agrees that killing puppies is wrong.” Except that there are people who do kill puppies. So maybe we mean “Every reasonable person agrees . . .” Then we need to define “reasonable,” and realize we haven’t really made much progress at all.
Martin Luther held that Abraham’s willingness to kill Isaac was correct, given one’s fundamental need to defer to God’s will. Immanuel Kant held that Abraham should have realized that there are no conditions under which it would have been justified to sacrifice his son—and therefore the command could not actually have come from God.
This is where we see Bill and Ted’s “Be excellent to each other” falling short when it comes to providing the basis for a fully articulated ethical system. Moral quandaries are real, even if they are usually not as stark as the trolley problem. How much of our income should we spend on our own pleasure, versus putting it toward helping the less fortunate? What are the best rules governing marriage, abortion, and gender identity? How do we balance the goal of freedom against that of security?
Philosophers find it useful to distinguish between ethics and meta-ethics. Ethics is about what is right and what is wrong, what moral guidelines we should adopt for our own behavior and that of others. A statement like “killing puppies is wrong” belongs to ethics. Meta-ethics takes a step back, and asks what it means to say that something is right or wrong, and why we should adopt one set of guidelines rather than some other set. “Our system of ethics should be based on improving the well-being of conscious creatures” is a meta-ethical claim, from which “killing puppies is wrong” might be derived.
Poetic naturalism has little to say about ethics, other than perhaps a few inspirational remarks. But it does have something to say about meta-ethics, namely: our ethical systems are things that are constructed by us human beings, not discovered out there in the world, and should be evaluated accordingly.
To help with that kind of evaluation, we can contemplate some of the choices we have when it comes to ethics. Two ideas serve as a useful starting point: consequentialism and deontology. At the risk of vastly oversimplifying thousands of years of argument and contemplation, consequentialists believe that the moral implications of an action are determined by what consequences that action causes, while deontologists feel that actions are morally right or wrong in and of themselves, not because of what effects they may lead to. “The greatest good for the greatest number,” the famous maxim of utilitarianism, is a classic consequentialist way of thinking. “Do unto others as you would have them do unto you,” the Golden Rule, is an example of deontology in action.
Our minds have a System 1 that is built on heuristics, instincts, and visceral reactions, as well as a System 2 that is responsible for cognition and higher-level thoughts. Roughly speaking, System 1 tends to be responsible for our deontological impulses, and System 2 kicks in when we start thinking as consequentialists. In the words of psychologist Joshua Greene, we not only have “thinking fast and slow”; we also have “morality fast and slow.” System 2 thinks we should pull the switch, while System 1 is appalled by the idea.
Primatologist Frans de Waal has done studies to probe the origins of empathy, fairness, and cooperation in primates. In one famous experiment, he and collaborator Sarah Brosnan placed two capuchin monkeys in separate cages, each able to see the other one. When the monkeys performed a simple task, they were rewarded with a slice of cucumber. The capuchins were quite content with this setup, doing the task over and over, enjoying their cucumber. The experimenters then began rewarding one of the monkeys with grapes—a sweeter food than cucumbers, preferable in every way. The monkey who didn’t get the grapes, who was previously perfectly content with cucumbers, saw what was going on and refused to do the assigned task, outraged at the inequity of the new regime. Recent work by Brosnan’s group with chimpanzees shows cases where even the chimp who gets the grapes is unhappy—their sense of fairness is insulted. Some of our most advanced moral commitments have very old evolutionary roots.
Hume was right. We have no objective guidance on how to distinguish right from wrong: not from God, not from nature, not from the pure force of reason itself. Alive in the world, individual and contingent, we are burdened and blessed with all of the talents and inclinations and instincts that evolution and our upbringings have bequeathed to us. Those are the raw materials from which morals are constructed. Judging what is good and what is not is a quintessentially human act, and we need to face up to that reality. Morality exists only insofar as we make it so, and other people might not pass judgments in the same way that we do.
This kind of utilitarianism runs into a number of well-known problems. The attractive idea of “quantifying utility” becomes slippery when we try to put it into practice. What does it really mean to say that one person has 0.64 times the well-being of another person? How do we combine well-beings—is one person with a utility of 23 better or worse than two people with utilities of 18 each? As Derek Parfit has pointed out, if you believe that there is some positive utility in the very existence of a somewhat-satisfied human being, it follows that having a huge number of somewhat-satisfied people has more utility than a relatively smaller number of exquisitely happy people. It seems counter to our moral intuitions to think that utility can be increased just by making more people, even if they are less happy ones.
Utilitarianism doesn’t always do a good job of embodying our moral sentiments. There are some things we tend to think are just wrong, even if they increase the net happiness of the world, like going around and secretly murdering people who are lonely and unhappy.
Deontology and consequentialism, and for that matter virtue ethics and various other approaches, all capture something real about our moral impulses. We want to act in good ways; we want to make the world a better place; we want to be good people. But we also want to make sense and be internally consistent. That’s hard to do while accepting all of these competing impulses at once. In practice, moral philosophies tend to pick one approach and apply it universally. And as a result of that, we often end up with conclusions that don’t sit easily with the premises we started with.
Abraham was commanded by God to do something horrible. It was a great challenge to his humanity, but given his view of the world, the correct course of behavior was clear: if you are certain that God is telling you to do something, that’s what you do. Poetic naturalism refuses to offer us the consolation of objective moral certainty. There is no “right” answer to the trolley problem. How you should act depends on who you are.
We’re going to be faced with the kinds of moral questions that our ancestors could not possibly have contemplated, from human-machine interfacing to the exploration of new planets. Engineers working on self-driving cars have already begun to realize that the software is going to have to be programmed to solve certain kinds of trolley problems. Poetic naturalism doesn’t tell us how to behave, but it warns us away from the false complacency associated with the conviction that our morals are objectively the best. Our lives are changing in unpredictable ways; we need to be able to make judgments with clear eyes and an accurate picture of how the world operates. We don’t need an immovable place to stand; we need to make our peace with a universe that doesn’t care what we do, and take pride in the fact that we care anyway.
But I hope I never make the mistake of treating people who disagree with me about the fundamental nature of reality as my enemies.
Purpose and Meaning
There must be a reason why it happened. As horrible as the death of a child necessarily is, it becomes more sensible to us if it can somehow be explained as the result of someone’s actions, rather than simply random chance. Looking for causes and reasons is a deeply ingrained human impulse. We are pattern-recognizing creatures, quick to see faces in craters on Mars or connections between the location of Venus in the sky and the state of our love life.
Nothing puts human existence into context quite like contemplating the cosmos. What you might not guess, sitting comfortably in your living room with a glass of wine and a good book, is that what’s happening in your immediate neighborhood is dramatically affected by the evolution of the whole universe.
Science can help us live longer, or journey to the moon. But can it tell us what kind of life to live, or account for the feeling of awe that overcomes us when we contemplate the heavens? What becomes of meaning and purpose when we can’t rely on gods to provide them?
Theists think they have a better answer: God exists, and the reason why the universe exists in this particular way is because that’s how God wanted it to be. Naturalists tend to find this unpersuasive: Why does God exist? But there is an answer to that, or at least an attempted one, which we alluded to at the beginning of this chapter. The universe, according to this line of reasoning, is contingent; it didn’t have to exist, and it could have been otherwise, so its existence demands an explanation. But God is a necessary being; there is no optionality about his existence, so no further explanation is required. Except that God isn’t a necessary being, because there are no such things as necessary beings. All sorts of versions of reality are possible, some of which have entities one would reasonably identify with God, and some of which don’t. We can’t short-circuit the difficult task of figuring out what kind of universe we live in by relying on a priori principles.
It takes courage to face up to the finitude of our lives, and even more courage to admit the limits of purpose in our existence.
When our lives are in good shape, and we are enjoying health and leisure, what do we do? We play. Once the basic requirements of food and shelter have been met, we immediately invent games and puzzles and competitions. That’s a lighthearted and fun manifestation of a deeper impulse: we enjoy challenging ourselves, accomplishing things, having something to show for our lives.
The construction of meaning is a fundamentally individual, subjective, creative enterprise, and an intimidating responsibility. As Carl Sagan put it, “We are star stuff, which has taken its destiny into its own hands.”
Julian Barnes, in his novel A History of the World in 10 1/2 Chapters, imagines a version of what heaven would be like.
Desire Is Built into Life.
What Matters Is What Matters to People.
We Can Always Do Better.
It Takes All Kinds.
The Universe Is in Our Hands.
We Can Do Better Than Happiness.
The mistake we make in putting emphasis on happiness is to forget that life is a process, defined by activity and motion, and to search instead for the one perfect state of being. There can be no such state, since change is the essence of life. Scholars who study meaning in life distinguish between synchronic meaning and diachronic meaning. Synchronic meaning depends on your state of being at any one moment in time: you are happy because you are out in the sunshine. Diachronic meaning depends on the journey you are on: you are happy because you are making progress toward a college degree. If we permit ourselves to take inspiration from what we have learned about ontology, it might suggest that we focus more on diachronic meaning at the expense of synchronic. The essence of life is change, and we can aim to make change part of how we find meaning in it.
We have aspirations that reach higher than happiness. We’ve learned so much about the scope and workings of the universe, and about how to live together and find meaning and purpose in our lives, precisely because we are ultimately unwilling to take comforting illusions as final answers.
Albert Camus, the French existentialist novelist and philosopher, outlined some of his approach to life in his essay “The Myth of Sisyphus.”
This is life—a tiny sliver of the tangible, real experience of the world.
All lives are different, and some face hardships that others will never know. But we all share the same universe, the same laws of nature, and the same fundamental task of creating meaning and of mattering for ourselves and those around us in the brief amount of time we have in the world. Three billion heartbeats. The clock is ticking.
Time and Mortality
Everybody dies. Life is not a substance, like water or rock; it’s a process, like fire or a wave crashing on the shore. It’s a process that begins, lasts for a while, and ultimately ends. Long or short, our moments are brief against the expanse of eternity.
It’s that tendency for entropy to increase that is responsible for the existence of time’s arrow. It’s easy to break eggs, and hard to unbreak them; cream and coffee mix together, but don’t unmix; we were all born young, and gradually grow older; we remember what happened yesterday, but we don’t remember what will happen tomorrow. Most of all, what causes an event must precede the event, not come afterward.
Just as there is no reference to “causes” in the fundamental laws of physics, there isn’t an arrow of time, either. The laws treat the past and future on an equal footing. But the usefulness of our everyday language of explanation and causation is intimately tied to time’s arrow. Without it, those terms wouldn’t be a useful way of talking about the universe at all.
Even for these complicated processes, it turns out, there is a time-reversed process that is perfectly compatible with the laws of physics. Eggs could unbreak, perfume could go back into its bottle, cream and coffee could unmix. All we have to do is to imagine reversing the trajectory of every single particle of which our system (and anything it was interacting with) is made. None of these processes violates the laws of physics—it’s just that they are extraordinarily unlikely. The real question is not why we never see eggs unbreaking toward the future; it’s why we see them unbroken in the past.
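A toy model makes the statistical character of this point vivid. In the Python sketch below (a standard Ehrenfest-style urn model, offered as an illustration rather than anything from the book), particles hop at random between the two halves of a box. Each individual hop is just as allowed as its reverse, yet the coarse-grained entropy, the logarithm of the number of microstates compatible with the current left/right count, almost always climbs from its low starting value and then hovers near the maximum.

```python
import math
import random

N = 100        # particles in a box divided into two halves
n_left = N     # start with every particle on the left: a very low-entropy macrostate

def entropy(n_left):
    """Boltzmann-style entropy: log of the number of microstates (which particles
    are on the left) compatible with the macrostate (how many are on the left)."""
    return math.log(math.comb(N, n_left))

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step:4d}   n_left = {n_left:3d}   S = {entropy(n_left):6.2f}")
    # pick a random particle and let it hop to the other half of the box
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1
```

Decreasing entropy is not forbidden in this model either; it is simply swamped by the counting, which is the sense in which unbreaking eggs are allowed but never seen.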
And then along came Laplace to tell us differently. Information about the precise state of the universe is conserved over time; there is no fundamental difference between the past and the future. Nowhere in the laws of physics are there labels on different moments of time to indicate “has happened yet” and “has not happened yet.” Those laws refer equally well to any moment in time, and they tie all of the moments together in a unique order.
We can highlight three ways that the past and future seem radically different to us: (1) we remember the past, but not the future; (2) causes precede their effects; (3) we can make choices that affect the future, but not the past.
Think of walking down the street and noticing a broken egg lying on the sidewalk. Ask yourself what the future of that egg might have in store, in comparison with its recent past. In the future, the egg might wash away in a storm, or a dog might come by and lap it up, or it might just fester for a few more days. Many possibilities are open. In the past, however, the basic picture is much more constrained: it seems exceedingly likely that the egg used to be unbroken, and was dropped or thrown to this location. We don’t actually have any direct access to the past of the egg, any more than we do its future. But we think we know more about where it came from than where it might be going. Ultimately, even if we don’t realize it, the source of our confidence is the fact that entropy was lower in the past.
There is a much more profound implication of accepting the Core Theory as underlying the world of our everyday experience. Namely: there is no life after death. We each have a finite time as living creatures, and when it’s over, it’s over.
When my husband died, because he was so famous and known for not being a believer, many people would come up to me—it still sometimes happens—and ask me if Carl changed at the end and converted to a belief in an afterlife. They also frequently ask me if I think I will see him again. Carl faced his death with unflagging courage and never sought refuge in illusions. The tragedy was that we knew we would never see each other again. I don’t ever expect to be reunited with Carl. But, the great thing is that when we were together, for nearly twenty years, we lived with a vivid appreciation of how brief and precious life is. We never trivialized the meaning of death by pretending it was anything other than a final parting. Every single moment that we were alive and we were together was miraculous—not miraculous in the sense of inexplicable or supernatural. We knew we were beneficiaries of chance. . . . That pure chance could be so generous and so kind. . . . That we could find each other, as Carl wrote so beautifully in Cosmos, you know, in the vastness of space and the immensity of time. . . . That we could be together for twenty years. That is something which sustains me and it’s much more meaningful. . . . The way he treated me and the way I treated him, the way we took care of each other and our family, while he lived. That is so much more important than the idea I will see him someday. I don’t think I’ll ever see Carl again. But I saw him. We saw each other. We found each other in the cosmos, and that was wonderful.