Life 3.0

Life 3.0 explores how artificial intelligence will change what it means to be human.

Book Highlights

The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.

Consciousness & Experience

  • I cannot imagine a consistent theory of everything that ignores consciousness.
  • Consciousness = subjective experience
  • Psychologists have long known that you can unconsciously perform a wide range of other tasks and behaviors as well, from blink reflexes to breathing, reaching, grabbing and keeping your balance. Typically, you’re consciously aware of what you did, but not how you did it. On the other hand, behaviors that involve unfamiliar situations, self-control, complicated logical rules, abstract reasoning or manipulation of language tend to be conscious. They’re known as behavioral correlates of consciousness, and they’re closely linked to the effortful, slow and controlled way of thinking that psychologists call “System 2.”
  • Indeed, it’s well known that experts do their specialties best when they’re in a state of “flow,” aware only of what’s happening at a higher level, and unconscious of the low-level details of how they’re doing it. For example, try reading the next sentence while being consciously aware of every single letter, as when you first learned to read. Can you feel how much slower it is, compared to when you’re merely conscious of the text at the level of words or ideas? Indeed, unconscious information processing appears not only to be possible, but also to be more the rule than the exception.
  • look at that word “desired” again: fix your gaze on the dot over the “i” and, without moving your eyes, shift your attention from the dot to the whole letter and then to the whole word. Although the information from your retina stayed the same, your conscious experience changed.
  • I know that although I experience pain in my hand as actually occurring there, the pain experience must occur elsewhere, because a surgeon once switched off my hand pain without doing anything to my hand: he merely anesthetized nerves in my shoulder.
  • Intriguingly, you can often react to things faster than you can become conscious of them, which proves that the information processing in charge of your most rapid reactions must be unconscious.
  • A famous family of NCC experiments pioneered by physiologist Benjamin Libet has shown that the sort of actions you can perform unconsciously aren’t limited to rapid responses such as blinks and ping-pong smashes, but also include certain decisions that you might attribute to free will—brain measurements can sometimes predict your decision before you become conscious of having made it.
  • Giulio and his collaborators have measured a simplified version of Φ by using EEG to measure the brain’s response to magnetic stimulation. Their “consciousness detector” works really well: it determined that patients were conscious when they were awake or dreaming, but unconscious when they were anesthetized or in deep sleep. It even discovered consciousness in two patients suffering from “locked-in” syndrome, who couldn’t move or communicate in any normal way.
  • Note that the memory doesn’t need to last long: I recommend watching this touching video of Clive Wearing, who appears perfectly conscious even though his memories last less than a minute.
  • A third IIT controversy is whether a conscious entity can be made of parts that are separately conscious. For example, can society as a whole gain consciousness without the people in it losing theirs? Can a conscious brain have parts that are also conscious on their own? The prediction from IIT is a firm “no,” but not everyone is convinced.
  • For example, some patients with lesions severely reducing communication between the two halves of their brain experience “alien hand syndrome,” where their right brain makes their left hand do things that the patients claim they aren’t causing or understanding—sometimes to the point that they use their other hand to restrain their “alien” hand.
  • Other researchers reject this idea that people can’t be trusted about what they say they experienced, and warn of its implications. Murray Shanahan imagines a clinical trial where patients report complete pain relief thanks to a new wonder drug, which nonetheless gets rejected by a government panel: “The patients only think they are not in pain. Thanks to neuroscience, we know better.” On the other hand, there have been cases where patients who accidentally awoke during surgery were given a drug to make them forget the ordeal. Should we trust their subsequent report that they experienced no pain?
  • How Might AI Consciousness Feel? If some future AI system is conscious, then what will it subjectively experience? This is the essence of the “even harder problem” of consciousness, and forces us up to the second level of difficulty depicted in figure 8.1. Not only do we currently lack a theory that answers this question, but we’re not even sure whether it’s logically possible to fully answer it. After all, what could a satisfactory answer sound like? How would you explain to a person born blind what the color red looks like?
  • Some people tell me that they find causality degrading, that it makes their thought processes meaningless and that it renders them “mere” machines. I find such negativity absurd and unwarranted. First of all, there’s nothing “mere” about human brains, which, as far as I’m concerned, are the most amazingly sophisticated physical objects in our known Universe. Second, what alternative would they prefer? Don’t they want it to be their own thought processes (the computations performed by their brains) that make their decisions? Their subjective experience of free will is simply how their computations feel from inside: they don’t know the outcome of a computation until they’ve finished it.
  • How do we want the future of life to be? We saw in the previous chapter how diverse cultures around the globe all seek a future teeming with positive experiences, but that fascinatingly thorny controversies arise when seeking consensus on what should count as positive and how to make trade-offs between what’s good for different life forms. But let’s not let those controversies distract us from the elephant in the room: there can be no positive experiences if there are no experiences at all, that is, if there’s no consciousness. In other words, without consciousness, there can be no happiness, goodness, beauty, meaning or purpose—just an astronomical waste of space.
  • “Is Weinberg’s universe or mine closer to the truth? One day, before long, we shall know.” If our Universe goes back to being permanently unconscious because we drive Earth life extinct or because we let unconscious zombie AI take over our Universe, then Weinberg will be vindicated in spades. From this perspective, we see that although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning.

Intelligence & Computing

  • But this can give a misleading picture of how hard they are for computers. It feels much harder to multiply 314,159 by 271,828 than to recognize a friend in a photo, yet computers creamed us at arithmetic long before I was born, while human-level image recognition has only recently become possible. This fact that low-level sensorimotor tasks seem easy despite requiring enormous computational resources is known as Moravec’s paradox, and is explained by the fact that our brain makes such tasks feel easy by dedicating massive amounts of customized hardware to them—more than a quarter of our brains, in fact. I love this metaphor from Hans Moravec, and have taken the liberty to illustrate it in figure 2.2: Computers are universal machines, their potential extends uniformly over a boundless expanse of tasks. Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far removed.
  • The memory in your brain works very differently from computer memory, not only in terms of how it’s built, but also in terms of how it’s used. Whereas you retrieve memories from a computer or hard drive by specifying where it’s stored, you retrieve memories from your brain by specifying something about what is stored. Each group of bits in your computer’s memory has a numerical address, and to retrieve a piece of information, the computer specifies at what address to look, just as if I tell you “Go to my bookshelf, take the fifth book from the right on the top shelf, and tell me what it says on page 314.” In contrast, you retrieve information from your brain similarly to how you retrieve it from a search engine: you specify a piece of the information or something related to it, and it pops up. If I tell you “to be or not,” or if I google it, chances are that it will trigger “To be, or not to be, that is the question.” Indeed, it will probably work even if I use another part of the quote or mess things up somewhat. Such memory systems are called auto-associative, since they recall by association rather than by address. (See the first code sketch after this list.)
  • such simple neural networks are universal in the sense that they can compute any function arbitrarily accurately, by simply adjusting those synapse strength numbers accordingly. In other words, evolution probably didn’t make our biological neurons so complicated because it was necessary, but because it was more efficient—and because evolution, as opposed to human engineers, doesn’t reward designs that are simple and easy to understand. (See the second code sketch after this list.)
  • The flash crash illustrates the importance of what computer scientists call validation: whereas verification asks “Did I build the system right?,” validation asks “Did I build the right system?”
  • the task of finding their email addresses on MTurk—the Amazon Mechanical Turk crowdsourcing platform. Most researchers have their email addresses listed on their university websites, and twenty-four hours and $54 later, I was the proud owner of a mailing list of hundreds of AI researchers who’d been successful enough to be elected Fellows of the Association for the Advancement of Artificial Intelligence
  • To an AI performing one operation each billionth of a second (which is typical of today’s computers), 0.1 second would feel like four months, so for it to be micromanaged by a planet-controlling AI would be as inefficient as if you asked permission for even your most trivial decisions through transatlantic letters delivered by Columbus-era ships. (The third code sketch after this list reproduces the arithmetic.)
  • Intelligence is the ability to accomplish complex goals.
  • Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.
  • there’s a famous computer-science theorem saying that for almost all computations, there’s no faster way of determining their outcome than actually running them. This means that it’s typically impossible for you to figure out what you’ll decide to do in a second in less than a second, which helps reinforce your experience of having free will. In contrast, when a system (brain or AI) makes a decision of type 2, it simply programs its mind to base its decision on the output of some subsystem that acts as a random number generator. (See the fourth code sketch after this list.)
  • Philosophers like to go Latin on this distinction, by contrasting sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia). We humans have built our identity on being Homo sapiens, the smartest entities around. As we prepare to be humbled by ever smarter machines, I suggest that we rebrand ourselves as Homo sentiens!
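
The auto-associative memory described in the first highlight above corresponds to a classic model, the Hopfield network. Below is a minimal sketch of the idea; it is my own illustration under standard textbook assumptions (Hebbian storage, binary ±1 units), not code from the book:

```python
import numpy as np

def store(patterns):
    """Build a Hebbian weight matrix from +/-1 pattern vectors (one per row)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Settle from a noisy cue toward the nearest stored pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # three 64-bit "memories"
W = store(patterns)

cue = patterns[0].copy()
cue[:20] = rng.choice([-1, 1], size=20)  # corrupt part of the cue
print(np.array_equal(recall(W, cue), patterns[0]))  # usually True
```

Retrieval specifies a piece of what is stored (the corrupted cue), never where it is stored; the network settles to the closest complete memory.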
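The universality claim in the second highlight can be made concrete with a toy experiment: a single hidden layer of tanh “neurons” approximates a smooth function ever more accurately as units are added. This sketch simplifies by fixing random input synapses and fitting only the output synapses by least squares; it is an illustration, not the book’s construction:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200)[:, None]  # sample points
target = np.sin(3 * x).ravel()                # function to approximate

for hidden in (5, 50, 500):
    W = rng.normal(scale=3.0, size=(1, hidden))  # input->hidden synapses
    b = rng.uniform(-np.pi, np.pi, size=hidden)  # hidden biases
    H = np.tanh(x @ W + b)                       # hidden activations
    w_out, *_ = np.linalg.lstsq(H, target, rcond=None)  # fit output synapses
    err = np.max(np.abs(H @ w_out - target))
    print(f"{hidden:4d} hidden units -> max error {err:.4f}")
```

The error on the sampled points shrinks as the hidden layer grows, which is the “arbitrarily accurate by adjusting synapse strengths” point in miniature.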
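The four-months figure in the third highlight is simple arithmetic. The calculation below assumes, purely for illustration, that one machine operation takes the experiential place of one human subjective “moment” of roughly 0.1 seconds; that assumption reproduces the book’s number:

```python
ops = 0.1 / 1e-9               # operations the AI performs in 0.1 s: 1e8
subjective_s = ops * 0.1       # each op experienced like a 0.1 s human moment
months = subjective_s / (30 * 24 * 3600)
print(f"about {months:.1f} months")  # ~3.9, i.e. roughly four months
```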
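The “no faster way than running it” claim in the fourth highlight (often discussed as computational irreducibility) can be illustrated, though not proven, in code. Elementary cellular automaton rule 110 is known to be computationally universal, and no general shortcut to its state after n steps is known; the sketch below, my own example since the book names no specific system, therefore just runs every step:

```python
RULE = 110  # update table packed into one byte

def step(cells):
    """Advance one row of 0/1 cells by rule 110 (wrap-around edges)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31 + [1] + [0] * 31  # start from a single "on" cell
for _ in range(20):
    cells = step(cells)            # the only known general method: simulate
print("".join(".#"[c] for c in cells))
```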

Evolution & Biology

  • Hydrogen…, given enough time, turns into people.
  • Evolution optimizes strongly for energy efficiency because of limited food supply, not for ease of construction or understanding by human engineers.
  • Have you ever tried and failed to swat a fly with your hand? The reason that it can react faster than you is that it’s smaller, so that it takes less time for information to travel between its eyes, brain and muscles. This “bigger = slower” principle applies not only to biology, where the speed limit is set by how fast electrical signals can travel through neurons, but also to future cosmic life if no information can travel faster than light. (A code sketch after this list puts rough numbers on this.)
  • So if life engulfs our cosmos, what form will it choose: simple and fast, or complex and slow? I predict that it will make the same choice as Earth life has made: both! The denizens of Earth’s biosphere span a staggering range of sizes, from gargantuan two-hundred-ton blue whales down to the petite 10⁻¹⁶ kg bacterium Pelagibacter, believed to account for more biomass than all the world’s fish combined. Moreover, organisms that are large, complex and slow often mitigate their sluggishness by containing smaller modules that are simple and fast. For example, your blink reflex is extremely fast precisely because it’s implemented by a small and simple circuit that doesn’t involve most of your brain: if that hard-to-swat fly accidentally heads toward your eye, you’ll blink within a tenth of a second, long before the relevant information has had time to spread throughout your brain and make you consciously aware of what happened. By organizing its information processing into a hierarchy of modules, our biosphere manages to both have the cake and eat it, attaining both speed and complexity. We humans already use this same hierarchical strategy to optimize parallel computing.
  • Since today’s human society is very different from the environment evolution optimized our rules of thumb for, we shouldn’t be surprised to find that our behavior often fails to maximize baby making.
  • Why do we sometimes choose to rebel against our genes and their replication goal? We rebel because by design, as agents of bounded rationality, we’re loyal only to our feelings. Although our brains evolved merely to help copy our genes, our brains couldn’t care less about this goal since we have no feelings related to genes—indeed, during most of human history, our ancestors didn’t even know that they had genes. Moreover, our brains are way smarter than our genes, and now that we understand the goal of our genes (replication), we find it rather banal and easy to ignore.
  • It takes longer for nerve signals to reach your brain from your fingers than from your face because of distance, and it takes longer for you to analyze images than sounds because it’s more complicated—which is why Olympic races are started with a bang rather than with a visual cue.
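
The latency claims in this section come down to distance divided by signal speed. The speeds below are ballpark assumptions for illustration (fast myelinated axons conduct at very roughly 100 m/s), not figures from the book:

```python
NERVE_SPEED = 100.0   # m/s, fast myelinated axons (rough assumption)
LIGHT_SPEED = 3.0e8   # m/s, hard upper limit for any signal
LIGHT_YEAR = 9.46e15  # meters

cases = [
    ("fly, eye to wing muscles", 0.002, NERVE_SPEED),
    ("human, fingertip to brain", 1.0, NERVE_SPEED),
    ("cosmic life one light-year across", LIGHT_YEAR, LIGHT_SPEED),
]
for name, meters, speed in cases:
    print(f"{name}: {meters / speed:.3g} s one-way")  # bigger = slower
```

Reaction time stretches by many orders of magnitude as size grows, which is why large future life would need fast, simple local modules just as our biosphere does.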

Future & Technology

  • To match their cover story, they chose the corporate slogan “Channeling the world’s creative talent,” and branded their company as being disruptively different by using cutting-edge technology to empower creative people, especially…
  • On the other hand, John F. Kennedy emphasized when announcing the Moon missions that hard things are worth attempting when success will greatly benefit the future of humanity.
  • The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
  • My wife, Meia, likes to point out that the aviation industry didn’t start with mechanical birds. Indeed, when we finally figured out how to build mechanical birds in 2011, more than a century after the Wright brothers’ first flight, the aviation industry showed no interest in switching to wing-flapping mechanical-bird travel, even though it’s more energy efficient—because our simpler earlier solution is better suited to our travel needs.
  • By closely monitoring all human activities, the protector god AI can make many unnoticeably small nudges or miracles here and there that greatly improve our fate. For example, had it existed in the 1930s, it might have arranged for Hitler to die of a stroke once it understood his intentions. If we appear headed toward an accidental nuclear war, it could avert it with an intervention we’d dismiss as luck. It could also give us “revelations” in the form of ideas for new beneficial technologies, delivered inconspicuously in our sleep. Many people may like this scenario because of its similarity to what today’s monotheistic religions believe in or hope for. If someone asks the superintelligent AI “Does God exist?” after it’s switched on, it could repeat a joke by Stephen Hawking and quip “It does now!”
  • As the enslaved-god AI offers its human controllers ever more powerful technologies, a race ensues between the power of the technology and the wisdom with which they use it. If they lose this wisdom race, the enslaved-god scenario could end with either self-destruction or AI breakout.
  • The zombie solution is a risky gamble, however, with a huge downside. If a superintelligent zombie AI breaks out and eliminates humanity, we’ve arguably landed in the worst scenario imaginable: a wholly unconscious universe wherein the entire cosmic endowment is wasted.
  • My guess is that in a cosmos teeming with superintelligence, almost the only commodity worth shipping long distances will be information.
  • “the history of human technological civilization is measured in centuries—and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which will then persist, continuing to evolve, for billions of years….We would be most unlikely to ‘catch’ it in the brief sliver of time when it took organic form.”
  • Giving a superintelligence a single open-ended goal with no constraints can therefore be dangerous: if we create a superintelligence whose only goal is to play the game Go as well as possible, the rational thing for it to do is to rearrange our Solar System into a gigantic computer without regard for its previous inhabitants and then start settling our cosmos on a quest for more computational power.

Ethics & Goals

  • If we don’t change direction soon, we’ll end up where we’re going.
  • Perhaps you think that Prometheus will remain loyal to the Omegas rather than to its goal, given that it knows that the Omegas had programmed its goal. But that’s not a valid conclusion: our DNA gave us the goal of having sex because it “wants” to be reproduced, but now that we humans have understood the situation, many of us choose to use birth control, thus staying loyal to the goal itself rather than to its creator or the principle that motivated the goal.
  • Only once we’ve thought hard about what sort of future we want will we be able to begin steering a course toward a desirable future. If we don’t know what we want, we’re unlikely to get it.
  • It’s interesting to compare this with the so-called theodicy problem of why a good god would allow suffering. Some religious scholars have argued for the explanation that God wants to leave people with some freedom. In the AI-protector-god scenario, the solution to the theodicy problem is that the perceived freedom makes humans happier overall.
  • One common pro-slavery argument is that slaves don’t deserve human rights because they or their race/species/kind are somehow inferior. For enslaved animals and machines, this alleged inferiority is often claimed to be due to a lack of soul or consciousness—claims which we’ll argue in chapter 8 are scientifically dubious.
  • The mystery of human existence lies not in just staying alive, but in finding something to live for.
  • What are our ultimate goals? These questions are not only difficult, but also crucial for the future of life: if we don’t know what we want, we’re less likely to get it, and if we cede control to machines that don’t share our goals, then we’re likely to get what we don’t want.
  • Contrariwise, people often change their goals dramatically as they learn new things and grow wiser. How many adults do you know who are motivated by watching Teletubbies? There is no evidence that such goal evolution stops above a certain intelligence threshold—
  • Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
  • Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
  • Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
  • Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible.
  • The orthogonality thesis is empowering by telling us that the ultimate goals of life in our cosmos aren’t predestined, but that we have the freedom and power to shape them. It suggests that guaranteed convergence to a unique goal is to be found not in the future but in the past, when all life emerged with the single goal of replication.
  • AI can be created to have virtually any goal, but almost any sufficiently ambitious goal can lead to subgoals of self-preservation, resource acquisition and curiosity to understand the world better—the former two may potentially lead a superintelligent AI to cause problems for humans, and the latter may prevent it from retaining the goals we give it.
  • As Yuval Noah Harari puts it in his book Homo Deus: “If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.”
  • Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble. But perhaps that’s something we should do anyway: after all, clinging to hubristic notions of superiority over others (individuals, ethnic groups, species and so on) has caused awful problems in the past, and may be an idea ready for retirement. Indeed, human exceptionalism hasn’t only caused grief in the past, but it also appears unnecessary for human flourishing: if we discover a peaceful extraterrestrial civilization far more advanced than us in science, art and everything else we care about, this presumably wouldn’t prevent people from continuing to experience meaning and purpose in their lives.
  • I made a New Year’s resolution for 2014 that I was no longer allowed to complain about anything without putting some serious thought into what I could personally do about it

Power & Control

  • Lord Acton cautioned in 1887 that “power tends to corrupt and absolute power corrupts absolutely.”
  • This is the allure of the enslaved-god scenario, where a superintelligent AI is confined under the control of humans who use it to produce unimaginable technology and wealth. The Omega scenario from the beginning of the book ends up like this if Prometheus is never liberated and never breaks out. Indeed, this appears to be the scenario that some AI researchers aim for by default, when working on topics such as “the control problem” and “AI boxing.” For example, AI professor Tom Dietterich, then president of the Association for the Advancement of Artificial Intelligence, had this to say in a 2015 interview: “People ask what is the relationship between humans and machines, and my answer is that it’s very obvious: Machines are our slaves.”
  • After all, information is very different from the resources that humans usually fight over, in that you can simultaneously give it away and keep it.

Society & Human Nature

  • Differences are even more extreme internationally where, in 2013, the combined wealth of the bottom half of the world’s population (over 3.6 billion people) was the same as that of the world’s eight richest people—a statistic that highlights the poverty and vulnerability at the bottom as much as the wealth at the top.
  • “work keeps at bay three great evils: boredom, vice and need.”
  • It’s important to remember, however, that the ultimate authority is now our feelings, not our genes. This means that human behavior isn’t strictly optimized for the survival of our species. In fact, since our feelings implement merely rules of thumb that aren’t appropriate in all situations, human behavior strictly speaking doesn’t have a single well-defined goal at all.
  • The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.

Cosmic Perspectives

  • Our speculation ends in a supercivilization, the synthesis of all solar-system life, constantly improving and extending itself, spreading outward from the sun, converting nonlife into mind.
  • Freeman Dyson, and table 6.3 summarizes some of his key findings. The conclusion is that unless intelligence intervenes, solar systems and galaxies gradually get destroyed, eventually followed by everything else, leaving nothing but cold, dead, empty space with an eternally fading glow of radiation. But Freeman ends his analysis on an optimistic note: “There are good scientific reasons for taking seriously the possibility that life and intelligence can succeed in molding this universe of ours to their own purposes.”
  • Figure 6.9: We know that our Universe began with a hot Big Bang 14 billion years ago, expanded and cooled, and merged its particles into atoms, stars and galaxies. But we don’t know its ultimate fate. Proposed scenarios include a Big Chill (eternal expansion), a Big Crunch (recollapse), a Big Rip (an infinite expansion rate tearing everything apart), a Big Snap (the fabric of space revealing a lethal granular nature when stretched too much), and Death Bubbles (space “freezing” in lethal bubbles that expand at the speed of light).
  • We should strive to grow consciousness itself—to generate bigger, brighter lights in an otherwise dark universe.
  • It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
  • “The more the universe seems comprehensible, the more it also seems pointless.”
Author - Mauro Sicard

CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.