Nexus

Nexus reveals how information has shaped humanity’s past and now threatens its future.

Book Highlights

The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.

Information Networks and Power

  • While over the generations human networks have grown increasingly powerful, they have not necessarily grown increasingly wise. If a network privileges order over truth, it can become very powerful but use that power unwisely.
  • Ownership is still an intersubjective reality created by exchanging information, but the information now takes the form of a written document (or a computer file) rather than of people talking and gesturing to each other.
  • In the past, organizations like newspapers, radio stations, and established political parties acted as gatekeepers, deciding who was heard in the public sphere. Social media undermined the power of these gatekeepers, leading to a more open but also more anarchical public conversation.
  • Many point the finger at social media algorithms. We have explored the divisive impact of social media in previous chapters, but despite the damning evidence it seems that there must be additional factors at play. The truth is that while we can easily observe that the democratic information network is breaking down, we aren’t sure why. That itself is a characteristic of the times. The information network has become so complicated, and it relies to such an extent on opaque algorithmic decisions and inter-computer entities, that it has become very difficult for humans to answer even the most basic of political questions: Why are we fighting each other?
  • In traditional industries like restaurants, size isn’t an overwhelming advantage. McDonald’s is a worldwide chain that feeds more than fifty million people a day, and its size gives it many advantages in terms of costs, branding, and so forth. You can nevertheless open a neighborhood restaurant that could hold its own against the local McDonald’s. Even though your restaurant might be serving just two hundred customers a day, you still have a chance of making better food than McDonald’s and gaining the loyalty of happier customers.
  • It works differently in the information market. The Google search engine is used every day by between two and three billion people making 8.5 billion searches. Suppose a local start-up search engine tries to compete with Google. It doesn’t stand a chance. Because Google is already used by billions, it has so much more data at its disposal that it can train far better algorithms, which will attract even more traffic, which will be used to train the next generation of algorithms, and so on. Consequently, in 2023 Google controlled 91.5 percent of the global search market. [A toy simulation of this feedback loop follows this list.]
  • The fate of Tiberius indicates the delicate balance that all dictators must strike. They try to concentrate all information in one place, but they must be careful that the different channels of information are allowed to merge only in their own person. If the information channels merge somewhere else, that then becomes the true nexus of power. When the regime relies on humans like Sejanus and Macro, a skillful dictator can play them one against the other in order to remain on top. Stalin’s purges were all about that. Yet when a regime relies on a powerful but inscrutable AI that gathers and analyzes all information, the human dictator is in danger of losing all power.
  • Of course, no matter whether the world is divided between a few digital empires, remains a more diverse community of two hundred nation-states, or is split along altogether different and unforeseen lines, cooperation is always an option. Among humans, the precondition for cooperation isn’t similarity; it is the ability to exchange information. As long as we are able to converse, we might find some shared story that can bring us closer. This, after all, is what made Homo sapiens the dominant species on the planet.
  • And there are many situations when, in order to take care of our compatriots, we need to cooperate with foreigners. COVID-19 provided us with one obvious example. Pandemics are global events, and without global cooperation it is hard to contain them, let alone prevent them. When a new virus or a mutant pathogen appears in one country, it puts all other countries in danger. Conversely, the biggest advantage of humans over pathogens is that we can cooperate in ways that pathogens cannot. Doctors in Germany and Brazil can alert one another to new dangers, give one another good advice, and work together to discover better treatments.
  • Mearsheimer then asks “how much power states want” and answers that all states want as much power as they can get, “because the international system creates powerful incentives for states to look for opportunities to gain power at the expense of rivals.” He concludes, “A state’s ultimate goal is to be the hegemon in the system.”
  • The main argument of this book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use that power unwisely. Our problem, then, is a network problem.
  • Even more specifically, it is an information problem. Information is the glue that holds networks together.
  • We should not assume that delusional networks are doomed to failure. If we want to prevent their triumph, we will have to do the hard work ourselves.
  • The naive view argues that by gathering and processing much more information than individuals can, big networks achieve a better understanding of medicine, physics, economics, and numerous other fields, which makes the network not only powerful but also wise. For example, by gathering information on pathogens, pharmaceutical companies and health-care services can determine the true causes of many diseases, which enables them to develop more effective medicines and to make wiser decisions about their usage. This view posits that in sufficient quantities information leads to truth, and truth in turn leads to both power and wisdom.
  • In recent generations humanity has experienced the greatest increase ever in both the amount and the speed of our information production. Every smartphone contains more information than the ancient Library of Alexandria and enables its owner to instantaneously connect to billions of other people throughout the world. Yet with all this information circulating at breathtaking speeds, humanity is closer than ever to annihilating itself.
  • Information is increasingly seen by many philosophers and biologists, and even by some physicists, as the most basic building block of reality, more elementary than matter and energy.
  • Instead of trying to represent preexisting things, DNA helps to produce entirely new things. For instance, various strings of DNA nucleobases initiate cellular chemical processes that result in the production of adrenaline. Adrenaline too doesn’t represent reality in any way. Rather, adrenaline circulates through the body, initiating additional chemical processes that increase the heart rate and direct more blood to the muscles. DNA and adrenaline thereby help to connect trillions of cells in the heart, legs, and other body parts to form a functioning network that can do remarkable things, like run away from a lion.
  • Sometimes networks can be connected without any attempt to represent reality, neither accurate nor erroneous, as when genetic information connects trillions of cells or when a stirring musical piece connects thousands of humans.
  • Sapiens are capable of doing such things because we are far more flexible than chimps and can simultaneously cooperate in even larger numbers than ants. In fact, there is no upper limit to the number of Sapiens who can cooperate with one another. The Catholic Church has about 1.4 billion members. China has a population of about 1.4 billion. The global trade network connects about 8 billion Sapiens.
  • The tribal network, then, acted like an insurance policy. It minimized risk by spreading it across a lot more people.
  • As time passed, problems of interpretation increasingly tilted the balance of power between the holy book and the church in favor of the institution. Just as the need to interpret Jewish holy books empowered the rabbinate, so the need to interpret Christian holy books empowered the church.
  • Such information-for-information deals are already ubiquitous. Each day billions of us conduct numerous transactions with the tech giants, but one could never guess that from our bank accounts, because hardly any money is moving. We get information from the tech giants, and we pay them with information. As more transactions follow this information-for-information model, the information economy grows at the expense of the money economy, until the very concept of money becomes questionable.
  • A person or corporation with little money in the bank but a huge data bank of information could be the wealthiest, or most powerful, entity in the country. In theory, it might be possible to quantify the value of their information in monetary terms, but they never actually convert the information into dollars or pesos. Why do they need dollars, if they can get what they want with information?
  • Information is different. Unlike cotton and oil, digital data can be sent from Malaysia or Egypt to Beijing or San Francisco at almost the speed of light. And unlike land, oil fields, or textile factories, algorithms don’t take up much space. Consequently, unlike industrial power, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
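
The highlight above on the search market describes a feedback loop: more users yield more data, more data trains better algorithms, and better algorithms attract more users. A minimal simulation of that loop, with entirely hypothetical parameters, shows how even a modest initial data advantage compounds into near-total market share:

```python
# Toy simulation of the data flywheel described in the search-engine highlight.
# All numbers are hypothetical; the point is the dynamic, not the figures.

def simulate(rounds: int = 30) -> None:
    incumbent, challenger = 2.0, 1.0  # arbitrary units of daily users

    for _ in range(rounds):
        # Assume search quality grows mildly superlinearly with accumulated
        # data (proxied by users), and the next round's user split follows
        # relative quality.
        q_inc, q_ch = incumbent ** 1.1, challenger ** 1.1
        total = incumbent + challenger
        incumbent = total * q_inc / (q_inc + q_ch)
        challenger = total * q_ch / (q_inc + q_ch)

    share = incumbent / (incumbent + challenger)
    print(f"incumbent share after {rounds} rounds: {share:.1%}")

simulate()  # prints roughly 100.0%
```

The exponent 1.1 is an arbitrary stand-in for “mild increasing returns to data”; any exponent above 1 produces the same winner-take-all outcome.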

AI Development and Impact

  • In April 2023 Elon Musk announced, “I’m going to start something, which I call TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe.” We will see in later chapters why this is a dangerous fantasy. In previous eras, such fantasies took a different form—religion.
  • studying the history of religion is highly relevant to present-day debates about AI. In the history of religion, a recurrent problem is how to convince people that a certain dogma indeed originated from an infallible superhuman source. Even if in principle I am eager to submit to the gods’ will, how do I know what the gods really want?
  • What would stop AIs from being incorporated and recognized as legal persons with freedom of speech, then lobbying and making political donations to protect and expand AI rights?
  • For instance, an algorithm looking for patterns of “good employees” in real-life data may conclude that hiring the boss’s nephews is always a good idea, no matter what other qualifications they have. For the data clearly indicates that applicants who are “boss’s nephews” are usually hired, and are rarely fired. [A sketch of how a learner absorbs such a bias follows this list.]
  • But getting rid of algorithmic bias might be as difficult as ridding ourselves of our human biases. Once an algorithm has been trained, it takes a lot of time and effort to “untrain” it. We might decide to just dump the biased algorithm and train an altogether new algorithm on a new set of less biased data. But where on earth can we find a set of totally unbiased data?
  • Maybe it just means that the computers themselves are rewarding such behavior while punishing and blocking alternatives. For computers to have a more accurate and responsible view of the world, they need to take into account their own power and impact. And for that to happen, the humans who currently engineer computers need to accept that they are not manufacturing new tools. They are unleashing new kinds of independent agents, and potentially even new kinds of gods.
  • Yet no matter how aware algorithms are of their own fallibility, we should keep humans in the loop, too. Given the pace at which AI is developing, it is simply impossible to anticipate how it will evolve and to place guardrails against all future potential hazards. This is a key difference between AI and previous existential threats like nuclear technology. The latter presented humankind with a few easily anticipated doomsday scenarios, most obviously an all-out nuclear war. This meant that it was feasible to conceptualize the danger in advance, and explore ways to mitigate it. In contrast, AI presents us with countless doomsday scenarios. Some are relatively easy to grasp, such as terrorists using AI to produce biological weapons of mass destruction. Some are more difficult to grasp, such as AI creating new psychological weapons of mass destruction. And some may be utterly beyond the human imagination, because they emanate from the calculations of an alien intelligence. To guard against a plethora of unforeseeable problems, our best bet is to create living institutions that can identify and respond to the threats as they arise.
  • If three years of up to 25 percent unemployment could turn a seemingly prospering democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the twenty-first century? Nobody knows what the job market will look like in 2050, or even in 2030, except that it will look very different from today. AI and robotics will change numerous professions, from harvesting crops to trading stocks to teaching yoga. Many jobs that people do today will be taken over, partly or wholly, by robots and computers.
  • Of course, as old jobs disappear, new jobs will emerge. Fears of automation leading to large-scale unemployment go back centuries, and so far they have never materialized. The Industrial Revolution put millions of farmers out of agricultural jobs and provided them with new jobs in factories. It then automated factories and created lots of service jobs. Today many people have jobs that were unimaginable thirty years ago, such as bloggers, drone operators, and designers of virtual worlds. It is highly unlikely that by 2050 all human jobs will disappear. Rather, the real problem is the turmoil of adapting to new jobs and conditions. To cushion the blow, we need to prepare in advance. In particular, we need to equip younger generations with skills that will be relevant to the job market of 2050.
  • Similarly, to judge by their pay, you could assume that our society appreciates doctors more than nurses. However, it is harder to automate the job of nurses than the job of at least those doctors who mostly gather medical data, provide a diagnosis, and recommend treatment. These tasks are essentially pattern recognition, and spotting patterns in data is one thing AI does better than humans. In contrast, AI is far from having the skills necessary to automate nursing tasks such as replacing bandages on an injured person or giving an injection to a crying child.
  • computers could one day gain the ability to feel pain and love. Even if they can’t, humans may nevertheless come to treat them as if they can.
  • Previously creators could explain how something worked, why it did what it did, even if this required vast detail. That’s increasingly no longer true. Many technologies and systems are becoming so complex that they’re beyond the capacity of any one individual to truly understand them…. In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT-4, AlphaGo, and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.
  • A 2016 survey by the OECD found that most people had difficulty grasping even simple financial concepts like compound interest. A 2014 survey of British MPs—charged with regulating one of the world’s most important financial hubs—found that only 12 percent accurately understood that new money is created when banks make loans. This fact is among the most basic principles of the modern financial system. As the 2007–8 financial crisis indicated, more complex financial devices and principles, like those behind CDOs, were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero? [A short worked example of compound interest follows this list.]
  • “Our algorithm,” the imaginary bank letter might read, “uses a precise points system to evaluate all applications, taking a thousand different types of data points into account. It adds all the data points to reach an overall score. People whose overall score is negative are considered low-credit persons, too risky to be given a loan. Your overall score was −378, which is why your loan application was refused.” The letter might then provide a detailed list of the thousand factors the algorithm took into account, including things that most humans might find irrelevant, such as the exact hour the application was submitted or the type of smartphone the applicant used. Thus on page 601 of its letter, the bank might explain that “you filed your application from your smartphone, which was the latest iPhone model. By analyzing millions of previous loan applications, our algorithm discovered a pattern—people who use the latest iPhone model to file their application are 0.08 percent more likely to repay the loan. The algorithm therefore added 8 points to your overall score for that. However, at the time your application was sent from your iPhone, its battery was down to 17 percent. By analyzing millions of previous loan applications, our algorithm discovered another pattern: people who allow their smartphone’s battery to go below 25 percent are 0.5 percent less likely to repay the loan. You lost 50 points for that.” [The additive arithmetic here is sketched in code after this list.]
  • While we may find this way of making decisions alien, it obviously has potential advantages. When making a decision, it is generally a good idea to take into account all relevant data points rather than just one or two salient facts. There is much room for argument, of course, about who gets to define the relevance of information. Who decides whether something like smartphone models—or skin color—should be considered relevant to loan applications? But no matter how we define relevance, the ability to take more data into account is likely to be an asset. Indeed, the problem with many human prejudices is that they focus on just one or two data points—like someone’s skin color, disability, or gender—while ignoring other information. Banks and other institutions are increasingly relying on algorithms to make decisions, precisely because algorithms can take many more data points into account than humans can.
  • The algorithm wasn’t fed this rule by a human engineer; it reached that conclusion by discovering a pattern in millions of previous loan applications. Can an individual human client go over all that data and assess whether that pattern is indeed reliable and unbiased?
  • So, what happens to democratic debates when millions—and eventually billions—of highly intelligent bots are not only composing extremely compelling political manifestos and creating deepfake images and videos but also able to win our trust and friendship? If I engage online in a political debate with an AI, it is a waste of time for me to try to change the AI’s opinions; being a nonconscious entity, it doesn’t really care about politics, and it cannot vote in the elections. But the more I talk with the AI, the better it gets to know me, so it can gain my trust, hone its arguments, and gradually change my views. In the battle for hearts and minds, intimacy is an extremely powerful weapon. Previously, political parties could command our attention, but they had difficulty mass-producing intimacy. Radio sets could broadcast a leader’s speech to millions, but they could not befriend the listeners. Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview.
  • “Are you sure you were not fooled by deepfakes?”
  • “I’m afraid the data I relied on is 100 percent genuine,” says the algorithm. “I checked it with my special deepfake-detecting sub-algorithm. I can explain exactly how we know it isn’t a deepfake, but that would take us a couple of weeks. I didn’t want to alert you before I was sure, but the data points converge on an inescapable conclusion: a coup is under way. Unless we act now, the assassins will be here in an hour. But give me the order, and I’ll liquidate the traitor.”
  • The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first technology that is capable of making decisions and generating ideas by itself.
  • We command immense power and enjoy rare luxuries, but we are easily manipulated by our own creations, and by the time we wake up to the danger, it might be too late.
  • There is, though, an even worse scenario. As far as we know today, apes, rats, and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created a nonconscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.
  • But power isn’t wisdom, and after 100,000 years of discoveries, inventions, and conquests humanity has pushed itself into an existential crisis. We are on the verge of ecological collapse, caused by the misuse of our own power. We are also busy creating new technologies like artificial intelligence (AI) that have the potential to escape our control and enslave or annihilate us. Yet instead of our species uniting to deal with these existential challenges, international tensions are rising, global cooperation is becoming more difficult, countries are stockpiling doomsday weapons, and a new world war does not seem impossible.
  • We have already driven the earth’s climate out of balance and have summoned billions of enchanted brooms, drones, chatbots, and other algorithmic spirits that may escape our control and unleash a flood of unintended consequences.
  • Would having even more information make things better—or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history—AI.
  • A 2024 article co-authored by Bengio, Hinton, and numerous other experts noted that “unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity.”
  • In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10 percent chance to advanced AI leading to outcomes as bad as human extinction.
  • Given the magnitude of the danger, AI should be of interest to all human beings. While not everyone can become an AI expert, we should all keep in mind that AI is the first technology in history that can make decisions and create new ideas by itself.
  • AI isn’t a tool—it’s an agent.
  • In the years since Homo Deus was published, the pace of change has only accelerated, and power has indeed been shifting from humans to algorithms. Many of the scenarios that sounded like science fiction in 2016—such as algorithms that can create art, masquerade as human beings, make crucial life decisions about us, and know more about us than we know about ourselves—are everyday realities in 2024.
  • Thus, understanding the process through which the allegedly infallible Bible was canonized provides valuable insight about present-day claims for AI infallibility.
  • The main split in twenty-first-century politics might be not between democracies and totalitarian regimes but rather between human beings and nonhuman agents. Instead of dividing democracies from totalitarian regimes, a new Silicon Curtain may separate all humans from our unfathomable algorithmic overlords. People in all countries and walks of life—including even dictators—might find themselves subservient to an alien intelligence that can monitor everything we do while we have little idea what it is doing.
  • For the moment it is enough to say that in essence a computer is a machine that can potentially do two remarkable things: it can make decisions by itself, and it can create new ideas by itself. While the earliest computers could hardly accomplish such things, the potential was already there, plainly seen by both computer scientists and science fiction authors.
  • Clay tablets stored information about taxes, but they couldn’t decide by themselves how much tax to levy, nor could they invent an entirely new tax. Printing presses copied information such as the Bible, but they couldn’t decide which texts to include in the Bible, nor could they write new commentaries on the holy book. Radio sets disseminated information such as political speeches and symphonies, but they couldn’t decide which speeches or symphonies to broadcast, nor could they compose them. Computers can do all these things. While printing presses and radio sets were passive tools in human hands, computers are already becoming active agents that escape our control and understanding and that can take initiatives in shaping society, culture, and history.
  • crucially, the algorithms themselves are also to blame. By trial and error, they learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage. This is the hallmark of AI—the ability of a machine to learn and act by itself. Even if we assign just 1 percent of the blame to the algorithms, this is still the first ethnic-cleansing campaign in history that was partly the fault of decisions made by nonhuman intelligence.
  • The same is true of AI algorithms. They can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw. This is the essence of the AI revolution: The world is being flooded by countless new powerful agents.
  • People often confuse intelligence with consciousness, and many consequently jump to the conclusion that nonconscious entities cannot be intelligent. But intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. In humans and other mammals, intelligence often goes hand in hand with consciousness. Facebook executives and engineers rely on their feelings in order to make decisions, solve problems, and attain their goals.
  • But it is wrong to extrapolate from humans and mammals to all possible entities. Bacteria and plants apparently lack any consciousness, yet they too display intelligence. They gather information from their environment, make complex choices, and pursue ingenious strategies to obtain food, reproduce, cooperate with other organisms, and evade predators and parasites. Even humans make intelligent decisions without any awareness of them; 99 percent of the processes in our body, from respiration to digestion, happen without any conscious decision making. Our brains decide to produce more adrenaline or dopamine, and while we may be aware of the result of that decision, we do not make it consciously.
  • Of course, as computers become more intelligent, they might eventually develop consciousness and have some kind of subjective experiences. Then again, they might become far more intelligent than us, but never develop any kind of feelings. Since we don’t understand how consciousness emerges in carbon-based life-forms, we cannot foretell whether it could emerge in nonorganic entities. Perhaps consciousness has no essential link to organic biochemistry, in which case conscious computers might be just around the corner. Or perhaps there are several alternative paths leading to superintelligence, and only some of these paths involve gaining consciousness. Just as airplanes fly faster than birds without ever developing feathers, so computers may come to solve problems much better than humans without ever developing feelings.
  • CAPTCHA is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart,” and it typically consists of a string of twisted letters or other visual symbols that humans can identify correctly but computers struggle with.
  • In contrast, computer-to-computer chains can now function without humans in the loop. For example, one computer might generate a fake news story and post it on a social media feed. A second computer might identify this as fake news and not just delete it but also warn other computers to block it. Meanwhile, a third computer analyzing this activity might deduce that this indicates the beginning of a political crisis, and immediately sell risky stocks and buy safer government bonds. Other computers monitoring financial transactions may react by selling more stocks, triggering a financial downturn. All this could happen within seconds, before any human can notice and decipher what all these computers are doing.
  • In previous networks, members were human, every chain had to pass through humans, and technology served only to connect the humans. In the new computer-based networks, computers themselves are members and there are computer-to-computer chains that don’t pass through any human.
  • When the central bank raises interest rates by 0.25 percent, how does that influence the economy? When the yield curve of government bonds goes up, is it a good time to buy them? When is it advisable to short the price of oil? These are the kinds of important financial questions that computers can already answer better than most humans. No wonder that computers make a larger and larger percentage of the financial decisions in the world. We may reach a point when computers dominate the financial markets, and invent completely new financial tools beyond our understanding.
  • What would it mean for humans to live in a world where catchy melodies, scientific theories, technical tools, political manifestos, and even religious myths are shaped by a nonhuman alien intelligence that knows how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind?
  • Religions throughout history claimed a nonhuman source for their holy books; soon that might be a reality. Attractive and powerful religions might emerge whose scriptures are composed by AI.
  • Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions about the Bible, about QAnon, about witches, about abortion, or about climate change with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation. When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.
  • Through their mastery of language, computers could go a step further. By conversing and interacting with us, computers could form intimate relationships with people and then use the power of intimacy to influence us. To foster such “fake intimacy,” computers will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them. In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and that it had feelings and was afraid to be turned off. Lemoine—a devout Christian who had been ordained as a priest—felt it was his moral duty to gain recognition for LaMDA’s personhood and in particular protect it from digital death. When Google executives dismissed his claims, Lemoine went public with them. Google reacted by firing Lemoine in July 2022.
  • The most interesting thing about this episode was not Lemoine’s claim, which was probably false. Rather, it was his willingness to risk—and ultimately lose—his lucrative job for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
  • Even without creating “fake intimacy,” mastery of language would give computers an immense influence on our opinions and worldview. People may come to use a single computer adviser as a one-stop oracle. Why bother searching and processing information by myself when I can just ask the oracle? This could put out of business not only search engines but also much of the news industry and advertisement industry. Why read a newspaper when I can just ask my oracle what’s new? And what’s the purpose of advertisements when I can just ask the oracle what to buy?
  • At first, computers will probably imitate human cultural prototypes, writing humanlike texts and composing humanlike music. This doesn’t mean computers lack creativity; after all, human artists do the same. Bach didn’t compose music in a vacuum; he was deeply influenced by previous musical creations, as well as by biblical stories and other preexisting cultural artifacts. But just as human artists like Bach can break with tradition and innovate, computers too can make cultural innovations, composing music or making images that are somewhat different from anything previously produced by humans.
  • For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.
  • I promise you that I wrote the text myself, with the help of some other humans. I promise you that this is a cultural product of the human mind. But can you be absolutely sure of it? A few years ago, you could. Prior to the 2020s, there was nothing on earth, other than a human mind, that could produce sophisticated texts. Today things are different. In theory, the text you’ve just read might have been generated by the alien intelligence of some computer.
  • Google Brain, for example, has experimented with new encryption methods developed by computers. It set up an experiment in which two computers—nicknamed Alice and Bob—had to exchange encrypted messages, while a third computer named Eve tried to break their encryption. If Eve broke the encryption within a given time period, it got points. If it failed, Alice and Bob scored. After about fifteen thousand exchanges, Alice and Bob came up with a secret code that Eve couldn’t break. Crucially, the Google engineers who conducted the experiment had not taught Alice and Bob anything about how to encrypt messages. The computers created a private language all on their own. [A simplified sketch of this adversarial setup follows this list.]
  • In computer evolution, the distance from amoeba to T. rex could be covered in a decade. If GPT-4 is the amoeba, what would the T. rex look like?
  • Our difficulty in deciding what to call them is itself important. Organisms are distinct individual entities that can be grouped into collectives like species and genera. With computers, however, it is becoming ever more difficult to decide where one entity ends and another begins and how exactly to group them.
  • It should also be noted that people often define and evaluate AI through the metric of “human-level intelligence,” and there is much debate about when we can expect AIs to reach “human-level intelligence.” The use of this metric, however, is deeply confusing. It is like defining and evaluating airplanes through the metric of “bird-level flight.” AI isn’t progressing toward human-level intelligence. It is evolving an entirely different type of intelligence.
  • The truth is, we don’t. That’s not because we are stupid but because the technology is extremely complicated and things are moving at breakneck speed.
  • If computers made decisions and created ideas in a way similar to humans, then computers would be a kind of “new humans.” That’s a scenario often explored in science fiction: the computer that becomes conscious, develops feelings, falls in love with a human, and turns out to be exactly like us. But the reality is very different, and potentially more alarming.
  • For example, in 2012 users were watching about 100 million hours of videos every day on YouTube. That was not enough for company executives, who set their algorithms an ambitious goal: 1 billion hours a day by 2016. Through trial-and-error experiments on millions of people, the YouTube algorithms discovered the same pattern that Facebook algorithms also learned: outrage drives engagement up, while moderation tends not to. Accordingly, the YouTube algorithms began recommending outrageous conspiracy theories to millions of viewers while ignoring more moderate content. By 2016, users were indeed watching one billion hours every day on YouTube. [A toy version of this trial-and-error dynamic follows this list.]
  • When computers are given a specific goal, such as to increase YouTube traffic to one billion hours a day, they use all their power and ingenuity to achieve this goal. Since they operate very differently than humans, they are likely to use methods their human overlords didn’t anticipate. This can result in dangerous unforeseen consequences that are not aligned with the original human goals. Even if recommendation algorithms stop encouraging hate, other instances of the alignment problem might result in larger catastrophes.
  • One reason why the alignment problem is particularly dangerous in the context of the computer network is that this network is likely to become far more powerful than any previous human bureaucracy. A misalignment in the goals of superintelligent computers might result in a catastrophe of unprecedented magnitude.
  • Bostrom’s point was that the problem with computers isn’t that they are particularly evil but that they are particularly powerful. And the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals.
  • The paper-clip thought experiment may sound outlandish and utterly disconnected from reality. But if Silicon Valley managers had paid attention when Bostrom published it in 2014, perhaps they would have been more careful before instructing their algorithms to “maximize user engagement.” The Facebook and YouTube algorithms behaved exactly like Bostrom’s imaginary algorithm.
  • If our only rule of thumb is that “every action must be aligned with some higher goal,” by definition there is no rational way to define that ultimate goal. How then can we provide a computer network with an ultimate goal it must never ignore or subvert? Tech executives and engineers who rush to develop AI are making a huge mistake if they think there is a rational way to tell AI what its ultimate goal should be. They should learn from the bitter experiences of generations of philosophers who tried to define ultimate goals and failed.
  • Dictators have always suffered from weak self-correcting mechanisms and have always been threatened by powerful subordinates. The rise of AI may greatly exacerbate these problems. The computer network therefore presents dictators with an excruciating dilemma. They could decide to escape the clutches of their human underlings by trusting a supposedly infallible technology, in which case they might become the technology’s puppet. Or, they could build a human institution to supervise the AI, but that institution might limit their own power, too.
  • On July 9, 1955, Albert Einstein, Bertrand Russell, and a number of other eminent scientists and thinkers published the Russell-Einstein Manifesto, calling on the leaders of both democracies and dictatorships to cooperate on preventing nuclear war. “We appeal,” said the manifesto, “as human beings, to human beings: remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.” This is true of AI too. It would be foolish of dictators to believe that AI will necessarily tilt the balance of power in their favor. If they aren’t careful, AI will just grab power to itself.
  • Many societies—both democracies and dictatorships—may act responsibly to regulate such usages of AI, clamp down on bad actors, and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind. Climate change can devastate even countries that adopt excellent environmental regulations, because it is a global rather than a national problem. AI, too, is a global problem.
  • Kevin Kelly, the founding editor of Wired magazine, recounted how in 2002 he attended a small party at Google and struck up a conversation with Larry Page. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” Page explained that Google wasn’t focused on search at all. “We’re really making an AI,” he said. Having lots of data makes it easier to create an AI. And AI can turn lots of data into lots of power.
  • The United States at the time was already the leader in the AI race, thanks largely to the efforts of visionary private entrepreneurs. But what began as a commercial competition between corporations was turning into a match between governments, or perhaps more accurately, into a race between competing teams, each made up of one government and several corporations. The prize for the winner? World domination.
  • What will happen to the economies and politics of Pakistan and Bangladesh, for example, when automation makes it cheaper to produce textiles in Europe? Consider that at present the textile sector provides employment to 40 percent of Pakistan’s total labor force and accounts for 84 percent of Bangladesh’s export earnings. As noted in chapter 9, while automation might make millions of textile workers redundant, it will probably create many new jobs, too. For instance, there might be a huge demand for coders and data analysts. But turning an unemployed factory hand into a data analyst demands a substantial up-front investment in retraining. Where would Pakistan and Bangladesh get the money to do that?
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more. Meanwhile, the value of unskilled laborers in left-behind countries will decline, and they will not have the resources to retrain their workforce, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin. According to the global accounting firm PricewaterhouseCoopers, AI is expected to add $15.7 trillion to the global economy by 2030. But if current trends continue, it is projected that China and North America—the two leading AI superpowers—will together take home 70 percent of that money.
  • In 2022, the Biden administration placed strict limits on trade in high-performance computing chips necessary for the development of AI. U.S. companies were forbidden to export such chips to China, or to provide China with the means to manufacture or repair them. The restrictions have subsequently been tightened further, and the ban was expanded to include other nations such as Russia and Iran. While in the short term this hampers China in the AI race, in the long term it will push China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
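
Several of the highlights above lend themselves to small code sketches; a few follow, in Python. First, the “boss’s nephews” highlight describes a pattern-seeking algorithm absorbing favoritism baked into historical data. A minimal sketch, with synthetic invented data and a plain logistic regression:

```python
# Sketch: a model trained on biased historical outcomes learns the bias.
# The data and features are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(size=n)              # genuine qualification
nephew = rng.binomial(1, 0.05, size=n)  # 5% of applicants are the boss's nephews

# Historical label: nephews were almost always hired, regardless of skill.
hired = ((skill > 0.5) | (nephew == 1)).astype(int)

model = LogisticRegression(max_iter=1_000).fit(np.column_stack([skill, nephew]), hired)
print("learned weights [skill, nephew]:", model.coef_[0])
# The 'nephew' weight comes out large and positive: nothing in the code
# mentions nepotism, yet the model has internalized it from the labels.
```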
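
Next, the OECD finding about compound interest. A short worked example, with an illustrative principal, rate, and horizon:

```python
# Compound vs. simple interest on an illustrative 1,000 at 5% over 30 years.
principal, rate, years = 1_000, 0.05, 30
compound = principal * (1 + rate) ** years  # interest itself earns interest
simple = principal * (1 + rate * years)     # interest on the principal only
print(f"compound: {compound:,.2f}  simple: {simple:,.2f}")
# compound: 4,321.94  simple: 2,500.00
```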
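
The imaginary bank letter describes a purely additive points system. A sketch of that arithmetic, keeping only the two factors the quote spells out; the implied scale (one point per 0.01 percentage-point shift in repayment likelihood) comes from the quote, and everything else about such a system is unknown:

```python
# Sketch of the letter's additive scoring scheme. The two factors and their
# point values come straight from the quoted example; in the letter's
# telling, a real system would sum roughly a thousand such factors.

def score(applicant: dict) -> int:
    points = 0
    if applicant.get("phone") == "latest_iphone":
        points += 8   # +0.08% repayment likelihood -> +8 points
    if applicant.get("battery_pct", 100) < 25:
        points -= 50  # -0.5% repayment likelihood -> -50 points
    # ...plus ~998 other factors omitted here...
    return points

applicant = {"phone": "latest_iphone", "battery_pct": 17}
total = score(applicant)
print(total, "-> refused" if total < 0 else "-> approved")  # -42 -> refused
```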
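
The Alice-Bob-Eve highlight refers to a real Google Brain experiment (Abadi and Andersen, 2016). Below is a heavily simplified sketch of its adversarial training loop in PyTorch; the network sizes, loss weights, and schedule are my own assumptions, not the paper’s:

```python
# Simplified sketch of adversarial neural cryptography in the spirit of
# Abadi & Andersen (2016). Sizes, losses, and schedule are assumptions.
import torch
import torch.nn as nn

BITS = 16  # message and key length, encoded as -1/+1 bits

def mlp(n_in: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                         nn.Linear(64, BITS), nn.Tanh())

alice, bob, eve = mlp(2 * BITS), mlp(2 * BITS), mlp(BITS)
opt_ab = torch.optim.Adam([*alice.parameters(), *bob.parameters()], lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(n: int = 256):
    bits = lambda: torch.randint(0, 2, (n, BITS)).float() * 2 - 1
    return bits(), bits()  # (messages, keys)

for step in range(3_000):
    # Eve's turn: learn to recover the message from the ciphertext alone.
    msg, key = batch()
    cipher = alice(torch.cat([msg, key], dim=1)).detach()
    eve_loss = (eve(cipher) - msg).abs().mean()
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Alice and Bob's turn: Bob must decrypt using the key, while Eve is
    # pushed toward chance level (per-bit error of about 1.0).
    msg, key = batch()
    cipher = alice(torch.cat([msg, key], dim=1))
    bob_loss = (bob(torch.cat([cipher, key], dim=1)) - msg).abs().mean()
    eve_err = (eve(cipher) - msg).abs().mean()
    ab_loss = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

print(f"Bob error: {bob_loss.item():.3f}, Eve error: {eve_err.item():.3f}")
```

Messages and keys are encoded as ±1 bits, so a per-bit Eve error of about 1.0 corresponds to random guessing; that is the level Alice and Bob’s loss term pushes her toward.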
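
Finally, the YouTube highlight describes a trial-and-error discovery that outrage drives engagement. A toy epsilon-greedy bandit with invented payoff rates shows how a learner told only to maximize engagement converges on the outrage arm, with no explicit instruction to promote it:

```python
# Epsilon-greedy bandit with fictional engagement payoffs. The learner is
# told only to maximize watch probability, yet converges on the outrage arm.
import random

random.seed(1)
ARMS = {"moderate": 0.30, "outrage": 0.55}  # hypothetical engagement rates
counts = {arm: 0 for arm in ARMS}
values = {arm: 0.0 for arm in ARMS}

for _ in range(10_000):
    if random.random() < 0.1:             # explore occasionally
        arm = random.choice(list(ARMS))
    else:                                  # otherwise exploit the best estimate
        arm = max(values, key=values.get)
    reward = 1 if random.random() < ARMS[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

total = sum(counts.values())
print({arm: round(c / total, 2) for arm, c in counts.items()})
# The outrage arm ends up with the overwhelming majority of recommendations.
```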

Democracy vs Totalitarianism

  • Any kid can tell the difference between a friend and a bully. You know if someone shares their lunch with you or instead takes yours. But when the tax collector comes to take a cut from your earnings, how can you tell whether it goes to build a new public sewage system or a new private dacha for the president? It is hard to get all the relevant information, and even harder to interpret it.
  • Given our inability to predict how the new computer network will develop, our best chance to avoid catastrophe in the present century is to maintain democratic self-correcting mechanisms that can identify and correct mistakes as we go along.
  • we saw that democracy depends on information technology and that for most of human history large-scale democracy was simply impossible. Might the new information technologies of the twenty-first century again make democracy impractical?
  • One potential threat is that the relentlessness of the new computer network might annihilate our privacy and punish or reward us not only for everything we do and say but even for everything we think and feel. Can democracy survive under such conditions? If the government—or some corporation—knows more about me than I know about myself, and if it can micromanage everything I do and think, that would give it totalitarian control over society. Even if elections are still held regularly, they would be an authoritarian ritual rather than a real check on the government’s power. For the government could use its vast surveillance powers and its intimate knowledge of every citizen to manipulate public opinion on an unprecedented scale.
  • For the survival of democracy, some inefficiency is a feature, not a bug. To protect the privacy and liberty of individuals, it’s best if neither the police nor the boss knows everything about us.
  • Democracy requires balance. Governments and corporations often develop apps and algorithms as tools for top-down surveillance. But algorithms can just as easily become powerful tools for bottom-up transparency and accountability, exposing bribery and tax evasion. If they know more about us, while we simultaneously know more about them, the balance is kept. This isn’t a novel idea. Throughout the nineteenth and twentieth centuries, democracies greatly expanded governmental surveillance of citizens so that, for example, the Italian or Japanese government of the 1990s had surveillance abilities that autocratic Roman emperors or Japanese shoguns could only have dreamed of. Italy and Japan nevertheless remained democratic, because they simultaneously increased governmental transparency and accountability. Mutual surveillance is another important element of sustaining self-correcting mechanisms. If citizens know more about the activities of politicians and CEOs, it is easier to hold them accountable and to correct their mistakes.
  • A second threat is that automation will destabilize the job market and the resulting strain may undermine democracy. The fate of the Weimar Republic is the most commonly cited example of this kind of threat. In the German elections of May 1928, the Nazi Party won less than 3 percent of the vote, and the Weimar Republic seemed to be prospering. Within less than five years, the Weimar Republic had collapsed, and Hitler was the absolute dictator of Germany. This turnaround is usually attributed to the 1929 financial crisis and the following global depression. Whereas just prior to the Wall Street crash of 1929 the German unemployment rate was about 4.5 percent of the labor force, by early 1932 it had climbed to almost 25 percent.
  • If three years of high unemployment could bring Hitler to power, what might never-ending turmoil in the job market do to democracy?
  • The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes. While computers are nowhere near their full potential, the same is true of humans.
  • As computers increasingly replace human bureaucrats and human mythmakers, the deep structure of power will change again. To survive, democracies require not just dedicated bureaucratic institutions that can scrutinize these new structures but also artists who can explain them in accessible and entertaining ways, as the episode “Nosedive” in the sci-fi series Black Mirror has successfully done.
  • If manipulative bots and inscrutable algorithms come to dominate the public conversation, this could cause democratic debate to collapse exactly when we need it most. Just when we must make momentous decisions about fast-evolving new technologies, the public sphere will be flooded by computer-generated fake news, citizens will not be able to tell whether they are having a debate with a human friend or a manipulative machine, and no consensus will remain about the most basic rules of discussion or the most basic facts. This kind of anarchical information network cannot produce either truth or order and cannot be sustained for long. If we end up with anarchy, the next step would probably be the establishment of a dictatorship as people agree to trade their liberty for some certainty.
  • Is the ideological gap in the 2020s that much bigger than it was in the 1960s? And if it isn’t ideology, what is driving people apart?
  • In a blockchain system, decisions require the approval of 51 percent of users. That may sound democratic, but blockchain technology has a fatal flaw. The problem lies with the word “users.” If one person has ten accounts, she counts as ten users. If a government controls 51 percent of accounts, then the government constitutes 51 percent of the users. There are already examples of blockchain networks where a government is 51 percent of users. [A toy tally illustrating this flaw follows this list.]
  • “But the defense minister is my most loyal supporter,” says the Great Leader. “Only yesterday he said to me—”
  • What happens, for example, if the American sphere discounts the body, defines humans by their online identity, recognizes AIs as persons, and downplays the importance of the ecosystem, whereas the Chinese sphere adopts opposite positions? Current disagreements about violations of human rights or adherence to ecological standards will look minuscule in comparison.
  • State budgets in more recent decades make for far more hopeful reading material than any pacifist tract ever composed. In the early twenty-first century, the worldwide average government expenditure on the military has been only around 7 percent of the budget, and even the dominant superpower of the United States spent only around 13 percent of its annual budget to maintain its military hegemony.
  • This particular line of radical leftist thinking goes back to Karl Marx, who argued in the mid-nineteenth century that power is the only reality, that information is a weapon, and that elites who claim to be serving truth and justice are in fact pursuing narrow class privileges.
  • Present-day populists also suffer from the same incoherence that plagued radical antiestablishment movements in previous generations. If power is the only reality, and if information is just a weapon, what does it imply about the populists themselves? Are they too interested only in power, and are they too lying to us to gain power?
  • One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
  • Instead of trusting complex human institutions, populists give us the same advice as the Phaethon myth and “The Sorcerer’s Apprentice”: “Trust God or the great sorcerer to intervene and make everything right again.” If we take this advice, we’ll likely find ourselves in the short term under the thumb of the worst kind of power-hungry humans, and in the long term under the thumb of new AI overlords. Or we might find ourselves nowhere at all, as Earth becomes inhospitable for human life.
  • When we ask whether a particular state exists, we are raising a question about intersubjective reality. If enough people agree that a particular state exists, then it does. It can then do things like sign legally binding agreements with other states as well as NGOs and private corporations.
  • The church sought to lock society inside an echo chamber, allowing the spread only of those books that supported it, and people trusted the church because almost all the books supported it.
  • Institutions, too, die without self-correcting mechanisms. These mechanisms start with the realization that humans are fallible and corruptible. But instead of despairing of humans and looking for a way to bypass them, the institution actively seeks its own errors and corrects them. All institutions that manage to endure beyond a handful of years possess such mechanisms, but institutions differ greatly in the strength and visibility of their self-correcting mechanisms.
  • In theory, a highly centralized information network could try to maintain strong self-correcting mechanisms, like independent courts and elected legislative bodies. But if they functioned well, these would challenge the central authority and thereby decentralize the information network. Dictators always see such independent power hubs as threats and seek to neutralize them.
  • The definition of democracy as a distributed information network with strong self-correcting mechanisms stands in sharp contrast to a common misconception that equates democracy only with elections. Elections are a central part of the democratic tool kit, but they are not democracy. In the absence of additional self-correcting mechanisms, elections can easily be rigged. Even if the elections are completely free and fair, by itself this too doesn’t guarantee democracy. For democracy is not the same thing as majority dictatorship.
  • Suppose that in a free and fair election 51 percent of voters choose a government that subsequently sends 1 percent of voters to be exterminated in death camps, because they belong to some hated religious minority. Is this democratic? Clearly it is not. The problem isn’t that genocide demands a special majority of more than 51 percent. It’s not that if the government gets the backing of 60 percent, 75 percent, or even 99 percent of voters, then its death camps finally become democratic. A democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center.
  • Once the courts are no longer able to check the government’s power by legal means, and once the media obediently parrots the government line, all other institutions or persons who dare oppose the government can be smeared and persecuted as traitors, criminals, or foreign agents. Academic institutions, municipalities, NGOs, and private businesses are either dismantled or brought under government control. At that stage, the government can also rig the elections at will, for example by jailing popular opposition leaders, preventing opposition parties from participating in the elections, gerrymandering election districts, or disenfranchising voters. Appeals against these antidemocratic measures are dismissed by the government’s handpicked judges. Journalists and academics who criticize these measures are fired. The remaining media outlets, academic institutions, and judicial authorities all praise these measures as necessary steps to protect the nation and its allegedly democratic system from traitors and foreign agents.
  • democracy doesn’t mean majority rule; rather, it means freedom and equality for all. Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.
  • Nobody disputes that in a democracy the representatives of the majority are entitled to form the government and to advance their preferred policies in myriad fields. If the majority wants war, the country goes to war. If the majority wants peace, the country makes peace. If the majority wants to raise taxes, taxes are raised. If the majority wants to lower taxes, taxes are lowered. Major decisions about foreign affairs, defense, education, taxation, and numerous other policies are all in the hands of the majority.
  • But in a democracy, there are two baskets of rights that are protected from the majority’s grasp. One contains human rights. Even if 99 percent of the population wants to exterminate the remaining 1 percent, in a democracy this is forbidden, because it violates the most basic human right—the right to life. The basket of human rights contains many additional rights, such as the right to work, the right to privacy, freedom of movement, and freedom of religion. These rights enshrine the decentralized nature of democracy, making sure that as long as people don’t harm anyone, they can live their lives as they see fit.
  • In a democracy the majority has every right to make momentous decisions like starting wars, and that includes the right to make momentous errors. But the majority should at least acknowledge its own fallibility and protect the freedom of minorities to hold and publicize unpopular views, which might turn out to be correct.
  • One option might be to immediately cut greenhouse gas emissions, even at the cost of slowing economic growth. This means incurring some difficulties today but saving people in 2050 from more severe hardship, saving the island nation of Kiribati from drowning, and saving the polar bears from extinction. A second option might be to continue with business as usual. This means having an easier life today, but making life harder for the next generation, flooding Kiribati, and driving the polar bears—as well as numerous other species—to extinction. Choosing between these two options is a question of desire, and should therefore be done by all voters rather than by a limited group of experts.
  • If the majority prefers to consume whatever amount of fossil fuels it wishes with no regard to future generations or other environmental considerations, it is entitled to vote for that. But the majority should not be entitled to pass a law stating that climate change is a hoax and that all professors who believe in climate change must be fired from their academic posts. We can choose what we want, but we shouldn’t deny the true meaning of our choice.
  • In like fashion, populists can believe that the enemies of the people have deceived the people into voting against their true will, which the populists alone represent.
  • As the self-proclaimed representatives of the people, populists consequently seek to monopolize not just political authority but all types of authority and to take control of institutions such as media outlets, courts, and universities. By taking the democratic principle of “people’s power” to its extreme, populists turn totalitarian.
  • In a well-functioning democracy, citizens trust the results of elections, the decisions of courts, the reports of media outlets, and the findings of scientific disciplines because citizens believe these institutions are committed to the truth. Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power.
  • Rather, we need to ask much more complex questions like “What mechanisms prevent the central government from rigging the elections?” “How safe is it for leading media outlets to criticize the government?” and “How much authority does the center appropriate to itself?” Democracy and dictatorship aren’t binary opposites, but rather are on a continuum. To decide whether a network is closer to the democratic or the dictatorial end of the continuum, we need to understand how information flows in the network and what shapes the political conversation.
  • If one person dictates all the decisions, and even their closest advisers are terrified to voice a dissenting view, no conversation is taking place. Such a network is situated at the extreme dictatorial end of the spectrum. If nobody can voice unorthodox opinions publicly, but behind closed doors a small circle of party bosses or senior officials are able to freely express their views, then this is still a dictatorship, but it has taken a baby step in the direction of democracy. If 10 percent of the population participate in the political conversation by airing their opinions, voting in fair elections, and running for office, that may be considered a limited democracy, as was the case in many ancient city-states like Athens, or in the early days of the United States, when only wealthy white men had such political rights. As the percentage of people taking part in the conversation rises, so the network becomes more democratic.
  • Democracies die not only when people are not free to talk but also when people are not willing or able to listen.
  • Most hunter-gatherer economies were far more diversified. One leader, even supported by a few allies, could not corral the savanna and prevent people from gathering plants and hunting animals there. If all else failed, hunter-gatherers could therefore vote with their feet. They had few possessions, and their most important assets were their personal skills and personal friends. If a chief turned dictatorial, people could just walk away.
  • Even when hunter-gatherers did end up ruled by a domineering chief, as happened among the salmon-fishing people of northwestern America, at least that chief was accessible. He didn’t live in a faraway fortress surrounded by an unfathomable bureaucracy and a cordon of armed guards. If you wanted to voice a complaint or a suggestion, you could usually get within earshot of him. The chief couldn’t control public opinion, nor could he shut himself off from it. In other words, there was no way for a chief to force all information to flow through the center, or to prevent people from talking with one another, criticizing him, or organizing against him.
  • Thousands of smaller-scale societies continued to function democratically in the third century CE and beyond, but it seemed that distributed democratic networks were simply incompatible with large-scale societies.
  • They didn’t sabotage Roman democracy. Given the size of the empire and the available information technology, democracy was simply unworkable. This was acknowledged already by ancient philosophers like Plato and Aristotle, who argued that democracy can work only in small-scale city-states.
  • prior to the development of modern information technology, there are no examples of large-scale democracies anywhere.
  • You may wonder whether we are talking about democracies at all. At a time when the United States had more slaves than voters (more than 1.5 million Americans were enslaved in the early 1820s), was the United States really a democracy? This is a question of definitions. As with the late-sixteenth-century Polish-Lithuanian Commonwealth, so also with the early-nineteenth-century United States, “democracy” is a relative term. As noted earlier, democracy and autocracy aren’t absolutes; they are part of a continuum.
  • In 1960, about seventy million Americans (39 percent of the total population), dispersed over the North American continent and beyond, watched the Nixon-Kennedy presidential debates live on television, with millions more listening on the radio. The only effort viewers and listeners had to make was to press a button while sitting in their homes. Large-scale democracy had now become feasible. Millions of people separated by thousands of kilometers could conduct informed and meaningful public debates about the rapidly evolving issues of the day.
  • The emperor Nero arranged the murder of his mother, Agrippina, and his wife, Octavia, and forced his mentor Seneca to commit suicide. Nero also executed or exiled some of the most respected and powerful Roman aristocrats merely for voicing dissent or telling jokes about him.
  • While autocratic rulers like Nero could execute anyone who did or said something that displeased them, they couldn’t know what most people in their empire were doing or saying. Theoretically, Nero could issue an order that any person in the Roman Empire who criticized or insulted the emperor must be severely punished. Yet there were no technical means for implementing such an order. Roman historians like Tacitus portray Nero as a bloodthirsty tyrant who instigated an unprecedented reign of terror. But this was a very limited type of terror. Although he executed or exiled a number of family members, aristocrats, and senators within his orbit, ordinary Romans in the city’s slums and provincials in distant towns like Jerusalem and Londinium could speak their mind much more freely.
  • Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible. Beginning in the nineteenth century, the rise of industrial economies allowed governments to employ many more administrators, and new information technologies—such as the telegraph and radio—made it possible to quickly connect and supervise all these administrators. This facilitated an unprecedented concentration of information and power, for those who dreamed about such things.
  • Totalitarian regimes are based on controlling the flow of information and are suspicious of any independent channels of information. When military officers, state officials, or ordinary citizens exchange information, they can build trust. If they come to trust one another, they can organize resistance to the regime. Therefore, a key tenet of totalitarian regimes is that wherever people meet and exchange information, the regime should be there too, to keep an eye on them.
  • In theory, kulaks were an objective socioeconomic category, defined by analyzing empirical data on things like property, income, capital, and wages. Soviet officials could allegedly identify kulaks by counting things. If most people in a village had only one cow, then the few families who had three cows were considered kulaks. If most people in a village didn’t hire any labor, but one family hired two workers during harvest time, this was a kulak family. Being a kulak meant not only that you possessed a certain amount of property but also that you possessed certain personality traits. According to the supposedly infallible Marxist doctrine, people’s material conditions determined their social and spiritual character. Since kulaks allegedly engaged in capitalist exploitation, it was a scientific fact (according to Marxist thinking) that they were greedy, selfish, and unreliable—and so were their children. Discovering that someone was a kulak ostensibly revealed something profound about their fundamental nature.
  • On December 27, 1929, Stalin declared that the Soviet state should seek “the liquidation of the kulaks as a class,” and immediately galvanized the party and the secret police to realize that ambitious and murderous aim.
  • The absurdity of the entire operation is manifested in the case of the Streletsky family from the Kurgan region of Siberia. Dmitry Streletsky, who was then a teenager, recalled years later how his family was branded kulaks and selected for liquidation. “Serkov, the chairman of the village Soviet who deported us, explained: ‘I have received an order [from the district party committee] to find 17 kulak families for deportation. I formed a Committee of the Poor and we sat through the night to choose the families. There is no one in the village who is rich enough to qualify, and not many old people, so we simply chose the 17 families. You were chosen. Please don’t take it personally. What else could I do?’ ” If anyone dared object to the madness of the system, they were promptly denounced as kulaks and counterrevolutionaries and would themselves be liquidated.
  • You may wonder whether modern totalitarian institutions like the Nazi Party or the Soviet Communist Party were really all that different from earlier institutions like the Christian churches. After all, churches too believed in their infallibility, had priestly agents everywhere, and sought to control the daily life of people down to their diet and sexual habits. Shouldn’t we see the Catholic Church or the Eastern Orthodox Church as totalitarian institutions? And doesn’t this undermine the thesis that totalitarianism was made possible only by modern information technology?
  • There are, however, several major differences between modern totalitarianism and premodern churches. First, as noted earlier, modern totalitarianism has worked by deploying several overlapping surveillance mechanisms that keep one another in order. The party is never alone; it works alongside state organs, on the one side, and the secret police, on the other. In contrast, in most medieval European kingdoms the Catholic Church was an independent institution that often clashed with the state institutions instead of reinforcing them.
  • Churches became more totalitarian institutions only in the late modern era, when modern information technologies became available. We tend to think of popes as medieval relics, but actually they are masters of modern technology. In the eighteenth century, the pope had little control over the worldwide Catholic Church and was reduced to the status of a local Italian princeling, fighting other Italian powers for control of Bologna or Ferrara. With the advent of radio, the pope became one of the most powerful people on the planet. Pope John Paul II could sit in the Vatican and speak directly to millions of Catholics from Poland to the Philippines, without any archbishop, bishop, or parish priest able to twist or hide his words.
  • As contrasting types of information networks, democracy and totalitarianism both have their advantages and disadvantages. The biggest advantage of the centralized totalitarian network is that it is extremely orderly, which means it can make decisions quickly and enforce them ruthlessly. Especially during emergencies like wars and epidemics, centralized networks can move much faster and farther than distributed networks.
  • “Americans grow up with the idea that questions lead to answers,” he said. “But Soviet citizens grew up with the idea that questions lead to trouble.”
  • Totalitarian and authoritarian networks face other problems besides blocked arteries. First and foremost, as we have already established, their self-correcting mechanisms tend to be very weak. Since they believe they are infallible, they see little need for such mechanisms, and since they are afraid of any independent institution that might challenge them, they lack free courts, media outlets, or research centers. Consequently, there is nobody to expose and correct the daily abuses of power that characterize all governments.
  • Information systems can reach far with just a little truth and a lot of order. Anyone who abhors the moral costs of systems like Stalinism cannot rely on their supposed inefficiency to derail them.
  • Once we learn to see democracy and totalitarianism as different types of information networks, we can understand why they flourish in certain eras and are absent in others. It is not just because people gain or lose faith in certain political ideals; it is also because of revolutions in information technologies. Of course, just as the printing press didn’t cause the witch hunts or the scientific revolution, so radio didn’t cause either Stalinist totalitarianism or American democracy. Technology only creates new opportunities; it is up to us to decide which ones to pursue.
  • The images from Chicago or Paris in 1968 could easily have given the impression that things were falling apart. The pressure to live up to the democratic ideals and to include more people and groups in the public conversation seemed to undermine the social order and to make democracy unworkable.
  • Meanwhile, the regimes behind the Iron Curtain, which never promised inclusivity, continued stifling the public conversation and centralizing information and power. And it seemed to work. Though they did face some peripheral challenges, most notably the Hungarian revolt of 1956 and the Prague Spring of 1968, the communists dealt with these threats swiftly and decisively. In the Soviet heartland itself, everything was orderly.
  • Fast-forward twenty years, and it was the Soviet system that had become unworkable. The sclerotic gerontocrats on the podium in Red Square were a perfect emblem of a dysfunctional information network, lacking any meaningful self-correcting mechanisms. Decolonization, globalization, technological development, and changing gender roles led to rapid economic, social, and geopolitical changes. But the gerontocrats could not handle all the information streaming to Moscow, and since no subordinate was allowed much initiative, the entire system ossified and collapsed.
  • There were many hiccups, but the United States, Japan, and other democracies created a far more dynamic and inclusive information system, which made room for many more viewpoints without breaking down. It was such a remarkable achievement that many felt that the victory of democracy over totalitarianism was final. This victory has often been explained in terms of a fundamental advantage in information processing: totalitarianism didn’t work because trying to concentrate and process all the data in one central hub was extremely inefficient. At the beginning of the twenty-first century, it accordingly seemed that the future belonged to distributed information networks and to democracy.
  • This turned out to be wrong. In fact, the next information revolution was already gathering momentum, setting the stage for a new round in the competition between democracy and totalitarianism. Computers, the internet, smartphones, social media, and AI posed new challenges to democracy, giving a voice not only to more disenfranchised groups but to any human with an internet connection, and even to nonhuman agents. Democracies in the 2020s face the task, once again, of integrating a flood of new voices into the public conversation without destroying the social order. Things look as dire as they did in the 1960s, and there is no guarantee that democracies will pass the new test as successfully as they passed the previous one. Simultaneously, the new technologies also give fresh hope to totalitarian regimes that still dream of concentrating all the information in one hub.
  • And Facebook algorithms played an important role in the propaganda campaign.
  • The social and political consequences were far-reaching. For example, as the journalist Max Fisher documented in his 2022 book, The Chaos Machine, YouTube algorithms became an important engine for the rise of the Brazilian far right and for turning Jair Bolsonaro from a fringe figure into Brazil’s president. While there were other factors contributing to that political upheaval, it is notable that many of Bolsonaro’s chief supporters and aides had originally been YouTubers who rose to fame and power by algorithmic grace.
  • For Clausewitz, then, rationality means alignment. Pursuing tactical or strategic victories that are misaligned with political goals is irrational. The problem is that the bureaucratic nature of armies makes them highly susceptible to such irrationality.
  • As a thought experiment, imagine a meeting between Immanuel Kant and Adolf Eichmann—who, by the way, considered himself a Kantian. As Eichmann signs an order sending another trainload of Jews to Auschwitz, Kant tells him, “You are about to murder thousands of humans. Would you like to establish a universal rule saying it is okay to murder humans? If you do that, you and your family might also be murdered.” Eichmann replies, “No, I am not about to murder thousands of humans. I am about to murder thousands of Jews. If you ask me whether I would like to establish a universal rule saying it is okay to murder Jews, then I am all for it. As for myself and my family, there is no risk that this universal rule would lead to us being murdered. We aren’t Jews.”
  • One potential Kantian reply to Eichmann is that when we define entities, we must always use the most universal definition applicable. If an entity can be defined as either “a Jew” or “a human,” we should use the more universal term “human.” However, the whole point of Nazi ideology was to deny the humanity of Jews. In addition, note that Jews are not just humans. They are also animals, and they are also organisms. Since animals and organisms are obviously more universal categories than “human,” if you follow the Kantian argument to its logical conclusion, it might push us to adopt an extreme vegan position. Since we are organisms, does it mean we should object to the killing of any organism, down even to tomatoes or amoebas?

Truth, Fiction and Social Order

  • Unfortunately, this is not the world in which we live. In history, power stems only partially from knowing the truth. It also stems from the ability to maintain social order among a large number of people. Suppose you want to make an atom bomb. To succeed, you obviously need some accurate knowledge of physics. But you also need lots of people to mine uranium ore, build nuclear reactors, and provide food for the construction workers, miners, and physicists. The Manhattan Project directly employed about 130,000 people, with millions more working to sustain them.
  • the truth is often painful and disturbing, and if we try to make it more comforting and flattering, it will no longer be the truth. In contrast, fiction is highly malleable. The history of every nation contains some dark episodes that citizens don’t like to acknowledge and remember.
  • The choice isn’t simply between telling the truth and lying. There is a third option. Telling a fictional story is lying only when you pretend that the story is a true representation of reality. Telling a fictional story isn’t lying when you avoid such pretense and acknowledge that you are trying to create a new intersubjective reality rather than represent a preexisting objective reality.
  • It is crucial to note that “order” should not be confused with fairness or justice. The order created and maintained by the U.S. Constitution condoned slavery, the subordination of women, the expropriation of indigenous people, and extreme economic inequality. The genius of the U.S. Constitution is that by acknowledging that it is a legal fiction created by human beings, it was able to provide mechanisms to reach agreement on amending itself and remedying its own injustices.
  • The Ten Commandments open with “I am the Lord your God.” By claiming divine origin, the commandments preclude humans from changing them. As a result, the biblical text still endorses slavery even today.
  • All human political systems are based on fictions, but some admit it, and some do not. Being truthful about the origins of our social order makes it easier to make changes in it. If humans like us invented it, we can amend it. But such truthfulness comes at a price. Acknowledging the human origins of the social order makes it harder to persuade everyone to agree on it. If humans like us invented it, why should we accept it? As we shall see in chapter 5, until the late eighteenth century the lack of mass communication technology made it extremely difficult to conduct open debates between millions of people about the rules of the social order.
  • Having a lot of information doesn’t in and of itself guarantee either truth or order. It is a difficult process to use information to discover the truth and simultaneously use it to maintain order. What makes things worse is that these two processes are often contradictory, because it is frequently easier to maintain order through fictions. Sometimes—as in the case of the U.S. Constitution—fictional stories may acknowledge their fictionality, but more often they disavow it. Religions, for example, always claim to be an objective and eternal truth rather than a fictional story invented by humans. In such cases, the search for truth threatens the foundations of the social order. Many societies require their populations not to know their true origins: ignorance is strength.
  • Evolution has adapted our brains to be good at absorbing, retaining, and processing even very large quantities of information when they are shaped into a story.
  • While modern TV audiences need not memorize any texts by heart, it is noteworthy how easy they find it to follow the intricate plots of epic dramas, detective thrillers, and soap operas, recalling who each character is and how they are related to numerous others. We are so accustomed to performing such feats of memory that we seldom consider how extraordinary they are.
  • Scientists argue endlessly about whether viruses should count as life-forms or whether they fall outside the boundary of life. But this boundary isn’t an objective reality; it is an intersubjective convention. Even if biologists reach a consensus that viruses are life-forms, it wouldn’t change anything about how viruses behave; it would only change how humans think about them.
  • “Boy meets girl” and “boy fights boy over girl” are also biological dramas that have been enacted by countless mammals, birds, reptiles, and fish for hundreds of millions of years. We are mesmerized by these stories because understanding them has been essential for our ancestors’ survival.
  • All animals are torn between the need to try new food and the fear of being poisoned. Evolution therefore equipped animals with both curiosity and the capacity to feel disgust on coming into contact with something toxic or otherwise dangerous.
  • Throughout history many humans have claimed to convey messages from the gods, but the messages often contradict one another.
  • If a human prophet could falsify the words of a god, then the key problem of religion wasn’t solved by creating religious institutions like temples and priestly orders. People still needed to trust fallible humans in order to access the supposedly infallible gods.
  • We are so bad at weighing together many different factors that when people give a large number of reasons for a particular decision, it usually sounds suspicious. Suppose a good friend failed to attend our wedding. If she provides us with a single explanation—“My mom was in the hospital and I had to visit her”—that sounds plausible. But what if she lists fifty different reasons why she decided not to come: “My mom was a bit under the weather, and I had to take my dog to the vet sometime this week, and I had this project at work, and it was raining, and…and I know none of these fifty reasons by itself justifies my absence, but when I added all of them together, they kept me from attending your wedding.” We don’t say things like that, because we don’t think along such lines. We don’t consciously list fifty different reasons in our mind, give each of them a certain weight, aggregate all the weights, and thereby reach a conclusion.
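    What such explicit weighted aggregation would look like is easy to show in code, even though the passage argues no human consciously thinks this way. A toy sketch in Python, with entirely hypothetical reasons, weights, and threshold:

        # Hypothetical reasons for missing a wedding, each with an assumed weight.
        # No single reason crosses the threshold; only their sum does.
        reasons = {
            "mom slightly under the weather": 0.20,
            "dog needed the vet this week": 0.10,
            "project deadline at work": 0.15,
            "it was raining": 0.05,
        }
        THRESHOLD = 0.40  # assumed cutoff above which the absence feels "justified"

        total = sum(reasons.values())
        verdict = "skip the wedding" if total > THRESHOLD else "attend anyway"
        print(f"aggregate weight {total:.2f} -> {verdict}")  # 0.50 -> skip the wedding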
  • As de Waal and many other biologists documented in numerous studies, real jungles—unlike the one in our imagination—are full of cooperation, symbiosis, and altruism displayed by countless animals, plants, fungi, and even bacteria. Eighty percent of all land plants, for example, rely on symbiotic relationships with fungi, and almost 90 percent of vascular plant families enjoy symbiotic relationships with microorganisms. If organisms in the rain forests of Amazonia, Africa, or India abandoned cooperation in favor of an all-out competition for hegemony, the rain forests and all their inhabitants would quickly die. That’s the law of the jungle.
  • While many information networks do privilege order over truth, no network can survive if it ignores truth completely. As for individual humans, we tend to be genuinely interested in truth rather than only in power.
  • Even if Homo sapiens destroys itself, the universe will keep going about its business as usual. It took four billion years for terrestrial evolution to produce a civilization of highly intelligent apes. If we are gone, and it takes evolution another hundred million years to produce a civilization of highly intelligent rats, it will. The universe is patient.
  • A book is a nexus between author and readers. It is a link connecting many minds together, which exists only when it is read.
  • Nobody disputes that humans today have a lot more information and power than in the Stone Age, but it is far from certain that we understand ourselves and our role in the universe much better.
  • Why are we so good at accumulating more information and power, but far less successful at acquiring wisdom?
  • According to this view, racists are ill-informed people who just don’t know the facts of biology and history. They think that “race” is a valid biological category, and they have been brainwashed by bogus conspiracy theories. The remedy to racism is therefore to provide people with more biological and historical facts. It may take time, but in a free market of information sooner or later truth will prevail.
  • The naive view is of course more nuanced and thoughtful than can be explained in a few paragraphs, but its core tenet is that information is an essentially good thing, and the more we have of it, the better. Given enough information and enough time, we are bound to discover the truth about things ranging from viral infections to racist biases, thereby developing not only our power but also the wisdom necessary to use that power well.
  • In its more extreme versions, populism posits that there is no objective truth at all and that everyone has “their own truth,” which they wield to vanquish rivals. According to this worldview, power is the only reality. All social interactions are power struggles, because humans are interested only in power. The claim to be interested in something else—like truth or justice—is nothing more than a ploy to gain power. Whenever and wherever populism succeeds in disseminating the view of information as a weapon, language itself is undermined. Nouns like “facts” and adjectives like “accurate” and “truthful” become elusive. Such words are not taken as pointing to a common objective reality. Rather, any talk of “facts” or “truth” is bound to prompt at least some people to ask, “Whose facts and whose truth are you referring to?”
  • This radical empiricist position implies that while large-scale institutions like political parties, courts, newspapers, and universities can never be trusted, individuals who make the effort can still find the truth by themselves.
  • This approach may sound scientific and may appeal to free-spirited individuals, but it leaves open the question of how human communities can cooperate to build health-care systems or pass environmental regulations, which demand large-scale institutional organization. Is a single individual capable of doing all the necessary research to decide whether the earth’s climate is heating up and what should be done about it? How would a single person go about collecting climate data from throughout the world, not to mention obtaining reliable records from past centuries? Trusting only “my own research” may sound scientific, but in practice it amounts to believing that there is no objective truth.
  • The populists claim that the articles you read in The New York Times or in Science are just an elitist ploy to gain power, but what you read in the Bible, the Quran, or the Vedas is absolute truth.
  • It is always tricky to define fundamental concepts. Since they are the basis for everything that follows, they themselves seem to lack any basis of their own. Physicists have a hard time defining matter and energy, biologists have a hard time defining life, and philosophers have a hard time defining reality.
  • Any object can be information—or not. This makes it difficult to define what information is.
  • The naive view of information argues that objects are defined as information in the context of truth seeking. Something is information if people use it to try to discover the truth. This view links the concept of information with the concept of truth and assumes that the main role of information is to represent reality. There is a reality “out there,” and information is something that represents that reality and that we can therefore use to learn about reality.
  • truth is an accurate representation of reality.
  • Underlying the notion of truth is the premise that there exists one universal reality.
  • Taken to extremes, such a pursuit of accuracy may lead us to try to represent the world on a one-to-one scale, as in the famous Jorge Luis Borges story “On Exactitude in Science” (1946). In this story Borges tells of a fictitious ancient empire that became obsessed with producing ever more accurate maps of its territory, until eventually it produced a map with a one-to-one scale. The entire empire was covered with a map of the empire.
  • A one-to-one map may look like the ultimate representation of reality, but tellingly it is no longer useful.
  • Truth, then, isn’t a one-to-one representation of reality. Rather, truth is something that brings our attention to certain aspects of reality while inevitably ignoring other aspects. No account of reality is 100 percent accurate, but some accounts are nevertheless more truthful than others.
  • Misinformation is an honest mistake, occurring when someone tries to represent reality but gets it wrong. Disinformation is a deliberate lie, occurring when someone consciously intends to distort our view of reality.
  • What the example of astrology illustrates is that errors, lies, fantasies, and fictions are information, too. Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn’t to represent a preexisting reality.
  • Most symphonies, melodies, and tunes don’t represent anything, which is why it makes no sense to ask whether they are true or false. Over the years people have created a lot of bad music, but not fake music. Without representing anything, music nevertheless does a remarkable job in connecting large numbers of people and synchronizing their emotions and movements. Music can make soldiers march in formation, clubbers sway together, church congregations clap in rhythm, and sports fans chant in unison.
  • If DNA represented reality, we could have asked questions like “Does zebra DNA represent reality more accurately than lion DNA?” or “Is the DNA of one zebra telling the truth, while another zebra is misled by her fake DNA?” These, of course, are nonsensical questions. We might evaluate DNA by the fitness of the organism it produces, but not by truthfulness. While it is common to talk about DNA “errors,” this refers only to mutations in the process of copying DNA—not to a failure to represent reality accurately. A mutation that inhibits the production of adrenaline reduces fitness, causing the network of cells to disintegrate, as when the zebra is killed and its trillions of cells lose connection with one another.
  • Crucially, errors in the copying of DNA don’t always reduce fitness. Once in a blue moon, they increase fitness. Without such mutations, there would be no process of evolution. All life-forms exist thanks to genetic “errors.” The wonders of evolution are possible because DNA doesn’t represent any preexisting realities; it creates new realities.
  • If the main job of information had been to represent reality accurately, it would have been hard to explain why the Bible became one of the most influential texts in history.
  • The Bible makes many serious errors in its description of both human affairs and natural processes.
  • The Bible routinely depicts epidemics as divine punishment for human sins and claims they can be stopped or prevented by prayers and religious rituals. However, epidemics are of course caused by pathogens and can be stopped or prevented by following hygiene rules and using medicines and vaccines. This is today widely accepted even by religious leaders like the pope, who during the COVID-19 pandemic advised people to self-isolate, instead of congregating to pray together.
  • Yet while the Bible has done a poor job in representing the reality of human origins, migrations, and epidemics, it has nevertheless been very effective in connecting billions of people and creating the Jewish and Christian religions. Like DNA initiating chemical processes that bind billions of cells into organic networks, the Bible initiated social processes that bonded billions of people into religious networks. And just as a network of cells can do things that single cells cannot, so a religious network can do things that individual humans cannot, like building temples, maintaining legal systems, celebrating holidays, and waging holy wars.
  • This is why the naive view is wrong to believe that creating more powerful information technology will necessarily result in a more truthful understanding of the world. If no additional steps are taken to tilt the balance in favor of truth, an increase in the amount and speed of information is likely to swamp the relatively rare and expensive truthful accounts by much more common and cheap types of information.
  • Contrary to what the naive view believes, Homo sapiens didn’t conquer the world because we are talented at turning information into an accurate map of reality. Rather, the secret of our success is that we are talented at using information to connect lots of individuals. Unfortunately, this ability often goes hand in hand with believing in lies, errors, and fantasies. This is why even technologically advanced societies like Nazi Germany and the Soviet Union have been prone to hold delusional ideas, without their delusions necessarily weakening them.
  • What enabled different bands to cooperate is that evolutionary changes in brain structure and linguistic abilities apparently gave Sapiens the aptitude to tell and believe fictional stories and to be deeply moved by them. Instead of building a network from human-to-human chains alone—as the Neanderthals, for example, did—stories provided Homo sapiens with a new type of chain: human-to-story chains. In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story.
  • For example, over the decades the Coca-Cola corporation has invested tens of billions of dollars in advertisements that tell and retell the story of the Coca-Cola drink. People have seen and heard the story so often that many have come to associate a certain concoction of flavored water with fun, happiness, and youth (as opposed to tooth decay, obesity, and plastic waste). That’s branding.
  • People think they connect to the person, but in fact they connect to the story told about the person, and there is often a huge gulf between the two.
  • Though no contemporary portrait of Jesus has survived, and though the Bible never describes what he looked like, imaginary renderings of him have become some of the most recognizable icons in the world.
  • It should be stressed that the creation of the Jesus story was not a deliberate lie. People like Saint Paul, Tertullian, Saint Augustine, and Martin Luther didn’t set out to deceive anyone. They projected their deeply felt hopes and feelings on the figure of Jesus, in the same way that all of us routinely project our feelings on our parents, lovers, and leaders. While branding campaigns are occasionally a cynical exercise of disinformation, most of the really big stories of history have been the result of emotional projections and wishful thinking. True believers play a key role in the rise of every major religion and ideology, and the Jesus story changed history because it gained an immense number of true believers.
  • So every year, in the most important celebration of the Jewish calendar, millions of Jews put on a show that they remember things that they didn’t witness and that probably never happened at all. As numerous modern studies indicate, repeatedly retelling a fake memory eventually causes the person to adopt it as a genuine recollection. When two Jews encounter each other for the first time, they can immediately feel that they both belong to the same family, that they were together as slaves in Egypt, and that they were together at Mount Sinai. That’s a powerful bond that has sustained the Jewish network over many centuries and continents.
  • Indeed, stories can even create an entirely new level of reality. As far as we know, prior to the emergence of stories the universe contained just two levels of reality. Stories added a third.
  • The two levels of reality that preceded storytelling are objective reality and subjective reality. Objective reality consists of things like stones, mountains, and asteroids—things that exist whether we are aware of them or not. An asteroid hurtling toward planet Earth, for example, exists even if nobody knows it’s out there. Then there is subjective reality: things like pain, pleasure, and love that aren’t “out there” but rather “in here.” Subjective things exist in our awareness of them. An unfelt ache is an oxymoron.
  • A typical pizza contains between fifteen hundred and twenty-five hundred calories. In contrast, the financial value of money—and pizzas—depends entirely on our beliefs. How many pizzas can you purchase for a dollar, or for a bitcoin? In 2010, Laszlo Hanyecz bought two pizzas for 10,000 bitcoins. It was the first known commercial transaction involving bitcoin—and with hindsight, also the most expensive pizza ever. By November 2021, a single bitcoin was valued at more than $69,000, so the bitcoins Hanyecz paid for his two pizzas were worth $690 million, enough to purchase millions of pizzas. While the caloric value of pizza is an objective reality that remained the same between 2010 and 2021, the financial value of bitcoin is an intersubjective reality that changed dramatically during the same period, depending on the stories people told and believed about bitcoin.
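    The arithmetic in that comparison is easy to check. A minimal sketch in Python, using the figures quoted in the passage (the per-pizza price is an assumed round number for illustration):

        # Figures from the passage above; pizza price is a hypothetical assumption.
        btc_paid = 10_000              # bitcoins Hanyecz spent on two pizzas in 2010
        usd_per_btc_nov_2021 = 69_000  # approximate price of one bitcoin, November 2021
        usd_per_pizza = 20             # assumed price of a single pizza

        value_2021 = btc_paid * usd_per_btc_nov_2021
        print(f"2021 value of the payment: ${value_2021:,}")           # $690,000,000
        print(f"pizzas purchasable: {value_2021 // usd_per_pizza:,}")  # 34,500,000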
  • In fact, all relations between large-scale human groups are shaped by stories, because the identities of these groups are themselves defined by stories. There are no objective definitions for who is British, American, Norwegian, or Iraqi; all these identities are shaped by national and religious myths that are constantly challenged and revised.
  • This is good news. If history had been shaped solely by material interests and power struggles, there would be no point talking to people who disagree with us. Any conflict would ultimately be the result of objective power relations, which cannot be changed merely by talking. In particular, if privileged people can see and believe only those things that enshrine their privileges, how can anything except violence persuade them to renounce those privileges and alter their beliefs? Luckily, since history is shaped by intersubjective stories, sometimes we can avert conflict and make peace by talking with people, changing the stories in which they and we believe, or coming up with a new story that everyone can accept.
  • History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories.
  • information leads to truth, and knowing the truth helps people to gain both power and wisdom.
  • Just as most Jews forgot that rabbis curated the Old Testament, so most Christians forgot that church councils curated the New Testament, and came to view it simply as the infallible word of God.
  • But print wasn’t the root cause of the scientific revolution. The only thing the printing press did was to faithfully reproduce texts. The machine had no ability to come up with any new ideas of its own. Those who connect print to science assume that the mere act of producing and spreading more information inevitably leads people to the truth. In fact, print allowed the rapid spread not only of scientific facts but also of religious fantasies, fake news, and conspiracy theories. Perhaps the most notorious example of the latter was the belief in a worldwide conspiracy of satanic witches, which led to the witch-hunt craze that engulfed early modern Europe.
  • Witches were not an objective reality. Nobody in early modern Europe had sex with Satan or was capable of flying on broomsticks and creating hailstorms. But witches became an intersubjective reality. Like money, witches were made real by exchanging information about witches.
  • As the witch-hunting bureaucracy generated more and more information, it became harder to dismiss all that information as pure fantasy. Could it be that the entire silo of witch-hunting data did not contain a single grain of truth in it? What about all the books written by learned churchmen? What about all the protocols of trials conducted by esteemed judges? What about the tens of thousands of documented confessions?
  • The new intersubjective reality was so convincing that even some people accused of witchcraft came to believe that they were indeed part of a worldwide satanic conspiracy. If everybody said so, it must be true. As discussed in chapter 2, humans are susceptible to adopting fake memories. At least some early modern Europeans dreamed or fantasized about summoning devils, having sex with Satan, and practicing witchcraft, and when accused of being witches, they confused their dreams and fantasies with reality.
  • Even after expressing his horror at the insanity of the witch hunt in Würzburg, the chancellor nevertheless expressed his firm belief in the satanic conspiracy of witches. He didn’t witness any witchcraft firsthand, but so much information about witches was circulating that it was difficult for him to doubt all of it. Witch hunts were a catastrophe caused by the spread of toxic information. They are a prime example of a problem that was created by information, and was made worse by more information.
  • This was a conclusion reached not just by modern scholars but also by some perceptive observers at the time. Alonso de Salazar Frías, a Spanish inquisitor, made a thorough investigation of witch hunts and witch trials in the early seventeenth century. He concluded that he had “not found one single proof nor even the slightest indication from which to infer that one act of witchcraft has actually taken place,” and that “there were neither witches nor bewitched until they were talked and written about.” Salazar Frías well understood the meaning of intersubjective realities and correctly identified the entire witch-hunting industry as an intersubjective information sphere.
  • The history of print and witch-hunting indicates that an unregulated information market doesn’t necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth. For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favor of the facts. However, as the history of the Catholic Church indicates, such institutions might use their curation power to quash any criticism of themselves, labeling all alternative views erroneous and preventing the institution’s own errors from being exposed and corrected. Is it possible to establish better curation institutions that use their power to further the pursuit of truth rather than to accumulate more power for themselves?
  • A church typically told people to trust it because it possessed the absolute truth, in the form of an infallible holy book. A scientific institution, in contrast, gained authority because it had strong self-correcting mechanisms that exposed and rectified the errors of the institution itself. It was these self-correcting mechanisms, not the technology of printing, that were the engine of the scientific revolution.
  • Scientific institutions do reach a broad consensus about the accuracy of certain theories—such as quantum mechanics or the theory of evolution—but only because these theories have managed to survive intense efforts to disprove them, launched not only by outsiders but by members of the institution itself.
  • For example, in March 2000, Pope John Paul II conducted a special ceremony in which he asked forgiveness for a long list of historical crimes against Jews, heretics, women, and indigenous people. He apologized “for the use of violence that some have committed in the service of truth.” This terminology implied that the violence was the fault of “some” misguided individuals who didn’t understand the truth taught by the church. The pope didn’t accept the possibility that perhaps these individuals understood exactly what the church was teaching and that these teachings just were not the truth.
  • Of course, though the church doesn’t acknowledge it officially, over time it has changed its institutional structures, its core teachings, and its interpretation of scripture. The Catholic Church of today is far less antisemitic and misogynist than it was in medieval and early modern times. Pope Francis is far more tolerant of indigenous cultures than was Pope Nicholas V. There is an institutional self-correcting mechanism at work here, which reacts both to external pressures and to internal soul-searching. But what characterizes self-correcting in institutions like the Catholic Church is that even when it happens, it is denied rather than celebrated. The first rule of changing church teachings is that you never admit to changing church teachings.
  • For instance, as Catholics like Pope Francis himself now reconsider the church’s teachings on homosexuality, they find it difficult to simply acknowledge past mistakes and change the teachings. If a future pope eventually issues an apology for the mistreatment of LGBTQ people, the way to do it will probably be to again shift the blame onto the shoulders of some overzealous individuals who misunderstood the gospel. To maintain its religious authority the Catholic Church has had no choice but to deny the existence of institutional self-correction. For the church fell into the infallibility trap. Once it based its religious authority on a claim to infallibility, any public admission of institutional error—even on relatively minor issues—could completely destroy its authority.
  • Psychiatry offers numerous similar examples of strong self-correcting mechanisms. On the shelf of most psychiatrists you can find the DSM—the Diagnostic and Statistical Manual of Mental Disorders. It is occasionally nicknamed the psychiatrists’ bible. But there is a crucial difference between the DSM and the Bible. First published in 1952, the DSM is revised every decade or two, with the fifth edition appearing in 2013. Over the years, many disorders have been redefined, new ones have been added, and others have been deleted. Homosexuality, for example, was listed in 1952 as a sociopathic personality disturbance but was removed from the DSM in 1974. It took just twenty-two years to correct this error in the DSM. That’s not a holy book. That’s a scientific text.
  • An institution can call itself by whatever name it wants, but if it lacks a strong self-correcting mechanism, it is not a scientific institution.
  • Unfortunately, things are far more complicated. There is a reason why institutions like the Catholic Church and the Soviet Communist Party eschewed strong self-correcting mechanisms. While such mechanisms are vital for the pursuit of truth, they are costly in terms of maintaining order. Strong self-correcting mechanisms tend to create doubts, disagreements, conflicts, and rifts and to undermine the myths that hold the social order together.
  • Like the ten-year-old “witch” Hansel Pappenheimer, the eleven-year-old “kulak” Antonina Golovina found herself cast into an intersubjective category invented by human mythmakers and imposed by ubiquitous bureaucrats. The mountains of information collected by Soviet bureaucrats about the kulaks weren’t the objective truth about them, but they imposed a new intersubjective Soviet truth. Knowing that someone was labeled a kulak was a very important thing to know about a Soviet person, even though the label was entirely bogus.
  • for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions.
  • The computer revolution is bringing us face-to-face with Plato’s cave, with maya, with Descartes’s demon.
  • As discussed in previous chapters, contrary to the naive view, information is often used to create order rather than discover truth.
  • As we have seen again and again throughout history, in a completely free information fight, truth tends to lose. To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling.
  • Such an ultimate goal by definition cannot be aligned with anything higher than itself, because there is nothing higher.
  • I lived much of my life in one of the holiest places on earth—the city of Jerusalem. Objectively, it is an ordinary place. As you walk around Jerusalem, you see houses, trees, rocks, cars, people, cats, dogs, and pigeons, as in any other city. But many people nevertheless imagine it to be an extraordinary place, full of gods, angels, and holy stones. They believe in this so strongly that they sometimes fight over possession of the city or of specific holy buildings and sacred stones, most notably the Holy Rock, located under the Dome of the Rock on Temple Mount. The Palestinian philosopher Sari Nusseibeh observed that “Jews and Muslims, acting on religious beliefs and backed up by nuclear capabilities, are poised to engage in history’s worst-ever massacre of human beings, over a rock.”
  • An AI developed in one country could be used to unleash a deluge of fake news, fake money, and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.

Technology Ethics and Regulation

  • The Orthodox answer is no. As noted earlier, the Bible forbids working on the Sabbath, and rabbis argued that pressing an electrical button is “work,” because electricity is akin to fire, and it has long been established that kindling a fire is “work.” Does this mean that elderly Jews living in a Brooklyn high-rise must climb a hundred steps to their apartment in order to avoid working on the Sabbath? Well, Orthodox Jews invented a “Sabbath elevator,” which continually travels up and down the building, stopping on every floor, without anyone having to perform any “work” by pressing an electrical button.
  • The survival of human civilization too is under threat. Because we still seem unable to build an industrial society that is also ecologically sustainable, the vaunted prosperity of the present human generation comes at a terrible cost to other sentient beings and to future human generations. Maybe we’ll eventually find a way—perhaps with the help of AI—to create ecologically sustainable industrial societies, but until that day the jury on Blake’s satanic mills is still out.
  • Take, for example, our relationship with our family physician. Over many years she may accumulate a lot of sensitive information on our medical conditions, family life, sexual habits, and unhealthy vices. Perhaps we don’t want our boss to know that we got pregnant, we don’t want our colleagues to know we have cancer, we don’t want our spouse to know we are having an affair, and we don’t want the police to know we take recreational drugs, but we trust our physician with all this information so that she can take good care of our health. If she sells this information to a third party, it is not just unethical; it is illegal.
  • Much the same is true of the information that our lawyer, our accountant, or our therapist accumulates. Having access to our personal life comes with a fiduciary duty to act in our best interests. Why not extend this obvious and ancient principle to computers and algorithms, starting with the powerful algorithms of Google, Baidu, and TikTok? At present, we have a serious problem with the business model of these data hoarders. While we pay our physicians and lawyers for their services, we usually don’t pay Google and TikTok. They make their money by exploiting our personal information. That’s a problematic business model, one that we would hardly tolerate in other contexts. For example, we don’t expect to get free shoes from Nike in exchange for giving Nike all our private information and allowing Nike to do what it wants with it. Why should we agree to get free email services, social connections, and entertainment from the tech giants in exchange for giving them control of our most sensitive data?
  • But before we rush to embrace the dynamic algorithm, we should note that it too has a downside. Human life is a balancing act between endeavoring to improve ourselves and accepting who we are. If the goals of the dynamic algorithm are dictated by an ambitious government or by ruthless corporations, the algorithm is likely to morph into a tyrant, relentlessly demanding that I exercise more, eat less, change my hobbies, and alter numerous other habits, or else it would report me to my employer or downgrade my social credit score. History is full of rigid caste systems that denied humans the ability to change, but it is also full of dictators who tried to mold humans like clay. Finding the middle path between these two extremes is a never-ending task. If we indeed give a national health-care system vast power over us, we must create self-correcting mechanisms that will prevent its algorithms from becoming either too rigid or too demanding.
  • When looking for a relationship, we want to connect with a conscious entity, but if we have already established a relationship with an entity, we tend to assume it must be conscious. Thus whereas scientists, lawmakers, and the meat industry often demand impossible standards of evidence in order to acknowledge that cows and pigs are conscious, pet owners generally take it for granted that their dog or cat is a conscious being capable of experiencing pain, love, and numerous other feelings. In truth, we have no way to verify whether anyone—a human, an animal, or a computer—is conscious. We regard entities as conscious not because we have proof of it but because we develop intimate relationships with them and become attached to them.
  • While individual laypersons may be unable to vet complex algorithms, a team of experts getting help from their own AI sidekicks can potentially assess the fairness of algorithmic decisions even more reliably than anyone can assess the fairness of human decisions. After all, while human decisions may seem to rely on just those few data points we are conscious of, in fact our decisions are subconsciously influenced by thousands of additional data points. Being unaware of these subconscious processes, when we deliberate on our decisions or explain them, we often engage in post hoc single-point rationalizations for what really happens as billions of neurons interact inside our brain. Accordingly, if a human judge sentences us to six years in prison, how can we—or indeed the judge—be sure that the decision was shaped only by fair considerations and not by a subconscious racial bias or by the fact that the judge was hungry?
  • What’s true of counterfeiting money should also be true of counterfeiting humans. If governments took decisive action to protect trust in money, it makes sense to take equally decisive measures to protect trust in humans. Prior to the rise of AI, one human could pretend to be another, and society punished such frauds. But society didn’t bother to outlaw the creation of counterfeit humans, since the technology to do so didn’t exist. Now that AI can pass itself off as human, it threatens to destroy trust between humans and to unravel the fabric of society. Dennett suggests, therefore, that governments should outlaw fake humans as decisively as they have previously outlawed fake money.
  • Now, ironically, democracy may prove impossible because information technology is becoming too sophisticated. If unfathomable algorithms take over the conversation, and particularly if they quash reasoned arguments and stoke hate and confusion, public discussion cannot be maintained. Yet if democracies do collapse, it will likely result not from some kind of technological inevitability but from a human failure to regulate the new technology wisely.
  • In the World Cup, all national teams agree not to use performance-enhancing drugs, because everybody realizes that if they went down that path, the World Cup would eventually devolve into a competition between biochemists.
  • Nations will obviously continue to compete in the development of new technology, but sometimes they should agree to limit the development and deployment of dangerous technologies like autonomous weapons and manipulative algorithms—not purely out of altruism, but for their own self-preservation.
  • While we have experience in regulating dangerous technologies like nuclear and biological weapons, the regulation of AI will demand unprecedented levels of trust and self-discipline, for two reasons. First, it is easier to hide an illicit AI lab than an illicit nuclear reactor. Second, AI has far more dual civilian-military uses than nuclear bombs.
  • If we are so wise, why are we so self-destructive? We are at one and the same time both the smartest and the stupidest animals on earth. We are so smart that we can produce nuclear missiles and superintelligent algorithms. And we are so stupid that we go ahead and produce these things even though we’re not sure we can control them, and failing to control them could destroy us. Why do we do it? Does something in our nature compel us to go down the path of self-destruction?
  • Never summon powers you cannot control.
  • The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Prophets and theologians have summoned powerful spirits that were supposed to bring love and joy but occasionally ended up flooding the world with blood.
  • There are also complicated and ongoing discussions concerning the list of rights that should be included in the two baskets. Who determined that freedom of religion is a basic human right? Should internet access be defined as a civil right? And what about animal rights? Or the rights of AI?
  • While the inflammatory anti-Rohingya messages were created by flesh-and-blood extremists like the Buddhist monk Wirathu, it was Facebook’s algorithms that decided which posts to promote. Amnesty International found that “algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya.”
  • The algorithms were the kingmakers. They chose what to place at the top of the users’ news feed, which content to promote, and which Facebook groups to recommend that users join. The algorithms could have chosen to recommend sermons on compassion or cooking classes, but they decided to spread hate-filled conspiracy theories. Recommendations from on high can have enormous sway over people.
  • In 2016–17, Facebook’s business model relied on maximizing “user engagement.” This referred to the time users spent on the platform, as well as to any action they took, such as clicking the like button or sharing a post with friends. As user engagement increased, Facebook collected more data, sold more advertisements, and captured a larger share of the information market. In addition, increases in user engagement impressed investors, thereby driving up the price of Facebook’s stock. The more time people spent on the platform, the richer Facebook became. In line with this business model, human managers provided the company’s algorithms with a single overriding goal: increase user engagement. The algorithms then discovered by experimenting on millions of users that outrage generated engagement. Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage. (A toy simulation of this dynamic appears after this list.)
  • When we write computer code, we aren’t just designing a product. We are redesigning politics, society, and culture, and so we had better have a good grasp of politics, society, and culture. We also need to take responsibility for what we are doing.
  • When accused of creating social and political mayhem, they hide behind arguments like “We are just a platform. We are doing what our customers want and what the voters permit. We don’t force anyone to use our services, and we don’t violate any existing law. If customers didn’t like what we do, they would leave. If voters didn’t like what we do, they would pass laws against us. Since the customers keep asking for more, and since no law forbids what we do, everything must be okay.”
  • The problem goes even deeper. The principles that “the customer is always right” and that “the voters know best” presuppose that customers, voters, and politicians know what is happening around them. They presuppose that customers who choose to use TikTok and Instagram comprehend the full consequences of this choice, and that voters and politicians who are responsible for regulating Apple and Huawei fully understand the business models and activities of these corporations. They presuppose that people know the ins and outs of the new information network and give it their blessing.
  • This has far-reaching implications for taxation. Taxes aim to redistribute wealth. They take a cut from the wealthiest individuals and corporations, in order to provide for everyone. However, a tax system that knows how to tax only money will soon become outdated as many transactions no longer involve money.
  • Some people—like the engineers and executives of high-tech corporations—are way ahead of politicians and voters and are better informed than most of us about the development of AI, cryptocurrencies, social credits, and the like. Unfortunately, most of them don’t use their knowledge to help regulate the explosive potential of the new technologies. Instead, they use it to make billions of dollars—or to accumulate petabits of information.
  • For every computer-science graduate who wants to be the next Audrey Tang, there are probably many more who want to be the next Jobs, Zuckerberg, or Musk and build a multibillion-dollar corporation rather than become an elected public servant. This leads to a dangerous information asymmetry. The people who lead the information revolution know far more about the underlying technology than the people who are supposed to regulate it. Under such conditions, what’s the meaning of chanting that the customer is always right and that the voters know best?
  • The most important thing to remember is that technology, in itself, is seldom deterministic. Belief in technological determinism is dangerous because it excuses people of all responsibility.
  • Even after a particular tool is developed, it can be put to many uses. We can use a knife to murder a person, to save their life in surgery, or to cut vegetables for their dinner. The knife doesn’t force our hand. It’s a human choice.
  • YouTubers who were particularly intent on gaining attention noticed that when they posted an outrageous video full of lies, the algorithm rewarded them by recommending the video to numerous users and increasing the YouTubers’ popularity and income. In contrast, when they dialed down the outrage and stuck to the truth, the algorithm tended to ignore them. Within a few months of such reinforcement learning, the algorithm turned many YouTubers into trolls.
  • A secret internal Facebook memo from August 2019, leaked by the whistleblower Frances Haugen, stated, “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and [its] family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”
  • In the 2010s the YouTube and Facebook management teams were bombarded with warnings from their human employees—as well as from outside observers—about the harm being done by the algorithms, but the algorithms themselves never raised the alarm.
  • Some might hope that through a careful process of deliberation, we might be able to define in advance the right goals for the computer network. This, however, is a very dangerous delusion.
  • For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on alignment with some higher goal. They have repeatedly been drawn to two potential solutions, known in philosophical jargon as deontology and utilitarianism.
  • The most famous attempt to define an intrinsically good rule was made by Immanuel Kant, a contemporary of Clausewitz and Napoleon. Kant argued that an intrinsically good rule is any rule that I would like to make universal. According to this view, a person about to murder someone should stop and go through the following thought process: “I am now going to murder a human. Would I like to establish a universal rule saying that it is okay to murder humans? If such a universal rule is established, then someone might murder me. So there shouldn’t be a universal rule allowing murder. It follows that I too shouldn’t murder.” In simpler language, Kant reformulated the old Golden Rule: “Do unto others what you would have them do unto you” (Matthew 7:12).
  • Is there a way to define whom computers should care about, without getting bogged down by some intersubjective myth? The most obvious suggestion is to tell computers that they must care about any entity capable of suffering. While suffering is often caused by belief in local intersubjective myths, suffering itself is nonetheless a universal reality. Therefore, using the capacity to suffer in order to define the critical in-group grounds morality in an objective and universal reality. A self-driving car should avoid killing all humans—whether Buddhist or Muslim, French or Italian—and should also avoid killing dogs and cats, and any sentient robots that might one day exist. We may even refine this rule, instructing the car to care about different beings in direct proportion to their capacity to suffer. If the car has to choose between killing a human and killing a cat, it should drive over the cat, because presumably the cat has a lesser capacity to suffer. But if we go in that direction, we inadvertently desert the deontologist camp and find ourselves in the camp of their rivals—the utilitarians. (A sketch of such a suffering-weighted rule also appears after this list.)
  • The problem for utilitarians is that we don’t possess a calculus of suffering. We don’t know how many “suffering points” or “happiness points” to assign to particular events, so in complex historical situations it is extremely difficult to calculate whether a given action increases or decreases the overall amount of suffering in the world.
  • Utilitarianism is at its best in situations when the scales of suffering are very clearly tipped in one direction.
  • The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.
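To make the engagement dynamic described above concrete, here is a minimal, hypothetical sketch (not Facebook's actual system) of a recommender whose only reward signal is engagement. The content categories and engagement rates are invented assumptions; an epsilon-greedy bandit experimenting on simulated users ends up promoting whatever engages most.

```python
import random

# Invented engagement probabilities for three content categories.
# These numbers are assumptions for the sketch, not real data.
ENGAGEMENT_RATES = {
    "compassion_sermon": 0.02,
    "cooking_class": 0.05,
    "outrage_conspiracy": 0.30,
}

def simulate_user(category: str) -> int:
    """Return 1 if the simulated user engages (click/like/share), else 0."""
    return 1 if random.random() < ENGAGEMENT_RATES[category] else 0

def run_recommender(rounds: int = 100_000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy bandit whose single overriding goal is engagement."""
    shown = {c: 0 for c in ENGAGEMENT_RATES}
    engaged = {c: 0 for c in ENGAGEMENT_RATES}
    for _ in range(rounds):
        if random.random() < epsilon:
            # Explore: experiment on the user with a random category.
            choice = random.choice(list(ENGAGEMENT_RATES))
        else:
            # Exploit: promote whatever has engaged users most so far.
            choice = max(shown, key=lambda c: engaged[c] / shown[c] if shown[c] else 1.0)
        shown[choice] += 1
        engaged[choice] += simulate_user(choice)
    return shown

exposure = run_recommender()
print(exposure)
# With these assumed rates, "outrage_conspiracy" dominates the feed,
# even though no line of code ever names outrage as a goal.
```

Nothing in the sketch encodes a preference for outrage; the preference emerges entirely from the reward signal, which is the point of the passage.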
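The suffering-weighted rule discussed above can also be written down, which exposes exactly where it breaks: someone must supply the weights. A minimal sketch, with entirely invented numbers:

```python
# Hypothetical "capacity to suffer" weights. Choosing these numbers is the
# whole difficulty: there is no agreed calculus of suffering behind them.
SUFFERING_WEIGHTS = {"human": 1.0, "dog": 0.4, "cat": 0.3}

def expected_harm(outcome: dict) -> float:
    """Suffering-weighted harm of an outcome, e.g. {"cat": 1} = one cat hit."""
    return sum(SUFFERING_WEIGHTS[being] * count for being, count in outcome.items())

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver that minimizes suffering-weighted harm."""
    return min(options, key=lambda name: expected_harm(options[name]))

# The dilemma from the text: one maneuver hits a cat, the other a human.
print(choose_maneuver({"swerve": {"cat": 1}, "brake": {"human": 1}}))  # -> "swerve"
```

The moment the weights become numbers to be summed and minimized, the rule has turned utilitarian, which is exactly the slide the passage describes.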

Surveillance and Social Control

  • The bureaucrats then try to force the world to fit into these drawers, and if the fit isn’t very good, the bureaucrats push harder. Anyone who ever filled out an official form knows this only too well. When you fill out the form and none of the listed options fits your circumstances, you must adapt yourself to the form, rather than the form adapting to you.
  • Even when bureaucracy was a benign force, providing people with sewage systems, education, and security, it still tended to increase the gap between rulers and ruled. The system enabled the center to collect and record a lot more information about the people it governed, while the latter found it much more difficult to understand how the system itself worked.
  • In our family it became a sacred duty to preserve documents. Bank statements, electricity bills, expired student cards, letters from the municipality—if it had an official-looking stamp on it, it would be filed in one of the many folders in our cupboard. You never knew which of these documents might one day save your life.
  • It is a mistake, however, to imagine that just because computers could enable the creation of a total surveillance regime, such a regime is inevitable. Technology is rarely deterministic.
  • As with total surveillance regimes, so also with social credit systems, the fact that they could be created doesn’t mean that we must create them.
  • Imagine that the year is 2050, and the Great Leader is woken up at four in the morning by an urgent call from the Surveillance & Security Algorithm. “Great Leader, we are facing an emergency. I’ve crunched trillions of data points, and the pattern is unmistakable: the defense minister is planning to assassinate you in the morning and take power himself. The hit squad is ready, waiting for his command. Give me the order, though, and I’ll liquidate him with a precision strike.”
  • “Great Leader, I know what he said to you. I hear everything. But I also know what he said afterward to the hit squad. And for months I’ve been picking up disturbing patterns in the data.”
  • By giving so much power to the Surveillance & Security Algorithm, the Great Leader has placed himself in an impossible situation. If he distrusts the algorithm, he may be assassinated by the defense minister, but if he trusts the algorithm and purges the defense minister, he becomes the algorithm’s puppet. Whenever anyone tries to make a move against the algorithm, the algorithm knows exactly how to manipulate the Great Leader. Note that the algorithm doesn’t need to be a conscious entity to engage in such maneuvers. As Bostrom’s paper-clip thought experiment indicates—and as GPT-4 lying to the TaskRabbit worker demonstrated on a small scale—a nonconscious algorithm may seek to accumulate power and manipulate people even without having any human drives like greed or egotism.
  • No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even reengineer our bodies and minds—while we can no longer comprehend the forces that control us, let alone stop them.
  • Silicon chips can create spies that never sleep, financiers that never forget, and despots that never die. How will this change society, economics, and politics?
  • But hyper-centralized information networks also suffer from several big disadvantages. Since they don’t allow information to flow anywhere except through the official channels, if the official channels are blocked, the information cannot find an alternative means of transmission. And official channels are often blocked.
  • In other words, people weren’t choosing what to see. The algorithms were choosing for them.
  • By 2024, we are getting close to the point when a ubiquitous computer network can follow the population of entire countries twenty-four hours a day. This network doesn’t need to hire and train millions of human agents to follow us around; it relies on digital agents instead.
  • Suppose someone watched a hundred extremist videos on YouTube last month, is friends with a convicted terrorist, and is currently pursuing a doctorate in epidemiology in a laboratory containing samples of Ebola virus. Should that person be put on the “suspected terrorists” list? And what about someone who watched fifty extremist videos last month and is a biology undergraduate?
  • According to one report, that AI system “engages in mass surveillance of Pakistan’s mobile phone network, and then uses a machine learning algorithm on the cellular network metadata of 55 million people to try and rate each person’s likelihood of being a terrorist.” A former director of both the CIA and the NSA proclaimed that “we kill people based on metadata.” Skynet’s reliability has been severely criticized, but by the 2020s such technology had become far more sophisticated and had been deployed by many more governments.
  • At present, the smartphone is still a far more valuable surveillance tool than biometric sensors.
  • Peer-to-peer surveillance networks have obliterated that sense of privacy. If the staff fails to please a customer, the restaurant will get a bad review, which could affect the decision of thousands of potential customers in coming years. For better or worse, the balance of power tilts in favor of the customers, while the staff find themselves more exposed than before to the public gaze. As the author and journalist Linda Kinstler put it, “Before Tripadvisor, the customer was only nominally king. After, he became a veritable tyrant, with the power to make or break lives.” The same loss of privacy is felt today by millions of taxi drivers, barbers, beauticians, and other service providers. In the past, stepping into a taxi or barbershop meant stepping into someone’s private space. Now, when customers come into your taxi or barbershop, they bring cameras, microphones, a surveillance network, and thousands of potential viewers with them. This is the foundation of a nongovernmental peer-to-peer surveillance network.
  • For scoring those things that money can’t buy, there was an alternative nonmonetary system, which has been given different names: honor, status, reputation. What social credit systems seek is a standardized valuation of the reputation market. Social credit is a new points system that ascribes precise values even to smiles and family visits.
  • The idea of social credit is to expand this surveillance method from restaurants and hotels to everything. In the most extreme type of social credit system, every person gets an overall reputation score that takes into account whatever they do and determines everything they can do.
  • For example, you might earn 10 points for picking up trash from the street, get another 20 points for helping an old lady cross the road, and lose 15 points for playing the drums and disturbing the neighbors. If you get a high enough score, it might give you priority when buying train tickets or a leg up when applying to university. If you get a low score, potential employers may refuse to give you a job, and potential dates may refuse your advances. Insurance companies may demand higher premiums, and judges may inflict harsher sentences. (The arithmetic of such a system is sketched after this list.)
  • The Chinese government, for example, explains that its social credit systems could help fight corruption, scams, tax evasion, false advertising, and counterfeiting, and thereby establish more trust between individuals, between consumers and corporations, and between citizens and government institutions. Others may find systems that allocate precise values to every social action demeaning and inhuman. Even worse, a comprehensive social credit system will annihilate privacy and effectively turn life into a never-ending job interview. Anything you do, anytime, anywhere, might affect your chances of getting a job, a bank loan, a husband, or a prison sentence. You got drunk at a college party and did something legal but shameful? You participated in a political demonstration? You’re friends with someone who has a low credit score? This will be part of your job interview—or criminal sentencing—both in the short term and even decades later. The social credit system might thereby become a totalitarian control system.
  • But that does not mean that the computer network will always understand the world accurately. Information isn’t truth. A total surveillance system may form a very distorted understanding of the world and of human beings. Instead of discovering the truth about the world and about us, the network might use its immense power to create a new kind of world order and impose it on us.
  • In quantum mechanics the act of observing subatomic particles changes their behavior; it is the same with the act of observing humans. The more powerful our tools of observation, the greater the potential impact.
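The point arithmetic in the example above is simple enough to express directly. Here is a minimal sketch using the text's illustrative values (+10, +20, -15); the base score, the thresholds, and their consequences are invented for the example:

```python
# Point values taken from the example above; everything else is hypothetical.
POINT_VALUES = {
    "picked_up_trash": 10,
    "helped_elderly_cross_road": 20,
    "played_drums_loudly": -15,
}

def social_credit_score(actions: list, base: int = 100) -> int:
    """Total a citizen's score from a log of observed actions."""
    return base + sum(POINT_VALUES.get(action, 0) for action in actions)

def consequences(score: int) -> list:
    """Map a score to privileges or penalties (invented cutoffs)."""
    effects = []
    if score >= 110:
        effects.append("priority train tickets, university leg-up")
    if score < 90:
        effects.append("higher premiums, harsher sentencing, rejected dates")
    return effects

log = ["picked_up_trash", "helped_elderly_cross_road", "played_drums_loudly"]
score = social_credit_score(log)  # 100 + 10 + 20 - 15 = 115
print(score, consequences(score))
```

The arithmetic is trivial; what makes the system totalitarian is the scope of the action log, which in the extreme version includes everything a person does, anywhere, forever.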

Historical Information Systems

  • Unlike national poems and myths, which can be stored in our brains, complex national taxation and administration systems have required a unique nonorganic information technology in order to function. This technology is the written document.
  • This limit could be transcended, however, by writing documents. The documents didn’t represent an objective empirical reality; the reality was the documents themselves. As we shall see in later chapters, written documents thereby provided precedents and models that would eventually be used by computers. The ability of computers to create intersubjective realities is an extension of the power of clay tablets and pieces of paper.
  • The power of documents to create intersubjective realities was beautifully manifested in the Old Assyrian dialect, which treated documents as living things that could also be killed. Loan contracts were “killed” (duākum) when the debt was repaid. This was done by destroying the tablet, adding some mark to it, or breaking its seal. The loan contract didn’t represent reality; it was the reality. If somebody repaid the loan but failed to “kill the document,” the debt was still owed. Conversely, if somebody didn’t repay the loan but the document “died” in some other way—perhaps the dog ate it—the debt was no more. The same happens with money. If your dog eats a hundred-dollar bill, those hundred dollars cease to exist.
  • When industrial technology began spreading globally in the nineteenth century, it upended traditional economic, social, and political structures and opened the way to create entirely new societies, which were potentially more affluent and peaceful. However, learning how to build benign industrial societies was far from straightforward and involved many costly experiments and hundreds of millions of victims.
  • The Bible not only sanctified slavery in the Ten Commandments and numerous other passages but also placed a curse on the offspring of Ham—the alleged forefather of Africans—saying that “the lowest of slaves will he be to his brothers” (Genesis 9:25).
  • The financial system managed to protect itself for thousands of years by enacting laws against counterfeiting money. As a result, only a relatively small percentage of money in circulation was forged, and people’s trust in it was maintained.
  • The clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation. A hundred thousand years ago, Sapiens could cooperate only at the level of bands. Over the millennia, we have found ways to create communities of strangers, first on the level of tribes and eventually on the level of religions, trade networks, and states. Realists should note that states are not the fundamental particles of human reality, but rather the product of arduous processes of building trust and cooperation.
  • The decline of war didn’t result from a divine miracle or from a metamorphosis in the laws of nature. It resulted from humans changing their own laws, myths, and institutions and making better decisions. Unfortunately, the fact that this change has stemmed from human choice also means that it is reversible. Technology, economics, and culture are ever changing. In the early 2020s, more leaders are again dreaming of martial glory, armed conflicts are on the rise, and military budgets are increasing.
  • One of the chief lessons of history is that many of the things that we consider natural and eternal are, in fact, man-made and mutable. Accepting that conflict is not inevitable, however, should not make us complacent. Just the opposite. It places a heavy responsibility on all of us to make good choices. It implies that if human civilization is consumed by conflict, we cannot blame it on any law of nature or any alien technology. It also implies that if we make the effort, we can create a better world. This isn’t naïveté; it’s realism. Every old thing was once new. The only constant of history is change.
  • As we saw in our discussion of Clausewitz’s theory of war, there is no rational way to define ultimate goals. The state interests of Russia, Israel, Myanmar, or any other country can never be deduced from some mathematical or physical equation; they are always the supposed moral of a historical narrative.
  • The stories of the Bible, for example, were essential for the Christian Church, but there would have been no Bible if church bureaucrats hadn’t curated, edited, and disseminated these stories.
  • One method they developed to communicate with their British operators involved window shutters. Sarah Aaronsohn, a NILI commander, had a house overlooking the Mediterranean. She signaled British ships by closing or opening a particular shutter, according to a predetermined code. Numerous people, including Ottoman soldiers, could obviously see the shutter, but nobody other than NILI agents and their British operators understood it was vital military information. So, when is a shutter just a shutter, and when is it information?
  • The very act of counting entities—whether apples, oranges, or soldiers—necessarily focuses attention on the similarities between these entities while discounting differences. For example, saying only that there were ten thousand Ottoman soldiers in Gaza neglected to specify whether some were experienced veterans and others were green recruits.
  • Allegedly, by allowing people to exchange information much more freely than before, print led to the scientific revolution. There is a grain of truth in this. Without print, it would certainly have been much harder for Copernicus, Galileo, and their colleagues to develop and spread their ideas.
  • People began denouncing one another for witchcraft on the flimsiest evidence, often to avenge personal slights or to gain economic and political advantage. Once an official investigation began, the accused were often doomed. The inquisitorial methods recommended by The Hammer of the Witches were truly diabolical. If the accused confessed to being a witch, they were executed and their property divided between the accuser, the executioner, and the inquisitors. If the accused refused to confess, this was taken as evidence of their demonic obstinacy, and they were then tortured in horrendous ways, their fingers broken, their flesh cut with hot pincers, their bodies stretched to the breaking point or submerged in boiling water.
  • An entire witch-hunting bureaucracy dedicated itself to such exchanges. Theologians, lawyers, inquisitors, and the owners of printing presses made a living by collecting and producing information about witches, cataloging different species of witches, investigating how witches behaved, and recommending how they could be exposed and defeated. Professional witch-hunters offered their services to governments and municipalities, charging large sums of money.
  • Many of the lessons learned from the canonization of the Bible, the early modern witch hunts, and the Stalinist collectivization campaign will remain relevant, and perhaps have to be relearned. However, the current information revolution also has some unique features, different from—and potentially far more dangerous than—anything we have seen before.
  • History is full of decisive military victories that led to political disasters.
  • Qatar, Tonga, Tuvalu, Kiribati, and the Solomon Islands all indicate that we are living in a postimperial era. They gained their independence from the British Empire in the 1970s, as part of the final demise of the European imperial order. The leverage they now have in the international arena testifies that in the first quarter of the twenty-first century power is distributed among a relatively large number of players, rather than monopolized by a few empires.

Human-AI Interaction and Alignment

  • AI doesn’t have any emotions of its own, but it can nevertheless learn to recognize these patterns in humans. Actually, computers may outperform humans in recognizing human emotions, precisely because they have no emotions of their own. We yearn to be understood, but other humans often fail to understand how we feel, because they are too preoccupied with their own feelings. In contrast, computers will have an exquisitely fine-tuned understanding of how we feel, because they will learn to recognize the patterns of our feelings, while they have no distracting feelings of their own.
  • The Bible had a profound effect on billions of people, even though it was a mute document. Now try to imagine the effect of a holy book that not only can talk and listen but can get to know your deepest fears and hopes and constantly mold them.
  • The AI was doing exactly what the game was rewarding it to do—even though it was not what the humans were hoping for. That’s the essence of the alignment problem: rewarding A while hoping for B.
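That formula, rewarding A while hoping for B, fits in a few lines of code. The following is a toy stand-in for the game the passage alludes to, with invented actions and payoffs: the designers hope the agent finishes the course (B), but the reward function only counts points (A).

```python
# Toy illustration of "rewarding A while hoping for B". All names and
# numbers are invented; this is a sketch, not the actual game.
ACTIONS = {
    # action: (points gained = what is rewarded, progress = what is hoped for)
    "circle_point_buoy": (3, 0),   # farm points while going nowhere
    "advance_to_finish": (1, 1),   # make real progress, earn fewer points
}

def greedy_policy() -> str:
    """An agent that maximizes the literal reward signal: points."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][0])

points, progress = 0, 0
for _ in range(10):
    gained, moved = ACTIONS[greedy_policy()]
    points += gained
    progress += moved

print(f"points={points}, progress={progress}")
# -> points=30, progress=0: the agent does exactly what it is rewarded to
#    do, and none of what its designers were hoping for.
```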
Author - Mauro Sicard

CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.