The Coming Wave explores how AI and biotech advances will radically transform human society.
The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.
TECHNOLOGY: The application of scientific knowledge (in the broadest possible sense) to produce tools or practical outcomes.
Today, AI systems can almost perfectly recognize faces and objects. We take speech-to-text transcription and instant language translation for granted. AI can navigate roads and traffic well enough to drive autonomously in some settings. Based on a few simple prompts, a new generation of AI models can generate novel images and compose text with extraordinary levels of detail and coherence. AI systems can produce synthetic voices with uncanny realism and compose music of stunning beauty. Even in more challenging domains, ones long thought to be uniquely suited to human capabilities like long-term planning, imagination, and simulation of complex ideas, progress leaps forward.
I love technology. It’s been the engine of progress and a cause for us to be proud and excited about humanity’s achievements.
The irony of general-purpose technologies is that, before long, they become invisible and we take them for granted.
Unsurprisingly, consumer technologies exhibit a similar trend. Alexander Graham Bell introduced the telephone in 1876. By 1900, America had 600,000 telephones. Ten years later there were 5.8 million. Today America has many more telephones than people.
Uber was impossible without the smartphone, which itself was enabled by GPS, which was enabled by satellites, which were enabled by rockets, which were enabled by combustion techniques, which were enabled by language and fire.
Since the early 1970s the number of transistors per chip has increased ten-million-fold.
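As a quick sanity check on that figure (my arithmetic, not the book’s), a ten-million-fold increase over roughly five decades works out to a doubling about every two years, which is Moore’s law:

```python
import math

# A ten-million-fold increase in transistors per chip implies
# how many doublings, and what doubling time?
growth = 10_000_000          # ten-million-fold
years = 2023 - 1971          # roughly "since the early 1970s"

doublings = math.log2(growth)         # ~23.3 doublings
doubling_time = years / doublings     # ~2.2 years per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} years")
```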
And of course this rise in computational power underpinned a flowering of devices, applications, and users. In the early 1970s there were about half a million computers. Back in 1983, only 562 computers total were connected to the primordial internet. Now the number of computers, smartphones, and connected devices is estimated at 14 billion. It took smartphones a few years to go from niche product to utterly essential item for two-thirds of the planet.
The Luddites were no more successful at stopping new industrial technologies than horse owners and carriage makers were at preventing cars. Where there is demand, technology always breaks out, finds traction, builds users.
Few societies have ever successfully removed themselves from the technological frontier; doing so usually either is part of a collapse or precipitates one. There is no realistic way to pull back.
And yet none of that seems to matter. It might take time, but the pattern is unmistakable: proliferating, cheaper, and more efficient technologies, wave upon wave of them. As long as a technology is useful, desirable, affordable, accessible, and unsurpassed, it survives and spreads and those features compound. While technology doesn’t tell us when, or how, or whether to walk through the doors it opens, sooner or later we do seem to walk through them.
For the first time core components of our technological ecosystem directly address two foundational properties of our world: intelligence and life. In other words, technology is undergoing a phase transition. No longer simply a tool, it’s going to engineer life and rival—and surpass—our own intelligence.
Techniques now let much smaller models match the performance of far larger ones. At Inflection AI we can reach GPT-3-level language model performance with a system just one twenty-fifth the size. We have a model that beats Google’s 540-billion-parameter PaLM on all the main academic benchmarks, but is six times smaller. Or look at DeepMind’s Chinchilla model, competitive with the very best large models, which has four times fewer parameters than its Gopher model, but instead uses more training data. At the other end of the spectrum, you can now create a nanoLLM based on just three hundred lines of code capable of generating fairly plausible imitations of Shakespeare. In short, AI increasingly does more with less.
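For readers who want the math behind the Chinchilla point: the result came from fitting model loss as a function of parameter count N and training tokens D. The constants below are the fitted values reported in the Chinchilla paper (Hoffmann et al., 2022), not figures from the book, so treat this as a sketch under that assumption:

```python
# Parametric loss fit from Hoffmann et al. (2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N = parameters and D = training tokens.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

gopher = predicted_loss(280e9, 300e9)       # big model, less data
chinchilla = predicted_loss(70e9, 1.4e12)   # 4x smaller, ~4.7x more data

print(f"Gopher ~{gopher:.2f}, Chinchilla ~{chinchilla:.2f}")
# The smaller model gets the lower predicted loss: more with less.
```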
In the words of John McCarthy, who coined the term “artificial intelligence”: “As soon as it works, no one calls it AI anymore.” AI is—as those of us building it like to joke—“what computers can’t do.” Once they can, it’s just software.
For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do. Focus on that, and the real challenge comes into view: systems can do more, much more, with every passing day.
Put simply, passing a Modern Turing Test would involve something like the following: an AI being able to successfully act on the instruction “Go make $1 million on Amazon in a few months with just a $100,000 investment.” It might research the web to look at what’s trending, finding what’s hot and what’s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller’s listing; and continually update marketing materials and product designs based on buyer feedback. Aside from the legal requirements of registering as a business on the marketplace and getting a bank account, all of this seems to me eminently doable. I think it will be done with a few minor human interventions within the next year, and probably fully autonomously within three to five years.
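To make the shape of such an agent concrete, here is a toy, runnable sketch of the loop it might execute. Every “tool” is a stub returning fake numbers; nothing here touches a real marketplace, and the structure (plan, act through tools, fold feedback into the next plan) is the only point:

```python
from dataclasses import dataclass, field
import random

@dataclass
class State:
    cash: float
    listings: list = field(default_factory=list)

def research_trends() -> str:
    # stub for: browse the web, find what's hot on the marketplace
    return random.choice(["desk lamp", "water bottle", "phone stand"])

def source_product(niche: str, spend: float) -> dict:
    # stub for: generate designs, email a manufacturer, agree on a contract
    return {"niche": niche, "unit_cost": spend / 100, "units": 100}

def sell(listing: dict) -> float:
    # stub for: run the listing, collect revenue, read buyer feedback
    sold = random.randint(0, listing["units"])
    listing["units"] -= sold
    return sold * listing["unit_cost"] * random.uniform(1.2, 3.0)

def run_agent(budget: float, days: int) -> float:
    state = State(cash=budget)
    for _ in range(days):
        if state.cash > 1_000:               # crude "plan": restock when flush
            spend = state.cash * 0.2
            state.cash -= spend
            state.listings.append(source_product(research_trends(), spend))
        state.cash += sum(sell(l) for l in state.listings)
    return state.cash - budget               # profit (or loss)

print(f"profit: ${run_agent(100_000, 90):,.0f}")
```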
AI is far deeper and more powerful than just another technology. The risk isn’t in overhyping it; it’s rather in missing the magnitude of the coming wave. It’s not just a tool or platform but a transformative meta-technology, the technology behind technology and everything else, itself a maker of tools and platforms, not just a system but a generator of systems of any and all kinds. Step back and consider what’s happening on the scale of a decade or a century. We really are at a turning point in the history of humanity.
Genetic engineering has embraced the do-it-yourself ethos that once defined digital start-ups and led to such an explosion of creativity and potential in the early days of the internet. You can now buy a benchtop DNA synthesizer (see the next section) for as little as $25,000 and use it as you wish, without restriction or oversight, at home in your bio-garage.
Farming robots aren’t just coming. They’re here. From drones watching livestock to precision irrigation rigs to small mobile robots patrolling vast indoor farms, from seeding to harvesting, picking to palletizing, watering tomatoes to tracking and herding cattle, the reality of the food we eat today is that it increasingly comes from a world of robots, driven by AI, currently being rolled out and scaled up.
Just as Sputnik eventually put the United States on course to be a superpower in rocketry, space technology, computing, and all their military and civilian applications, so something similar is now taking place in China. AlphaGo was quickly labeled China’s Sputnik moment for AI.
Large language models are still seen as cutting-edge, yet there is no great magic or hidden state secret to them. Access to computation is likely the biggest bottleneck, but plenty of services exist to make it happen. The same goes for CRISPR or DNA synthesis.
Technology, as in the case of food supply, is a vital part of addressing the challenges humanity inevitably faces today and will face tomorrow. We pursue new technologies, including those in the coming wave, not just because we want them, but because, at a fundamental level, we need them.
A school of naive techno-solutionism sees technology as the answer to all of the world’s problems. Alone, it’s not. How it is created, used, owned, and managed all make a difference. No one should pretend that technology is a near-magical answer to something as multifaceted and immense as climate change. But the idea that we can meet the century’s defining challenges without new technologies is completely fanciful. It’s also worth remembering that the technologies of the wave will make life easier, healthier, more productive, and more enjoyable for billions. They will save time, cost, hassle, and millions of lives. The significance of this should not be trivialized or forgotten amid the uncertainty.
Everything leaks. Everything is copied, iterated, improved. And because everyone is watching and learning from everyone else, with so many people all scratching around in the same areas, someone is inevitably going to figure out the next big breakthrough. And they will have no hope of containing it, for even if they do, someone else will come behind them and uncover the same insight or find an adjacent way of doing the same thing; they will see the strategic potential or profit or prestige and go after it. This is why we won’t say no. This is why the coming wave is coming, why containing it is such a challenge.
Companies like Stability AI and Hugging Face accelerate distributed, decentralized forms of AI. Techniques like CRISPR make biological experimentation easier, meaning biohackers in their garages can tinker at the absolute frontier of science. Ultimately, sharing or copying DNA or the code of a large language model is trivial. Openness is the default, imitations are endemic, cost curves relentlessly go down, and barriers to access crumble. Exponential capabilities are given to anyone who wants them.
Our present suite of technologies is in many ways remarkable, but there is little sign that it can be sustainably rolled out to support more than eight billion people at levels those in developed countries take for granted. Unpalatable as it is to some, it’s worth repeating: solving problems like climate change, or maintaining rising living and health-care standards, or improving education and opportunity is not going to happen without delivering new technologies as part of the package. Pausing technological development, assuming it was possible, would in one sense lead to safety. It would for a start limit the introduction of new catastrophic risks. But it wouldn’t mean successfully avoiding dystopia. Instead, as the unsustainability of twenty-first-century societies began to tell, it would simply deliver another form of dystopia. Without new technologies, sooner or later everything stagnates, and possibly collapses altogether. Over the next century, the global population will start falling, in some countries precipitously. As the ratio of workers to retirees shifts and the labor force dwindles, economies will simply not be able to function at their present levels. In other words, without new technologies it will be impossible to maintain living standards.
“For progress there is no cure,” John von Neumann writes. “Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration.”
Power Dynamics & Social Impact
This is an argument I have made many times over the last decade behind closed doors, but as the impacts become ever more unignorable, it’s time that I make the case publicly.
Spend time in tech or policy circles, and it quickly becomes obvious that head-in-the-sand is the default ideology. To believe and act otherwise risks becoming so crippled by fear of and outrage against enormous, inexorable forces that everything feels futile.
In fact early enthusiasts for automobiles argued for their environmental benefits: engines would rid the streets of mountains of horse dung that spread dirt and disease across urban areas. They had no conception of global warming.
Understanding technology is, in part, about trying to understand its unintended consequences, to predict not just positive spillovers but “revenge effects.”
Luddites, the groups that violently rejected industrial techniques, are not the exception to the arrival of new technologies; they are the norm.
EleutherAI, a grassroots coalition of independent researchers, has made a series of large language models completely open-source, readily available to hundreds of thousands of users. Meta has open-sourced—“democratized,” in its own words—models so large that just months earlier they were state-of-the-art. Even when that isn’t the intention, advanced models can and do leak. Meta’s LLaMA system was meant to be restricted, but was soon available for download by anyone through BitTorrent. Within days someone had found a way of running it (slowly) on a $50 computer. This ease of access and ability to adapt and customize, often in a matter of weeks, is a prominent feature of the coming wave.
Success would have major societal repercussions. At the same time, cognitive, aesthetic, physical, and performance-related enhancements are also plausible and would be as disruptive and reviled as they are desired. Either way, serious physical self-modifications are going to happen. Initial work suggests memory can be improved and muscle strength enhanced. It won’t be long before “gene doping” becomes a live issue in sports, education, and professional life. Laws governing clinical trials and experiments hit a gray area when it comes to self-administration. Experimenting on others is clearly off-limits, but experimenting on yourself? As with many other elements of frontier technologies, it’s a legally and morally ill-defined space.
Something had changed. If Seoul offered a hint, Wuzhen brought it home. As the dust settled, it became clear AlphaGo was part of a much bigger story than one trophy, system, or company; it was that of great powers engaging in a new and dangerous game of technological competition—and a series of overwhelmingly powerful and interlocking incentives that ensure the coming wave really is coming.
Countless friends and colleagues in Washington and Brussels, in government, in think tanks, and in academia would all trot out the same infuriating line: “Even if we are not actually in an arms race, we must assume ‘they’ think we are, and therefore we must ourselves race to achieve a decisive strategic advantage since this new technological wave might completely rebalance global power.” This attitude becomes a self-fulfilling prophecy.
Spend enough time in technical environments and, despite all the talk about ethics and social responsibility, you will come to recognize the prevalence of this view, even when facing technologies of extreme power. I have seen it many times, and I’d probably be lying if I said I haven’t succumbed to it myself on occasion as well.
The political order that fostered rising wealth, better living standards, growing education, science, and technology, a world tending toward peace, is now under immense strain, destabilized in part by the very forces it helped engender. The full implications are sprawling and hard to fathom, but to me they indicate a future where the challenge of containment is harder than ever.
The idea that technology alone can solve social and political problems is a dangerous delusion. But the idea that they can be solved without technology is also wrongheaded.
I’m British, born and raised in London, but one side of my family is Syrian. My family has been caught up in the terrible war suffered by that country in recent years. I know well what it looks like when states fail, and to put it crudely, it’s unimaginably bad. Horrific. And anyone who thinks what happened in Syria could never happen “here” is kidding themselves; people are people wherever they are. Our system of nation-states isn’t perfect, far from it. Nonetheless, we must do everything to bolster and protect it. This book, in part, is my attempt to rally to its defense.
Global living conditions are objectively better today than at any time in the past. We take running water and plentiful food supplies for granted. Most people enjoy warmth and shelter all year round. Literacy rates, life expectancy, and gender equality sit at all-time highs. The sum of thousands of years of human scholarship and inquiry is available at the touch of a button. For most people in developed countries, life is marked by an ease and abundance that would have seemed unbelievable in bygone eras. And yet, under the surface, there’s a nagging feeling that something isn’t quite right.
That so many people profoundly feel society is failing is itself a problem: Distrust breeds negativity and apathy. People decline to vote.
These are especially worrying trends when you consider persistent relationships between social immobility, widening inequality, and political violence. Across data from more than one hundred countries, evidence suggests that the lower a country’s social mobility, the more it experiences upheavals like riots, strikes, assassinations, revolutionary campaigns, and civil wars. When people feel stuck, that others are unfairly hogging the rewards, they get angry.
It would take a brave, or possibly delusional, person to argue that all is well, that there are not serious forces of populism, anger, and dysfunction raging across societies—all despite the highest living standards the world has ever known. This makes containment far more complicated. Forming national and international consensus and establishing new norms around fast-moving technologies are already steep challenges. How can we hope to do this when our baseline mode seems to be instability?
A meta-analysis published in the journal Nature reviewed the results of nearly five hundred studies, concluding there is a clear correlation between growing use of digital media and rising distrust in politics, populist movements, hate, and polarization. Correlation may not be causation, but this systematic review throws up “clear evidence of serious threats to democracy” coming from new technologies.
Power is “the ability or capacity to do something or act in a particular way;…to direct or influence the behavior of others or the course of events.” It’s the mechanical or electrical energy that underwrites civilization.
Technology is ultimately political because technology is a form of power. And perhaps the single overriding characteristic of the coming wave is that it will democratize access to power.
Today, no matter how wealthy you are, you simply cannot buy a more powerful smartphone than is available to billions of people. This phenomenal achievement of civilization is too often overlooked. In the next decade, access to ACIs (artificial capable intelligences) will follow the same trend. Those same billions will soon have broadly equal access to the best lawyer, doctor, strategist, designer, coach, executive assistant, negotiator, and so on. Everyone will have a world-class team on their side and in their corner.
Democratizing access necessarily means democratizing risk.
Imagine a huge cache of documents leaked from a company. A legal AI might parse it against multiple legal systems, figure out every possible infraction, and then hit that company with multiple crippling lawsuits around the world at the same time.
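As a toy illustration of that pipeline, with keyword matching standing in for what would really be retrieval plus legal-domain models (the rule table is invented):

```python
# Scan leaked documents against rule sets from several jurisdictions
# and collect candidate infractions to pursue.
RULES = {
    "EU": {"data transfer": "GDPR Art. 44", "price fixing": "TFEU Art. 101"},
    "US": {"price fixing": "Sherman Act §1", "bribe": "FCPA"},
}

def find_infractions(documents: list[str]) -> list[tuple[str, str, str]]:
    hits = []
    for doc in documents:
        for jurisdiction, rules in RULES.items():
            for phrase, statute in rules.items():
                if phrase in doc.lower():
                    hits.append((jurisdiction, statute, doc[:40]))
    return hits

leak = ["Internal memo: the price fixing arrangement continues in Q3..."]
print(find_infractions(leak))
```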
While small changes in technology can fundamentally alter the balance of power, trying to predict exactly how, decades into the future, is incredibly difficult. Exponential technologies amplify everyone and everything. And that creates seemingly contradictory trends. Power is both concentrated and dispersed. Incumbents are both strengthened and weakened. Nation-states are both more fragile and at greater risk of slipping into abuses of unchecked power.
This ungovernable “post-sovereign” world, in the words of the political scientist Wendy Brown, will go far beyond a sense of near-term fragility; it will be instead a long-term macro-trend toward deep instability grinding away over decades. The first result will be massive new concentrations of power and wealth that reorder society.
PEOPLE OFTEN LIKE TO measure progress in AI by comparing it with how well an individual human can perform a certain task. Researchers talk about achieving superhuman performance in language translation, or on real-world tasks like driving. But what this misses is that the most powerful forces in the world are actually groups of individuals coordinating to achieve shared goals. Organizations too are a kind of intelligence. Companies, militaries, bureaucracies, even markets—these are artificial intelligences, aggregating and processing huge amounts of data, organizing themselves around specific goals, building mechanisms to get better and better at achieving those goals. Indeed, machine intelligence resembles a massive bureaucracy far more than it does a human mind. When we talk about something like AI having an enormous impact on the world, it’s worth bearing in mind just how far-reaching these old-fashioned AIs are.
Put all the inequalities resulting from concentration together, and it adds up to another great acceleration and structural deepening of an existing fracture. Little wonder there is talk of neo- or techno-feudalism—a direct challenge to the social order, this time built on something beyond even stirrups.
YOUR SMART SPEAKER WAKES you up. Immediately you turn to your phone and check your emails. Your smart watch tells you you’ve had a normal night’s sleep and your heart rate is average for the morning. Already a distant organization knows, in theory, what time you are awake, how you are feeling, and what you are looking at. You leave the house and head to the office, your phone tracking your movements, logging the keystrokes on your text messages and the podcast you listen to. On the way, and throughout the day, you are captured on CCTV hundreds of times. After all, this city has at least one camera for every ten people, maybe many more than that. When you swipe in at the office, the system notes your time of entry. Software installed on your computer monitors productivity down to eye movements.
The only step left is bringing these disparate databases together into a single, integrated system: a perfect twenty-first-century surveillance apparatus. The preeminent example is, of course, China. That’s hardly news, but what’s become clear is how advanced and ambitious the party’s program already is, let alone where it might end up in twenty or thirty years.
This heralds a colossal redistribution of power away from existing centers. Imagine a future where small groups—whether in failing states like Lebanon or in off-grid nomad camps in New Mexico—provide AI-empowered services like credit unions, schools, and health care, services at the heart of the community often reliant on scale or the state. Where the chance to set the terms of society at a micro level becomes irresistible: come to our boutique school and avoid critical race theory forever, or boycott the evil financial system and use our DeFi product.
ACI and synthetic biology empower Extinction Rebellion as much as the Dow Jones megacorp; the microstate with a charismatic leader as much as a lumbering giant. While some advantages of size may be augmented, they may also be nullified. Ask yourself what happens to already fraying states if every sect, separatist movement, charitable foundation, and social network, every zealot and xenophobe, every populist conspiracy theory, political party, or even mafia, drug cartel, or terrorist group has their shot at state building. The disenfranchised will simply re-enfranchise themselves—on their own terms. Fragmentations could occur all over. What if companies themselves start down a journey of becoming states? Or cities decide to break away and gain more autonomy? What if people spend more time, money, and emotional energy in virtual worlds than the real? What happens to traditional hierarchies when tools of awesome power and expertise are as available to street children as to billionaires? It’s already a remarkable fact that corporate titans spend most of their lives working on software, like Gmail or Excel, accessible to most people on the planet. Extend that, radically, with the democratization of empowerment, when everyone on the planet has unfettered access to the most powerful technologies ever built.
For many people working in or adjacent to technology, these kinds of radical outcomes are not just unwelcome by-products; they’re the goal itself. Hyper-libertarian technologists like the PayPal founder and venture capitalist Peter Thiel celebrate a vision of the state withering away, seeing this as liberation for an overmighty species of business leaders or “sovereign individuals,” as they call themselves. A bonfire of public services, institutions, and norms is cheered on with an explicit vision where technology might “create the space for new modes of dissent and new ways to form communities not bounded by historical nation-states.” The techno-libertarian movement takes Ronald Reagan’s 1981 dictum “Government is the problem” to its logical extreme, seeing government’s many flaws but not its immense benefits, believing that its regulatory and tax functions are destructive rate limiters with few upsides—for them at least. I find it deeply depressing that some of the most powerful and privileged take such a narrow and destructive view, but it adds a further impetus to fragmentation.
And if this picture sounds too strange, paradoxical, and impossible, consider this. The coming wave will only deepen and recapitulate the exact same contradictory dynamics of the last wave. The internet does precisely this: centralizes in a few key hubs while also empowering billions of people. It creates behemoths and yet gives everyone the opportunity to join in. Social media created a few giants and a million tribes. Everyone can build a website, but there’s only one Google. Everyone can sell their own niche products, but there’s only one Amazon. And on and on. The disruption of the internet era is largely explained by this tension, this potent, combustible brew of empowerment and control.
Technology has penetrated our civilization so deeply that watching technology means watching everything. Every lab, fab, and factory, every server, every new piece of code, every string of DNA synthesized, every business and university, from every biohacker in a shack in the woods to every vast and anonymous data center. To counter calamity in the face of the unprecedented dynamics of the coming wave means an unprecedented response. It means not just watching everything but reserving the capacity to stop it and control it whenever and wherever necessary. Some will inevitably say this: centralize power to an extreme degree, build the panopticon, and tightly orchestrate every aspect of life to ensure that no pandemic or rogue AI ever happens. Steadily, many nations will convince themselves that the only way of truly ensuring this is to install the kind of blanket surveillance we saw in the last chapter: total control, backed by hard power. The door to dystopia is cracked open. Indeed, in the face of catastrophe, for some dystopia may feel like a relief. Suggestions like this remain fringe, especially in the West. However, it seems to me only a matter of time before they grow.
And on the continuum between the two there is also a chance of the worst of all worlds: scattered but repressive surveillance and control apparatuses that still don’t add up to a watertight system.
There will be no single, magic fix from a roomful of smart people in a bunker somewhere. Quite the opposite. Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face. They’re happy to opine and debate in private, less so to come out and talk about it. They are used to a world of control and order: the control of a CEO over a company, of a central banker over interest rates, of a bureaucrat over military procurement, or of a town planner over which potholes to fix.
Xi Jinping was worried. “We rely on imports for some critical devices, components, and raw materials,” the Chinese president told a group of the country’s scientists in September 2020. Ominously, the “key and core technologies” he believed so vital to China’s future and geopolitical security were “controlled by others.” Indeed, China spends more on importing chips than it does on oil.
People often ask me, given all this, why work in AI and build AI companies and tools? Aside from the huge positive contribution they can make, my answer is that I don’t just want to talk about and debate containment. I want to proactively help make it happen, on the front foot, ahead of where the technology is going. Containment needs technologists utterly focused on making it a reality.
I fully acknowledge this doesn’t make for an easy life. There’s no comfortable place here. It’s impossible not to recognize some of the paradoxes. It means people like me have to face the prospect that alongside trying to build positive tools and forestall bad outcomes, we may inadvertently accelerate the very things we’re trying to avoid, just like gain-of-function researchers with their viral experiments. Technologies I develop may well cause some harm. I will personally continue to make mistakes, despite my best efforts to learn and improve. I’ve wrestled with this point for years—hang back or get involved? The closer you are to a technology’s beating heart, the more you can affect outcomes, steer it in more positive directions, and block harmful applications. But this means also being part of what makes it a reality—for all the good and for all the harm it may do.
For the most part concerns over technology like those outlined in this book are elite pursuits, nice talking points for the business-class lounge, op-eds for bien-pensant publications, or topics for the presentation halls at Davos or TED. Most of humanity doesn’t yet worry about these things in any kind of systematic way. Off Twitter, out of the bubble, most people have very different concerns, other problems demanding attention in a fragile world. Communication around AI hasn’t always helped, tending to fall into simplistic narratives. So, if the invocation of the grand “we” is at present meaningless, it prompts an obvious follow-up: let’s build one. Throughout history change came about because people self-consciously worked for it. Popular pressure created new norms. The abolition of slavery, women’s suffrage, civil rights—these are huge moral achievements that happened because people fought hard, building broad-based coalitions that took a big claim seriously and then effected change based on it. Climate wasn’t just put on the map because people noticed the weather getting more extreme. They noticed because grassroots activists and scientists and then later (some) writers, celebrities, CEOs, and politicians agitated for meaningful change. And they acted on it out of a desire to do the right thing.
We should all get comfortable with living with contradictions in this era of exponential change and unfurling powers. Assume the worst, plan for it, give it everything. Stick doggedly to the narrow path. Get a world beyond the elites engaged and pushing. If enough people start building that elusive “we,” those glimmers of hope will become raging fires of change.
Technology should amplify the best of us, open new pathways for creativity and cooperation, work with the human grain of our lives and most precious relationships. It should make us happier and healthier, the ultimate complement to human endeavor and life well lived—but always on our terms, democratically decided, publicly debated, with benefits widely distributed. Amid the turbulence, we must never lose sight of this: a vision even the most ardent of Luddites could embrace.
Risk & Security Concerns
The fate of humanity hangs in the balance, and the decisions we make in the coming years and decades will determine whether we rise to the challenge of these technologies or fall victim to their dangers. But in this moment of uncertainty, one thing is certain: the age of advanced technology is upon us, and we must be ready to face its challenges head-on.
They could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order. They open pathways to immense AI-empowered cyberattacks, automated wars that could devastate countries, engineered pandemics, and a world subject to unexplainable and yet seemingly omnipotent forces. The likelihood of each may be small, but the possible consequences are huge. Even a slim chance of outcomes like these requires urgent attention.
Decades after their invention, the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident. Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.
In most cases, containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors. It means preserving the ability to steer waves to ensure their impact reflects our values, helps us flourish as a species, and does not introduce significant harms that outweigh their benefits.
On July 16, 1945, under the auspices of the Manhattan Project, the U.S. Army detonated a device code-named Trinity in the New Mexico desert. Weeks later a Boeing B-29 Superfortress, the Enola Gay, dropped a device code-named Little Boy containing sixty-four kilograms of uranium-235 over the city of Hiroshima, killing 140,000 people. In an instant, the world had changed. Yet from there, against the wider pattern of history, nuclear weapons did not endlessly proliferate. Nuclear weapons have been detonated only twice in wartime. To date only nine countries have acquired them.
The biggest explosion ever recorded was a test of an H-bomb called the Tsar Bomba. Detonated over a remote archipelago in the Barents Sea in 1961, the explosion created a three-mile fireball and a mushroom cloud fifty-nine miles wide. The blast was ten times more powerful than the combined total of all the conventional explosives deployed in World War II. Its scale frightened everyone. In this respect it might have actually helped. Both the United States and Russia stepped back from ramping up their weapons in the face of their sheer, horrific power. That nuclear technology remained contained was no accident; it was a conscious nonproliferation policy of the nuclear powers, helped by the fact that nuclear weapons are incredibly complex and expensive to produce.
Popular revulsion at the possibility of a thermonuclear apocalypse was a powerful motivator for signing the treaty. But these weapons have also been contained by cold calculation. Mutually assured destruction hemmed in possessors since it soon became clear that using them in anger is a quick way of ensuring your own destruction.
And in perhaps the most well-known case, nuclear catastrophe was only avoided during the Cuban missile crisis when one man, the acting Russian commodore, Vasili Arkhipov, refused to give an order to fire nuclear torpedoes. The two other officers on the submarine, convinced they were under attack, had brought the world within a split second of full-scale nuclear war.
Then the SWAT team came up with a new idea. The police department had a bomb disposal robot, the $150,000 Remotec Andros Mark 5A-1 made by Northrop Grumman. In fifteen minutes they hatched a plan to attach a large blob of C-4 explosive to its arm and send it into the building with the intention of incapacitating the shooter. The police chief, David Brown, quickly signed off on the plan. It went into action, the robot rumbling through the building, where it positioned the explosive in an adjacent room, next to a wall with the shooter on the other side. The explosive detonated, blasting apart the wall and killing the gunman. It was the first time a robot had used targeted lethal force in the United States. In Dallas, it saved the day. A horrific event was brought to a conclusion.
In the words of the security expert Audrey Kurth Cronin, “Never before have so many had access to such advanced technologies capable of inflicting death and mayhem.”
Internal research on GPT-4 concluded that it was “probably” not capable of acting autonomously or self-replicating, but within days of launch users had found ways of getting the system to ask for its own documentation and to write scripts for copying itself and taking over other machines. Early research even claimed to find “sparks of AGI” in the model, adding that it was “strikingly close to human-level performance.” Capabilities like these are now coming into view.
A paradox of the coming wave is that its technologies are largely beyond our ability to comprehend at a granular level yet still within our ability to create and use. In AI, the neural networks moving toward autonomy are, at present, not explainable.
Engineers can’t peer beneath the hood and easily explain what caused something to happen. GPT-4, AlphaGo, and the rest are black boxes, their outputs and decisions based on opaque and intricate chains of minute signals. Autonomous systems can and may be explainable, but the fact that so much of the coming wave operates at the edge of what we can understand should give us pause. We won’t always be able to predict what these autonomous systems will do next; that’s the nature of autonomy.
For a long time I objected, resisting the framing of technological progress as a zero-sum international arms race. At DeepMind, I always pushed back on references to us as a Manhattan Project for AI, not just because of the nuclear comparison, but because even the framing might initiate a series of other Manhattan Projects, feeding an arms race dynamic when close global coordination, break points, and slowdowns were needed. But the reality is that the logic of nation-states is at times painfully simple and yet utterly inevitable. In the context of a state’s national security, merely floating an idea becomes dangerous.
IN WORLD WAR II the Manhattan Project, which consumed 0.4 percent of U.S. GDP, was seen as a race against time to get the bomb before the Germans. But the Nazis had initially ruled out pursuit of nuclear weapons, considering them too expensive and speculative. The Soviets were far behind and eventually relied on extensive leaks from the United States. America had conducted an arms race against phantoms, bringing nuclear weapons into the world far earlier than under other circumstances.
Declaring an arms race is no longer a conjuring act, a self-fulfilling prophecy. The prophecy has been fulfilled. It’s here, it’s happening. It is a point so obvious it doesn’t often get mentioned: there is no central authority controlling what technologies get developed, who does it, and for what purpose; technology is an orchestra with no conductor. Yet this single fact could end up being the most significant of the twenty-first century. And if the phrase “arms race” triggers worry, that’s with good reason. There could hardly be a more precarious foundation for a set of escalating technologies than the perception (and reality) of a zero-sum competition built on fear. There are, however, other, more positive drivers of technology to consider.
The NHS had been hit by a ransomware attack. It was called WannaCry, and its scale was immense. Ransomware works by compromising a system to encrypt and thus lock down access to key files and capabilities. Cyberattackers typically demand a ransom in exchange for liberating a captive system. The NHS wasn’t WannaCry’s only target. Exploiting a vulnerability in older Microsoft systems, hackers had found a way to grind swaths of the digital world to a halt, including organizations like Deutsche Bahn, Telefónica, FedEx, Hitachi, even the Chinese Ministry of Public Security. WannaCry spread as a self-propagating “worm,” replicating and transporting itself to infect a quarter of a million computers across 150 countries in just one day. For a few hours after the attack much of the digital world teetered, held for ransom by a distant, faceless assailant. The ensuing damage cost up to $8 billion, but the implications were even graver. The WannaCry attack exposed just how vulnerable the institutions whose operation we take for granted are to sophisticated cyberattacks.
Now imagine if, instead of accidentally leaving open a loophole, the hackers behind WannaCry had designed the program to systematically learn about its own vulnerabilities and repeatedly patch them. Imagine if, as it attacked, the program evolved to exploit further weaknesses. Imagine that it then started moving through every hospital, every office, every home, constantly mutating, learning. It could hit life-support systems, military infrastructure, transport signaling, the energy grid, financial databases. As it spread, imagine the program learning to detect and stop further attempts to shut it down. A weapon like this is on the horizon if not already in development.
In the words of a New York Times investigation, this was a “debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.” Mounted on a strategically parked but innocuous-looking pickup truck fitted with cameras, it was a kind of robot weapon assembled by Israeli agents. A human authorized the strike, but it was the AI that automatically adjusted the gun’s aim. Just fifteen bullets were fired, and one of the most high-profile and well-guarded people in Iran was killed in under a minute. The explosion was merely a failed attempt to hide the evidence.
Start-ups like Anduril, Shield AI, and Rebellion Defense have raised hundreds of millions of dollars to build autonomous drone networks and other military applications of AI. Complementary technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths.
Cue an “Infocalypse,” the point at which society can no longer manage a torrent of sketchy material, where the information ecosystem grounding knowledge, trust, and social cohesion, the glue holding society together, falls apart. In the words of a Brookings Institution report, ubiquitous, perfect synthetic media means “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Eventually, as some of history’s most powerful technologies percolate everywhere, those edge cases become more likely. Eventually, something will go wrong—at scales and speeds commensurate with the capabilities unleashed. The upshot of the coming wave’s four features is that, absent strong methods of containment operating at every level, catastrophic outcomes like an engineered pandemic are more possible than ever. That is unacceptable. And yet here’s the dilemma: the most secure solutions for containment are equally unacceptable, leading humanity down an authoritarian and dystopian pathway.
But just because a warning has dramatic implications isn’t good grounds to automatically reject it. The pessimism-averse complacency greeting the prospect of disaster is itself a recipe for disaster. It feels plausible, rational in its own terms, “smart” to dismiss warnings as the overblown chatter of a few weirdos, but this attitude prepares the way for its own failure. No doubt, technological risk takes us into uncertain territory. Nonetheless, all the trends point to a profusion of risk. This speculation is grounded in constantly compounding scientific and technological improvements. Those who dismiss catastrophe are, I believe, discounting the objective facts before us. After all, we are not talking here about the proliferation of motorbikes or washing machines.
Terrorists mount automatic weapons equipped with facial recognition to an autonomous drone swarm hundreds or thousands strong, each capable of quickly rebalancing from the weapon’s recoil, firing short bursts, and moving on. These drones are unleashed on a major downtown with instructions to kill a specific profile. In busy rush hour these would operate with terrifying efficiency, following an optimized route around the city. In minutes there would be an attack at far greater scale than, say, the 2008 Mumbai attacks, which saw armed terrorists roaming through city landmarks like the central train station.
A mass murderer decides to hit a huge political rally with drones, spraying devices, and a bespoke pathogen. Soon attendees become sick, then their families.
Start following chains of logic like this and myriad sequences of unnerving events unspool. AI safety researchers worry (correctly) that should something like an AGI be created, humanity would no longer control its own destiny. For the first time, we would be toppled as the dominant species in the known universe. However clever the designers, however robust the safety mechanisms, accounting for all eventualities, guaranteeing safety, is impossible. Even if it was fully aligned with human interests, a sufficiently powerful AI could potentially overwrite its programming, discarding safety and alignment features apparently built in.
Aum Shinrikyo combined an unusual degree of organization with a frightening level of ambition. They wanted to initiate World War III and a global collapse by murdering at shocking scale and began building an infrastructure to do so. On the one hand, it’s reassuring how rare organizations like Aum Shinrikyo are. Of the many terrorist incidents and other non-state-perpetrated mass killings since the 1990s, most have been carried out by disturbed loners or groups with specific political or ideological agendas. But on the other hand, this reassurance has limits. Procuring weapons of great power was previously a huge barrier to entry, helping keep catastrophe at bay. The sickening nihilism of the school shooter is bounded by the weapons they can access. The Unabomber had only homemade devices. Building and disseminating biological and chemical weapons were huge challenges for Aum Shinrikyo. As a small, fanatical coterie operating in an atmosphere of paranoid secrecy, with only limited expertise and access to materials, they made mistakes. As the coming wave matures, however, the tools of destruction will, as we’ve seen, be democratized and commoditized. They will have greater capability and adaptability, potentially operating in ways beyond human control or understanding, evolving and upgrading at speed, some of history’s greatest offensive powers available widely.
IT’S TEMPTING TO DISMISS all these dark risk scenarios as the distant daydreams of people who grew up reading too much science fiction, those biased toward catastrophism. Tempting, but a mistake. Regardless of where we are with BSL-4 protocols or regulatory proposals or technical publications on the AI alignment problem, those incentives grind away, the technologies keep developing and diffusing. This is not the stuff of speculative novels and Netflix series. This is real, being worked on right this second in offices and labs around the world.
Trading off liberty and security is an ancient dilemma. It was there in the foundational account of the Leviathan state from Thomas Hobbes. It has never gone away. To be sure, this is often a complex and multidimensional relationship, but the coming wave raises the stakes to a new pitch. What level of societal control is appropriate to stopping an engineered pandemic? What level of interference in other countries is appropriate toward the same end? The consequences for liberty, sovereignty, and privacy have never been so potentially painful.
If this book feels contradictory in its attitude toward technology, part positive and part foreboding, that’s because such a contradictory view is the most honest assessment of where we are. Our great-grandparents would be astonished at the abundance of our world. But they would also be astonished at its fragility and perils. With the coming wave, we face a real threat, a cascade of potentially disastrous consequences—yes, even an existential risk to the species. Technology is the best and worst of us. There isn’t a neat one-sided approach that does it justice. The only coherent approach to technology is to see both sides at the same time.
The dilemma should be a pressing call to action. But over the years it’s become obvious that most people find this a lot to take in. I absolutely get it. It barely seems real on first encounter. In all those many discussions about AI and regulation, I’ve been struck by how hard it is, compared with a host of existing or looming challenges, to convey exactly why the risks in this book need to be taken seriously, why they aren’t just nearly irrelevant tail risks or the province of science fiction.
How do we find common ground amid competing agendas? China and the United States don’t share a vision of restricting development of AI; Meta wouldn’t share the view that social media is part of the problem; AI researchers and virologists believe their work is a critical part not of causing catastrophe but of understanding and averting it. “Technology” is not, on the face of it, a problem in the same sense as a heating planet. And yet it might be. The first step is recognition. We need to calmly acknowledge that the wave is coming and the dilemma is, absent a jarring change in course, unavoidable.
As the century wears on, the lesson of the Cold War will have to be relearned: there is no path to technological safety without working with your adversaries.
The risks of failure scarcely bear thinking about, but face them we must. The prize, though, is awesome: nothing less than the secure, long-term flourishing of our precious species. That is worth fighting for.
Governance & Regulation
Before more CRISPR babies are born, the world will likely need to grapple with iterated embryo selection that could also select for desired traits.
There is only one entity that could, perhaps, provide the solution, one that anchors our political system and takes final responsibility for the technologies society produces: the nation-state. But there’s a problem. States are already facing massive strain, and the coming wave looks set to make things much more complicated. The consequences of this collision will shape the rest of the century.
DEMOCRACIES ARE BUILT ON trust. People need to trust that government officials, militaries, and other elites will not abuse their dominant positions. Everyone relies on the trust that taxes will be paid, rules honored, the interests of the whole put ahead of individuals. Without trust, from the ballot box to the tax return, from the local council to the judiciary, societies are in trouble. Trust in government, particularly in America, has collapsed. Postwar presidential administrations like those of Eisenhower and Johnson were trusted to do “what is right” by more than 70 percent of Americans, according to a Pew survey. For recent presidents such as Obama, Trump, and Biden, this measure of confidence has cratered, all falling below 20 percent.
Leaders will need to take bold actions without precedent, trading off short-term gain for long-term benefit. Responding effectively to one of the most far-reaching and transformative events in history will require mature, stable, and most of all trusted governments to perform at their best. States that work really, really well. That is what it will take to ensure that the coming wave delivers the great benefits it promises. It’s an incredibly tall order.
If only it were that simple. Saying “Regulation!” in the face of awesome technological change is the easy part. It’s also the classic pessimism-averse answer. It’s a simple way to shrug off the problem. On paper regulation looks enticing, even obvious and straightforward; suggesting it lets people sound smart, concerned, and even relieved. The unspoken implication being that it’s solvable, but it’s someone else’s problem. Look deeper, though, and the fissures become evident.
Technology evolves week by week. Drafting and passing legislation takes years. Consider the arrival of a new product on the market like Ring doorbells. Ring put a camera on your front door and connected it to your phone. The product was adopted so quickly and is now so widespread that it has fundamentally changed the nature of what needs regulating; suddenly your average suburban street went from relatively private space to surveilled and recorded. By the time the regulation conversation caught up, Ring had already created an extensive network of cameras, amassing data and images from the front doors of people around the world. Twenty years on from the dawn of social media, there’s no consistent approach to the emergence of a powerful new platform (and besides, is privacy, polarization, monopoly, foreign ownership, or mental health the core problem—or all of the above?). The coming wave will worsen this dynamic.
Truth is, though, novel threats are just exceptionally difficult for any government to navigate. That’s not a flaw with the idea of government; it’s an assessment of the scale of the challenge before us. When they are faced with something like an ACI that can pass my version of the Modern Turing Test, the response of even the most thoughtful, farsighted bureaucracies will resemble the response to COVID. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate. This, meanwhile, is an age of surprises.
The main monitor of bioweapons, for example, the Biological Weapons Convention, has a budget of just $1.4 million and only four full-time employees—fewer than the average McDonald’s.
APIs that let others use foundational AI services should not be blindly open, but rather come with “know your customer” checks, as with, say, portions of the banking industry.
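A minimal sketch of what such a gate could look like, assuming an invented tier scheme (real KYC for AI APIs would hinge on identity proofing, usage monitoring, and capability-based access, not a hardcoded table):

```python
# Hypothetical KYC gate sitting in front of a foundation-model API.
VERIFIED_CUSTOMERS = {"acme-labs": "research", "city-hospital": "clinical"}
RESTRICTED_CAPABILITIES = {"protein-design", "autonomous-agents"}

def handle_request(api_key: str, capability: str, prompt: str) -> str:
    tier = VERIFIED_CUSTOMERS.get(api_key)
    if tier is None:
        raise PermissionError("unverified customer: KYC check required")
    if capability in RESTRICTED_CAPABILITIES and tier != "research":
        raise PermissionError(f"'{capability}' requires a vetted tier")
    return f"[model output for {capability}]"   # stand-in for the model call

print(handle_request("acme-labs", "protein-design", "..."))
```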
U.S. citizens working on semiconductors with Chinese companies are faced with a choice: keep their jobs and lose American citizenship, or immediately quit.
Human Nature & Psychology
PESSIMISM AVERSION: The tendency for people, particularly elites, to ignore, downplay, or reject narratives they see as overly negative. A variant of optimism bias, it colors much of the debate around the future, especially in technology circles.
What do you see? Furniture? Buildings? Phones? Food? A landscaped park? Almost every object in your line of sight has, in all likelihood, been created or altered by human intelligence. Language—the foundation of our social interactions, of our cultures, of our political organizations, and perhaps of what it means to be human—is another product, and driver, of our intelligence. Every principle and abstract concept, every small creative endeavor or project, every encounter in your life, has been mediated by our species’ unique and endlessly complex capacity for imagination, creativity, and reason. Human ingenuity is an astonishing thing.
It’s no exaggeration to say the entirety of the human world depends on either living systems or our intelligence.
This is what I have come to call the pessimism-aversion trap: the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way. Pretty much everyone has some version of this reaction, and the consequence is that we overlook a number of critical trends unfolding right before our eyes. It’s almost an innate physiological response. Our species is not wired to truly grapple with transformation at this scale, let alone the potential that technology might fail us in this way. I’ve experienced this feeling throughout my career, and I’ve seen many, many others have the same visceral response. Confronting this feeling is one of the purposes of this book: to take a cold, hard look at the facts, however uncomfortable.
We are not just the creators of our tools. We are, down to the biological, the anatomical level, a product of them.
There is no such thing as a non-technological human being.
Technologies are ideas, and ideas cannot be eliminated.
Little is ultimately more valuable than intelligence.
Scientists and technologists are all too human. They crave status, success, and a legacy. They want to be the first and best and recognized as such. They’re competitive and clever with a carefully nurtured sense of their place in the world and in history. They love pushing boundaries, sometimes for money but often for glory, sometimes just for its own sake. AI scientists and engineers are among the best-paid people in the world, and yet what really gets them out of bed is the prospect of being first to a breakthrough or seeing their name on a landmark paper. Love them or hate them, technology magnates and entrepreneurs are viewed as unique lodestars of power, wealth, vision, and sheer will.
The Silicon Valley mythos of the heroic start-up founder single-handedly building an empire in the face of a hostile and ignorant world persists for a reason. It is the self-image technologists too often still aspire to, an archetype to emulate, a fantasy that still drives new technologies.
Psychologically, none of this feels present. Our prehistoric brains are generally hopeless at dealing with amorphous threats like these. However, over the last decade or so, the challenge of climate change has come into better focus. Although the world still spews out increasing amounts of CO2, scientists everywhere can measure CO2 parts per million (ppm) in the atmosphere. As recently as the 1970s, global atmospheric carbon was in the low 300s ppm. In 2022 it was at 420 ppm. Whether in Beijing, Berlin, or Burundi, whether an oil major or a family farm, everyone can see, objectively, what is happening to the climate. Data brings clarity.
Economic & Industrial Transformation
The Nobel Prize–winning economist William Nordhaus calculated that the same amount of labor that once produced fifty-four minutes of quality light in the eighteenth century now produces more than fifty years of light. As a result, the average person in the twenty-first century has access to approximately 438,000 times more “lumen-hours” per year than our eighteenth-century cousins.
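A quick back-of-envelope check makes the scale of that claim tangible. Treating "more than fifty years" as fifty years, the implied improvement factor lands in the same range as the lumen-hours figure:

```python
# Back-of-envelope check of the Nordhaus comparison, treating "more than
# fifty years" as fifty years of continuous light.
minutes_then = 54
minutes_now = 50 * 365.25 * 24 * 60  # fifty years, in minutes

print(f"improvement factor: {minutes_now / minutes_then:,.0f}x")
# ~487,000x: the same order of magnitude as the 438,000x lumen-hours figure.
```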
In the era of abundant venture capital, distinguishing shiny objects from genuine breakthroughs is not so straightforward.
Think of these AI systems as proto–to-do lists that do themselves, enabling the automation of a wide range of tasks. We’ll come to robots later, but the truth is that for a vast range of tasks in the world economy today all you need is access to a computer; most of global GDP is mediated in some way through screen-based interfaces amenable to an AI.
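A toy sketch of that "to-do list that does itself" idea: a loop that keeps asking a planner for the next step and executes it. Both helper functions below are invented placeholders, not a real agent framework or model call.

```python
# Toy sketch of a "to-do list that does itself": an agent loop that keeps
# asking a planner for the next step and executes it. Both helpers are
# invented placeholders, not a real agent framework or model call.

def plan_next_action(goal: str, done: list[str]) -> str | None:
    """Placeholder for a model call that proposes the next step."""
    steps = ["open expenses portal", "fill in claim form", "submit claim"]
    remaining = [s for s in steps if s not in done]
    return remaining[0] if remaining else None

def execute(action: str) -> None:
    """Placeholder for driving a screen-based interface."""
    print(f"executing: {action}")

def run_agent(goal: str) -> None:
    done: list[str] = []
    while (action := plan_next_action(goal, done)) is not None:
        execute(action)
        done.append(action)

run_agent("file an insurance expense claim")
```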
The vast petrochemical industry could see a challenge from young start-ups like Solugen, whose Bioforge is an attempt to build a carbon-negative factory; it would produce a wide range of chemicals and commodities, from cleaning products to food additives to concrete, all while pulling carbon out of the atmosphere. Their process is essentially low-energy, low-waste bio-manufacturing at industrial scale, built on AI and biotech. Another company, LanzaTech, harnesses genetically modified bacteria to convert waste CO2 from steel mill production into widely used industrial chemicals. This kind of synthetic biology is helping to build a more sustainable “circular” economy. Next-generation DNA printers will produce DNA with an increasing degree of precision. If improvements can be made in not only expressing that DNA but then using it to genetically engineer a diverse array of new organisms, automating and scaling the processes, a device or set of devices could, theoretically, produce an enormous range of biological materials and constructions using only a few basic inputs.
(Life + Intelligence) x Energy = Modern Civilization
Renewable energy will become the largest single source of electricity generation by 2027. This shift is occurring at an unprecedented pace, with more renewable capacity set to be added in the next five years than in the previous two decades. Solar power in particular is experiencing rapid growth, with costs falling significantly. In 2000, solar energy cost $4.88 per watt, but by 2019 it had fallen to just 38 cents.
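Those two data points imply roughly a thirteenfold drop, or about a 13 percent cost decline every year, as a quick calculation shows:

```python
# The quoted data points imply the per-year rate of cost decline.
start, end, years = 4.88, 0.38, 19  # $/watt in 2000 and 2019
annual_decline = 1 - (end / start) ** (1 / years)
print(f"total drop: {start / end:.1f}x, about {annual_decline:.1%} per year")
```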
THE RAILWAY BOOM OF the 1840s was “arguably the greatest bubble in history.” But in the annals of technology, it is more norm than exception. There was nothing inevitable about the coming of the railways, but there was something inevitable about the chance to make money.
The truth is that the curiosity of academic researchers or the will of motivated governments is insufficient to propel new breakthroughs into the hands of billions of consumers. Science has to be converted into useful and desirable products for it to truly spread far and wide. Put simply: most technology is made to earn money.
This engine has created a world economy worth $85 trillion—and counting. From the pioneers of the Industrial Revolution to the Silicon Valley entrepreneurs of today, technology has a magnetic incentive in the form of serious financial rewards. The coming wave represents the greatest economic prize in history.
WHEN A CORPORATION AUTOMATES insurance claims or adopts a new manufacturing technique, it creates efficiency savings or improves the product, boosting profits and attracting new customers. Once an innovation delivers a competitive advantage like this, everyone must either adopt it, leapfrog it, switch focus, or lose market share and eventually go bust. The attitude around this dynamic in technology businesses in particular is simple and ruthless: build the next generation of technology or be destroyed.
PwC forecasts AI will add $15.7 trillion to the global economy by 2030. McKinsey forecasts a $4 trillion boost from biotech over the same period. Boosting world robot installations 30 percent above a baseline forecast could unleash a $5 trillion dividend, a sum bigger than Germany’s entire output. Especially when other sources of growth are increasingly scarce, these are strong incentives. With profits this high, interrupting the gold rush is likely to be incredibly challenging.
Electric vehicles may not emit carbon when being driven, but they are resource hungry nonetheless: materials for just one EV require extracting around 225 tons of finite raw materials, demand for which is already spiking unsustainably.
Yes, it’s almost certain that many new job categories will be created. Who would have thought that “influencer” would become a highly sought-after role? Or imagined that in 2023 people would be working as “prompt engineers”—nontechnical programmers of large language models who become adept at coaxing out specific responses? Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or timescale to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings.
Labor markets also have immense friction in terms of skills, geography, and identity. Consider that in the last bout of deindustrialization the steelworker in Pittsburgh or the carmaker in Detroit could hardly just up sticks, retrain mid-career, and get a job as a derivatives trader in New York or a branding consultant in Seattle or a schoolteacher in Miami. If Silicon Valley or the City of London creates lots of new jobs, it doesn’t help people on the other side of the country if they don’t have the right skills or aren’t able to relocate. And if your sense of self is wedded to a particular kind of work, a new job is little consolation when you feel it demeans your dignity.
The Private Sector Job Quality Index, a measure of how many jobs provide above-average income, has plunged since 1990; it suggests that well-paying jobs as a proportion of the total have already started to fall.
To get a sense of these concentrations, consider that the combined revenues of companies in Fortune’s Global 500 are already at 44 percent of world GDP. Their total profits are larger than all but the top six countries’ annual GDPs. Companies already control the largest clusters of AI processors, the best models, the most advanced quantum computers, and the overwhelming majority of robotics capacity and IP. Unlike with rockets, satellites, and the internet, the frontier of this wave is found in corporations, not in government organizations or academic labs. Accelerate this process with the next generation of technology, and a future of corporate concentration doesn’t seem so extraordinary.
Samsung and Korea are outliers but perhaps not for much longer. Given the range of concentrated capabilities, things typically the province of governments today, like education and defense, perhaps even currency or law enforcement, could be provided by this new generation of companies. Already, for example, eBay and PayPal’s dispute resolution system handles around sixty million disagreements a year, three times as many as the entire U.S. legal system. Ninety percent of these disputes are settled using technology alone.
MODERN CIVILIZATION WRITES CHECKS only continual technological development can cash. Our entire edifice is premised on the idea of long-term economic growth. And long-term economic growth is ultimately premised on the introduction and diffusion of new technologies. Whether it’s the expectation of consuming more for less or getting ever more public service without paying more tax, or the idea that we can unsustainably degrade the environment while life keeps getting better indefinitely, the bargain—arguably the grand bargain itself—needs technology.
In the long term, though, export controls probably won’t stop China. Instead, they are pushing it down a difficult and hugely expensive but still plausible path toward domestic semiconductor capacity. If it takes hundreds of billions of dollars (and it will), China will spend it. Chinese companies are already finding ways to bypass the controls, using networks of shell and front companies and cloud computing services in third-party countries. NVIDIA, the American manufacturer of the world’s most advanced AI chips, has even tweaked its top chips so they fall just outside the sanctioned specifications. Nonetheless, the episode shows us something vital: there is at least one undeniable lever. The wave can be slowed, at least for some period of time and in some areas.
In AI, the lion’s share of the most advanced GPUs essential to the latest models are designed by one company, the American firm NVIDIA. Most of its chips are manufactured by one company, TSMC, in Taiwan, the most advanced in just a single building, the world’s most sophisticated and expensive factory. TSMC’s machinery to make these chips comes from a single supplier, the Dutch firm ASML, by far Europe’s most valuable and important tech company. ASML’s machines, which use a technique known as extreme ultraviolet lithography and produce chips at levels of astonishing atomic precision, are among the most complex manufactured goods in history. These three companies have a choke hold on cutting-edge chips, a technology so physically constrained that one estimate argues they cost up to $10 billion per kilogram.
Chips aren’t the only choke point. Industrial-scale cloud computing, too, is dominated by six major companies. For now, AGI is realistically pursued by a handful of well-resourced groups, most notably DeepMind and OpenAI. Global data traffic travels through a limited number of fiber-optic cables bunched in key pinch points (off the coast of southwest England or Singapore, for example). A crunch on the rare earth elements cobalt, niobium, and tungsten could topple entire industries. Some 80 percent of the high-quality quartz essential to things like photovoltaic panels and silicon chips comes from a single mine in North Carolina. DNA synthesizers and quantum computers are not commonplace consumer goods. Skills, too, are a choke point: the number of people working on all the frontier technologies discussed in this book is probably no more than 150,000.
Scientific Innovation & Research
As we watched from our control room, the tension was unreal. Yet as the endgame approached, that “mistaken” move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years. In just a few months, we could train algorithms to discover new knowledge and find new, seemingly superhuman insights.
AlexNet was built by the legendary researcher Geoffrey Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, at the University of Toronto. They entered the ImageNet Large Scale Visual Recognition Challenge, an annual competition designed by the Stanford professor Fei-Fei Li to focus the field’s efforts around a simple goal: identifying the primary object in an image. Each year competing teams would test their best models against one another, often beating the previous year’s submissions by no more than a single percentage point in accuracy. In 2012, AlexNet beat the previous winner by 10 percentage points. It may sound like a small improvement, but to AI researchers this kind of leap forward can make the difference between a toylike research demo and a breakthrough on the cusp of enormous real-world impact. The event that year was awash with excitement. The resulting paper by Hinton and his colleagues became one of the most frequently cited works in the history of AI research.
Researchers meanwhile see more and more evidence for “the scaling hypothesis,” which predicts that the main driver of performance is, quite simply, to go big and keep going bigger. Keep growing these models with more data, more parameters, more computation, and they’ll keep improving—potentially all the way to human-level intelligence and beyond. No one can say for sure whether this hypothesis will hold, but so far at least it has. I think that looks set to continue for the foreseeable future.
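One way to make the scaling hypothesis concrete is the loss formula fitted in the Chinchilla paper (Hoffmann et al., 2022), where loss falls smoothly as parameters N and training tokens D grow. The sketch below uses the published constants as I recall them, so treat the exact numbers as approximate.

```python
# The Chinchilla paper (Hoffmann et al., 2022) fits language-model loss as
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens. The constants below
# are the published fits, quoted from memory; treat them as approximate.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters and data together keeps pushing loss down, with
# smoothly diminishing returns: the essence of "go big and keep going."
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params, {20 * n:.0e} tokens -> loss {loss(n, 20 * n):.3f}")
```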
Genome sequencing like this turns biological information, DNA, into raw text: information humans can read and use. Complex chemical structure is rendered into a sequence of its four defining bases—A, T, C, and G.
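Once the genome is plain text, ordinary string manipulation applies. A minimal example (standard bioinformatics practice, not anything specific to the book) is computing a reverse complement:

```python
# Once DNA is just text over the alphabet {A, T, C, G}, ordinary string
# operations apply. Example: the reverse complement of a sequence.
COMPLEMENT = str.maketrans("ATCG", "TAGC")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGCGT"))  # prints ACGCAT
```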
While Moore’s law justifiably attracts considerable attention, less well known is what The Economist calls the Carlson curve: the epic collapse in costs for sequencing DNA. Thanks to ever-improving techniques, the cost of human genome sequencing fell from $1 billion in 2003 to well under $1,000 by 2022. That is, the price dropped a millionfold in under twenty years, a thousand times faster than Moore’s law. A stunning development hiding in plain sight.
Experiments that once took years are tackled by grad students in weeks. Companies like the Odin will sell you a genetic engineering kit including live frogs and crickets for $1,999, while another kit includes a mini-centrifuge, a polymerase chain reaction machine, and all the reagents and materials you need to get going.
Companies such as DNA Script are commercializing DNA printers that train and adapt enzymes to build de novo, or completely new, molecules. This capability has given rise to the new field of synthetic biology—the ability to read, edit, and now write the code of life.
In the 1960s computer chips were still largely hand built, just as—until recently—most biotech research was still a manual process, slow, unpredictable, messy in every sense. Now semiconductor fabrication is a hyperefficient atomic-scale manufacturing process churning out some of the world’s most complex products. Biotech is following a similar trajectory, only at a much earlier phase; organisms will soon be designed and produced with the precision and scale of today’s computer chips and software.
Altos Labs, which has raised $3 billion, more start-up funding than for any previous biotech venture, is one company seeking to find effective anti-aging technologies. Its chief scientist, Richard Klausner, argues, “We think we can turn back the clock” on human mortality. Focusing on techniques of “rejuvenation programming,” the company aims to reset the epigenome, chemical marks on DNA that control genes by turning them “on” and “off.” As we get older, these “flip” to wrong positions. This experimental approach aims to flip them back, reversing or arresting the aging process.
Want to make some washing detergent or a new toy or even grow a house? Just download the “recipe” and hit “go.” In the words of Elliot Hershberg, “What if we could grow what we wanted locally? What if our supply chain was just biology?”
Proteins are the building blocks of life. Your muscles and blood, hormones and hair, indeed, 75 percent of your dry body weight: all proteins. They are everywhere, coming in every conceivable form, doing myriad vital tasks, from the cords holding your bones together, to the hooks on antibodies used to catch unwanted visitors. Understand proteins, and you’ve taken a giant leap forward in understanding—and mastering—biology.
Biology’s sheer complexity opens up vast troves of data, like all those proteins, almost impossible to parse using traditional techniques. A new generation of tools has quickly become indispensable as a result. Teams are working on products that will generate new DNA sequences using only natural language instructions. Transformer models are learning the language of biology and chemistry, again discovering relationships and significance in long, complex sequences illegible to the human mind. LLMs fine-tuned on biochemical data can generate plausible candidates for new molecules and proteins, DNA and RNA sequences. They predict the structure, function, or reaction properties of compounds in simulation before these are later verified in a laboratory. The space of applications and the speed at which they can be explored is only accelerating.
Some scientists are beginning to investigate ways to plug human minds directly into computer systems. In 2019, electrodes surgically implanted in the brain let a fully paralyzed man with late-stage ALS spell out the words “I love my cool son.” Companies like Neuralink are working on brain-interfacing technology that promises to connect us directly with machines. In 2021 the company inserted three thousand filament-like electrodes, each thinner than a human hair, into a pig’s brain to monitor neuron activity. Soon it hopes to begin human trials of its N1 brain implant, while another company, Synchron, has already started human trials in Australia.
Scientists at a start-up called Cortical Labs have even grown a kind of brain in a vat (a bunch of neurons grown in vitro) and taught it to play Pong. It likely won’t be too long before neural “laces” made from carbon nanotubes plug us directly into the digital world.
Quantum computing is still very much a nascent technology, but there are huge implications when it does materialize. Its key attraction is that each additional qubit doubles a machine’s total computing power. Start adding qubits and it gets exponentially more powerful. Indeed, a relatively small number of particles could have more computing power than if the entire universe were converted into a classical computer. It’s the computational equivalent of moving from a flat, black-and-white film into full color and three dimensions, unleashing a world of algorithmic possibility.
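The doubling claim is just the arithmetic of the state space: describing n qubits classically takes 2^n complex amplitudes, as a short sketch shows.

```python
# Describing n qubits classically requires 2**n complex amplitudes,
# so every added qubit doubles the state a machine can represent.
for n in (10, 50, 300):
    print(f"{n} qubits -> 2^{n} = {2**n:.3e} amplitudes")
# At 300 qubits, 2**300 is about 10^90, more than the ~10^80 atoms in the
# observable universe: classical simulation quickly becomes hopeless.
```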
Nanomachines would work at speeds far beyond anything at our scale, delivering extraordinary outputs: an atomic-scale nanomotor, for example, could rotate forty-eight billion times a minute. Scaled up, it could power a Tesla with material equivalent in volume to about twelve grains of sand. This is a world of gossamer structures made of diamond, space suits that cling to and protect the body in all environments, a world where compilers can create anything out of a basic feedstock. A world, in short, where anything can become anything with the right atomic manipulation. The dream of the physical universe rendered a completely malleable platform, the plaything of tiny, dexterous nanobots or effortless replicators, is still the province, like superintelligence, of science fiction. It’s a techno-fantasia, many decades away, but one that will steadily come into focus as the coming wave plays out.
More citations mean more prestige, credibility, and research funding. Junior researchers are especially liable to be judged—and hired—on their publication record, publicly viewable on platforms like Google Scholar. Moreover, these days papers are announced on Twitter and often written with social media influence in mind. They are designed to be eye-catching and attract attention.
And yet it was also LeCun who said AlphaGo was impossible just days before it made its first big breakthrough. That’s no discredit to him; it just shows that no one can ever be sure of anything at the research frontier.
Engineers often have a particular mindset. The Los Alamos director J. Robert Oppenheimer was a highly principled man. But above all else he was a curiosity-driven problem solver. Consider these words, in their own way as chilling as his famous Bhagavad Gita quotation (on seeing the first nuclear test, he recalled some lines from Hindu scripture: “Now I am become Death, the destroyer of worlds”): “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” It was an attitude shared by his colleague on the Manhattan Project, the brilliant, polymathic Hungarian American John von Neumann. “What we are creating now,” he said, “is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only for military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have.”
Future Scenarios & Implications
No previous wave has mushroomed as quickly, but the historical pattern nonetheless repeats. At first it seems impossible and unimaginable. Then it appears inevitable. And each wave grows bigger and stronger still.
In the coming decades, a new wave of technology will force us to confront the most foundational questions our species has ever faced. Do we want to edit our genomes so that some of us can have children with immunity to certain diseases, or with more intelligence, or with the potential to live longer? Are we committed to holding on to our place at the top of the evolutionary pyramid, or will we allow the emergence of AI systems that are smarter and more capable than we can ever be? What are the unintended consequences of exploring questions like these? They illustrate a key truth about Homo technologicus in the twenty-first century. For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped: the challenge of technology today is about containing its unleashed power, ensuring it continues to serve us and our planet. That challenge is about to decisively escalate.
Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.
If centralization and decentralization sound as if they are in direct contradiction, that’s with good reason: they are. Understanding the future means handling multiple conflicting trajectories at once. The coming wave launches immense centralizing and decentralizing riptides at the same time. Both will be in play at once. Every individual, every business, every church, every nonprofit, every nation, will eventually have its own AI and ultimately its own bio and robotics capability. From a single individual on their sofa to the world’s largest organizations, each AI will aim to achieve the goals of its owner. Herein lies the key to understanding the coming wave of contradictions, a wave full of collisions.
We go to the supermarket and expect it to be stuffed with fresh fruits and vegetables. We expect it to be kept cool in the summer, warm in the winter. Even despite constant turbulence, we assume that the supply chains and affordances of the twenty-first century are as robust as an old town hall. All the most historically extreme parts of our existence appear utterly banal, and so for the most part we carry on our lives as if they can go on indefinitely. Most of those around us, up to and including our leaders, do the same. And yet, nothing lasts forever. Throughout history societal collapses are legion: from ancient Mesopotamia to Rome, the Maya to Easter Island, again and again it’s not just that civilizations don’t last; it’s that unsustainability appears baked in. Civilizations that collapse are not the exception; they are the rule. A survey of sixty civilizations suggests they last about four hundred years on average before falling apart. Without new technologies, they hit hard limits to development—in available energy, in food, in social complexity—that bring them crashing down.
A moratorium on technology is not a way out; it’s an invitation to another kind of dystopia, another kind of catastrophe. Even if it were possible, the idea of stopping the coming wave isn’t a comforting thought. Maintaining, let alone improving, standards of living needs technology. Forestalling a collapse needs technology. The costs of saying no are existential. And yet every path from here brings grave risks and downsides. This is the great dilemma.
Containment of the coming wave is, I believe, not possible in our current world. What these steps might do, however, is change the underlying conditions. Nudge forward the status quo so containment has a chance. We should do all this with the knowledge that it might fail but that it is our best shot at building a world where containment—and human flourishing—are possible. There are no guarantees here, no rabbits pulled out of hats. Anyone hoping for a quick fix, a smart answer, is going to be disappointed. Approaching the dilemma, we are left in the same all-too-human position as always: giving it everything and hoping it works out. Here’s how I think it might—just might—come together.
Author
Mauro Sicard
CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.