21 Lessons for the 21st Century

21 Lessons for the 21st Century examines today's most pressing challenges, from technology to war to truth.

Book Highlights

The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.

Technology & AI Evolution

  • It also turned out that the biochemical algorithms of the human brain are far from perfect. They rely on heuristics, shortcuts and outdated circuits adapted to the African savannah rather than to the urban jungle. No wonder that even good drivers, bankers and lawyers sometimes make stupid mistakes. This means that AI can outperform humans even in tasks that supposedly demand ‘intuition’.
  • AI not only stands poised to hack humans and outperform them in what were hitherto uniquely human skills. It also enjoys uniquely non-human abilities, which make the difference between an AI and a human worker one of kind rather than merely of degree. Two particularly important non-human abilities that AI possesses are connectivity and updateability.
  • Hence, in the not too distant future a machine-learning algorithm could analyse the biometric data streaming from sensors on and inside your body, determine your personality type and your changing moods, and calculate the emotional impact that a particular song – even a particular musical key – is likely to have on you.
  • On 7 December 2017 a critical milestone was reached, not when a computer defeated a human at chess – that’s old news – but when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 was the world’s computer chess champion for 2016. It had access to centuries of accumulated human experience in chess, as well as to decades of computer experience. It was able to calculate 70 million chess positions per second. In contrast, AlphaZero performed only 80,000 such calculations per second, and its human creators never taught it any chess strategies – not even standard openings. Rather, AlphaZero used the latest machine-learning principles to self-learn chess by playing against itself. Nevertheless, out of a hundred games the novice AlphaZero played against Stockfish, AlphaZero won twenty-eight and tied seventy-two. It didn’t lose even once. Since AlphaZero learned nothing from any human, many of its winning moves and strategies seemed unconventional to human eyes. They may well be considered creative, if not downright genius.
  • Can you guess how long it took AlphaZero to learn chess from scratch, prepare for the match against Stockfish, and develop its genius instincts? Four hours. That’s not a typo. For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.18 (A toy sketch of this self-play idea appears after this list.)
  • AlphaZero is not the only imaginative software out there. Many programs now routinely outperform human chess players not just in brute calculation, but even in ‘creativity’. In human-only chess tournaments, judges are constantly on the lookout for players who try to cheat by secretly getting help from computers. One of the ways to catch cheats is to monitor the level of originality players display. If they play an exceptionally creative move, the judges will often suspect that this cannot possibly be a human move – it must be a computer move. At least in chess, creativity is already the trademark of computers rather than humans!
  • There are simply several different paths leading to high intelligence, and only some of these paths involve gaining consciousness. Just as airplanes fly faster than birds without ever developing feathers, so computers may come to solve problems much better than mammals without ever developing feelings.
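
The AlphaZero highlights above turn on a single idea: a system that learns a game purely by playing against itself, with no human examples. The snippet below is a toy, purely illustrative sketch of such a self-play loop for a miniature counting game. It is not AlphaZero's actual method or code; the game, the value table and the update rule are all invented for the example.

```python
import random
from collections import defaultdict

# Toy, purely illustrative self-play sketch (not AlphaZero's method or code).
# Game: players alternately add 1, 2 or 3 to a running total; whoever
# reaches exactly 10 wins. The policy is learned only from self-play
# outcomes; no human strategy is ever provided.
TARGET = 10
MOVES = (1, 2, 3)

values = defaultdict(float)  # (total, move) -> learned preference


def legal_moves(total):
    return [m for m in MOVES if total + m <= TARGET]


def choose_move(total, explore=0.1):
    moves = legal_moves(total)
    if random.random() < explore:
        return random.choice(moves)  # occasional exploration
    return max(moves, key=lambda m: values[(total, m)])


def play_one_game():
    """Play one self-play game; return the winner and each player's moves."""
    total, player = 0, 0
    history = {0: [], 1: []}
    while True:
        move = choose_move(total)
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return player, history  # the player who just moved wins
        player = 1 - player


def train(n_games=5000, lr=0.1):
    for _ in range(n_games):
        winner, history = play_one_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for state_move in moves:
                values[state_move] += lr * (reward - values[state_move])


if __name__ == "__main__":
    train()
    # The opening player should come to prefer the move to total 2,
    # the winning opening in this toy game, discovered without human input.
    print({m: round(values[(0, m)], 2) for m in MOVES})
```

Even in this toy setting, the learned preferences emerge from game outcomes alone, which is the point the highlight makes about AlphaZero receiving no human guidance.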

Future of Work & Automation

  • The technological revolution might soon push billions of humans out of the job market, and create a massive new useless class, leading to social and political upheavals that no existing ideology knows how to handle.
  • We have no idea what the job market will look like in 2050. It is generally agreed that machine learning and robotics will change almost every line of work – from producing yoghurt to teaching yoga. However, there are conflicting views about the nature of the change and its imminence. Some believe that within a mere decade or two, billions of people will become economically redundant. Others maintain that even in the long run automation will keep generating new jobs and greater prosperity for all.
  • Fears that automation will create massive unemployment go back to the nineteenth century, and so far they have never materialised. Since the beginning of the Industrial Revolution, for every job lost to a machine at least one new job was created, and the average standard of living has increased dramatically.1 Yet there are good reasons to think that this time it is different, and that machine learning will be a real game changer.
  • Humans have two types of abilities – physical and cognitive. In the past, machines competed with humans mainly in raw physical abilities, while humans retained an immense edge over machines in cognition. Hence as manual jobs in agriculture and industry were automated, new service jobs emerged that required the kind of cognitive skills only humans possessed: learning, analysing, communicating and above all understanding human emotions. However, AI is now beginning to outperform humans in more and more of these skills, including in the understanding of human emotions.2 We don’t know of any third field of activity – beyond the physical and the cognitive – where humans will always retain a secure edge.
  • Every year roughly 1.25 million people are killed in traffic accidents, the great majority of them because of human error, so switching to autonomous vehicles is likely to save the lives of a million people every year. Hence it would be madness to block automation in fields such as transport and healthcare just in order to protect human jobs. After all, what we ultimately ought to protect is humans – not jobs. Redundant drivers and doctors will just have to find something else to do.
  • At least in the short term, AI and robotics are unlikely to completely eliminate entire industries. Jobs that require specialisation in a narrow range of routinised activities will be automated. But it will be much more difficult to replace humans with machines in less routine jobs that demand the simultaneous use of a wide range of skills, and that involve dealing with unforeseen scenarios. Take healthcare, for example. Many doctors focus almost exclusively on processing information: they absorb medical data, analyse it, and produce a diagnosis. Nurses, in contrast, also need good motor and emotional skills in order to give a painful injection, replace a bandage, or restrain a violent patient. Hence we will probably have an AI family doctor on our smartphone decades before we have a reliable nurse robot.
  • Alongside care, creativity too poses particularly difficult hurdles for automation. We don’t need humans to sell us music any more – we can download it directly from the iTunes store – but the composers, musicians, singers and DJs are still flesh and blood. We rely on their creativity not just to produce completely new music, but also to choose among a mind-boggling range of available possibilities.
  • Nevertheless, in the long run no job will remain absolutely safe from automation.
  • The loss of many traditional jobs in everything from art to healthcare will partly be offset by the creation of new human jobs. GPs who focus on diagnosing known diseases and administering familiar treatments will probably be replaced by AI doctors. But precisely because of that, there will be much more money to pay human doctors and lab assistants to do groundbreaking research and develop new medicines or surgical procedures.
  • The problem with all such new jobs, however, is that they will probably demand high levels of expertise, and will therefore not solve the problems of unemployed unskilled labourers.
  • During previous waves of automation, people could usually switch from one routine low-skill job to another. In 1920 a farm worker laid off due to the mechanisation of agriculture could find a new job in a factory producing tractors. In 1980 an unemployed factory worker could start working as a cashier in a supermarket. Such occupational changes were feasible, because the move from the farm to the factory and from the factory to the supermarket required only limited retraining. But in 2050, a cashier or textile worker losing their job to a robot will hardly be able to start working as a cancer researcher, as a drone operator, or as part of a human–AI banking team. They will not have the necessary skills.
  • Consequently, despite the appearance of many new human jobs, we might nevertheless witness the rise of a new ‘useless’ class. We might actually get the worst of both worlds, suffering simultaneously from high unemployment and a shortage of skilled labour.
  • In addition, no remaining human job will ever be safe from the threat of future automation, because machine learning and robotics will continue to improve. A forty-year-old unemployed Walmart cashier who by dint of superhuman efforts manages to reinvent herself as a drone pilot might have to reinvent herself again ten years later, because by then the flying of drones may also have been automated. This volatility will also make it more difficult to organise unions or secure labour rights. Already today, many new jobs in advanced economies involve unprotected temporary work, freelancing and one-time gigs.16 How do you unionise a profession that mushrooms and disappears within a decade?
  • What is happening today to human–AI chess teams might happen down the road to human–AI teams in policing, medicine and banking too.19 Consequently, creating new jobs and retraining people to fill them will not be a one-off effort. The AI revolution won’t be a single watershed event after which the job market will just settle into a new equilibrium. Rather, it will be a cascade of ever-bigger disruptions. Already today few employees expect to work in the same job for their entire life.20 By 2050, not just the idea of ‘a job for life’, but even the idea of ‘a profession for life’ might seem antediluvian.
  • Even if we could constantly invent new jobs and retrain the workforce, we may wonder whether the average human will have the emotional stamina necessary for a life of such endless upheavals. Change is always stressful, and the hectic world of the early twenty-first century has produced a global epidemic of stress.21 As the volatility of the job market and of individual careers increases, would people be able to cope? We would probably need far more effective stress-reduction techniques – ranging from drugs through neuro-feedback to meditation – to prevent the Sapiens mind from snapping. By 2050 a ‘useless’ class might emerge not merely because of an absolute lack of jobs or lack of relevant education, but also because of insufficient mental stamina.
  • However, we cannot allow ourselves to be complacent. It is dangerous just to assume that enough new jobs will appear to compensate for any losses. The fact that this has happened during previous waves of automation is absolutely no guarantee that it will happen again under the very different conditions of the twenty-first century. The potential social and political disruptions are so alarming that even if the probability of systemic mass unemployment is low, we should take it very seriously.
  • Potential solutions fall into three main categories: what to do in order to prevent jobs from being lost; what to do in order to create enough new jobs; and what to do if, despite our best efforts, job losses significantly outstrip job creation. Preventing job losses altogether is an unattractive and probably untenable strategy, because it means giving up the immense positive potential of AI and robotics.
  • Slowing down the pace of change may give us time to create enough new jobs to replace most of the losses. Yet as noted earlier, economic entrepreneurship will have to be accompanied by a revolution in education and psychology.
  • Theoretically, you can have an economy in which a mining corporation produces and sells iron to a robotics corporation, the robotics corporation produces and sells robots to the mining corporation, which mines more iron, which is used to produce more robots, and so on. These corporations can grow and expand to the far reaches of the galaxy, and all they need are robots and computers – they don’t need humans even to buy their products. Indeed, already today computers and algorithms are beginning to function as clients in addition to producers.
  • Nobody’s life-dream is to be a cashier. What we should focus on is providing for people’s basic needs and protecting their social status and self-worth.
  • Globalisation has made people in one country utterly dependent on markets in other countries, but automation might unravel large parts of this global trade network with disastrous consequences for the weakest links.
  • Throughout this book, I often use the first person plural to speak about the future of humankind. I talk about what ‘we’ need to do about ‘our’ problems. But maybe there is no ‘we’. Maybe one of ‘our’ biggest problems is that different human groups have completely different futures. Maybe in some parts of the world you should teach your kids to write computer code, while in others you had better teach them to draw fast and shoot straight.

Data & Algorithm Authority

  • Big Data algorithms might create digital dictatorships in which all power is concentrated in the hands of a tiny elite while most people suffer not from exploitation, but from something far worse – irrelevance.
  • Already today, computers have made the financial system so complicated that few humans can understand it. As AI improves, we might soon reach a point when no human can make sense of finance any more.
  • Indeed, the algorithm may learn to recognise your wishes even without you being explicitly aware of them.
  • By using massive biometric databases garnered from millions of people, the algorithm could know which biochemical buttons to press in order to produce a global hit which would set everybody swinging like crazy on the dance floors.
  • Similarly in the advertisement business, the most important customer of all is an algorithm: the Google search algorithm. When people design Web pages, they often cater to the taste of the Google search algorithm rather than to the taste of any human being.
  • I know this from personal experience. When I publish a book, the publishers ask me to write a short description that they use for publicity online. But they have a special expert, who adapts what I write to the taste of the Google algorithm. The expert goes over my text, and says ‘Don’t use this word – use that word instead. Then we will get more attention from the Google algorithm.’ We know that if we can just catch the eye of the algorithm, we can take the humans for granted.
  • Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships.
  • The liberal belief in the feelings and free choices of individuals is neither natural nor very ancient. For thousands of years people believed that authority came from divine laws rather than from the human heart, and that we should therefore sanctify the word of God rather than human liberty. Only in the last few centuries did the source of authority shift from celestial deities to flesh-and-blood humans. Soon authority might shift again – from humans to algorithms. Just as divine authority was legitimised by religious mythologies, and human authority was justified by the liberal story, so the coming technological revolution might establish the authority of Big Data algorithms, while undermining the very idea of individual freedom.
  • Accordingly, liberalism was correct in counselling people to follow their heart rather than the dictates of some priest or party apparatchik. However, soon computer algorithms could give you better counsel than human feelings.
  • People will enjoy the best healthcare in history, but for precisely this reason they will probably be sick all the time. There is always something wrong somewhere in the body. There is always something that can be improved. In the past, you felt perfectly healthy as long as you didn’t sense pain or you didn’t suffer from an apparent disability such as limping. But by 2050, thanks to biometric sensors and Big Data algorithms, diseases may be diagnosed and treated long before they lead to pain or disability. As a result, you will always find yourself suffering from some ‘medical condition’ and following this or that algorithmic recommendation. If you refuse, perhaps your medical insurance would become invalid, or your boss would fire you – why should they pay the price of your obstinacy?
  • Who will have the time and energy to deal with all these illnesses? In all likelihood, we will just instruct our health algorithm to deal with most of these problems as it sees fit. At most, it will send periodic updates to our smartphones, telling us that ‘seventeen cancerous cells were detected and destroyed’. Hypochondriacs might dutifully read these updates, but most of us will ignore them just as we ignore those annoying anti-virus notices on our computers. (A hypothetical sketch of such a quiet monitor appears after this list.)
  • But Amazon won’t have to be perfect. It will just need to be better on average than us humans. And that is not so difficult, because most people don’t know themselves very well, and most people often make terrible mistakes in the most important decisions of their lives. Even more than algorithms, humans suffer from insufficient data, from faulty programming (genetic and cultural), from muddled definitions, and from the chaos of life.
  • Just think of the way that within a mere two decades, billions of people have come to entrust the Google search algorithm with one of the most important tasks of all: searching for relevant and trustworthy information. We no longer search for information. Instead, we google. And as we increasingly rely on Google for answers, so our ability to search for information by ourselves diminishes. Already today, ‘truth’ is defined by the top results of the Google search.
  • As authority shifts from humans to algorithms, we may no longer see the world as the playground of autonomous individuals struggling to make the right choices. Instead, we might perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms, and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system – and then merge into it.
  • The race to obtain the data is already on, headed by data-giants such as Google, Facebook, Baidu and Tencent. So far, many of these giants seem to have adopted the business model of ‘attention merchants’.2 They capture our attention by providing us with free information, services and entertainment, and they then resell our attention to advertisers. Yet the data-giants probably aim far higher than any previous attention merchant. Their true business isn’t to sell advertisements at all. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which is worth more than any advertising revenue. We aren’t their customers – we are their product. In the medium term, this data hoard opens a path to a radically different business model whose first victim will be the advertising industry itself. The new model is based on transferring authority from humans to algorithms, including the authority to choose and buy things.
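
Several of the highlights above describe software that quietly watches a biometric stream and only occasionally surfaces a recommendation or an alert. The sketch below is a hypothetical illustration of that pattern only, using a rolling baseline plus a simple threshold; the class name, the thresholds and the "heart rate" numbers are all invented and are not from the book.

```python
import random
from collections import deque
from statistics import mean, stdev

# Hypothetical illustration of the "health algorithm" pattern in the
# highlights above: software watches a biometric stream, compares each
# reading to a rolling baseline, and only occasionally notifies the user.
# The class name, thresholds and numbers are all invented for the example.


class BiometricMonitor:
    def __init__(self, window=20, z_threshold=3.0):
        self.readings = deque(maxlen=window)  # rolling baseline window
        self.z_threshold = z_threshold
        self.flags = 0

    def ingest(self, value):
        """Add one sensor reading; return a notification string or None."""
        note = None
        if len(self.readings) >= 5:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                self.flags += 1
                note = f"anomaly #{self.flags}: reading {value:.1f} vs baseline {mu:.1f}"
        self.readings.append(value)
        return note


if __name__ == "__main__":
    monitor = BiometricMonitor()
    stream = [random.gauss(72, 2) for _ in range(200)]  # e.g. resting heart rate
    stream[150] = 130  # one injected spike
    for reading in stream:
        alert = monitor.ingest(reading)
        if alert:
            print(alert)  # the periodic update most of us would ignore
```

A real system would involve far more than a z-score, but the sketch captures the shift of authority the highlights describe: the reading, the baseline and the decision to notify all live in the algorithm rather than in the person being measured.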

Social & Economic Inequality

  • In 2018 the common person feels increasingly irrelevant.
  • It is much harder to struggle against irrelevance than against exploitation.
  • Thanks to learning algorithms and biometric sensors, a poor villager in an underdeveloped country might come to enjoy far better healthcare via her smartphone than the richest person in the world gets today from the most advanced urban hospital.
  • Instead of economic growth improving conditions all over the world, we might see immense new wealth created in hi-tech hubs such as Silicon Valley, while many developing countries collapse.
  • Consequently the gap between the rich (Tencent managers and Google shareholders) and the poor (those dependent on universal basic income) might become not merely bigger, but actually unbridgeable.
  • Hence even if some universal support scheme provides poor people in 2050 with much better healthcare and education than today, they might still be extremely angry about global inequality and the lack of social mobility. People will feel that the system is rigged against them, that the government serves only the super-rich, and that the future will be even worse for them and their children.
  • For without a social safety net and a modicum of economic equality, liberty is meaningless. But just as Big Data algorithms might extinguish liberty, they might simultaneously create the most unequal societies that ever existed. All wealth and power might be concentrated in the hands of a tiny elite, while most people will suffer not from exploitation, but from something far worse – irrelevance.
  • In the last few decades, people all over the world were told that humankind is on the path to equality, and that globalisation and new technologies will help us get there sooner. In reality, the twenty-first century might create the most unequal societies in history. Though globalisation and the Internet bridge the gap between countries, they threaten to enlarge the rift between classes, and just as humankind seems about to achieve global unification, the species itself might divide into different biological castes.
  • Already today, the richest 1 per cent owns half the world’s wealth. Even more alarmingly, the richest hundred people together own more than the poorest 4 billion.
  • By 2100, the richest 1 per cent might own not merely most of the world’s wealth, but also most of the world’s beauty, creativity and health. The two processes together – bioengineering coupled with the rise of AI – might therefore result in the separation of humankind into a small class of superhumans and a massive underclass of useless Homo sapiens.

Human Consciousness & Emotions

  • Humans think in stories rather than in facts, numbers or equations, and the simpler the story, the better.
  • To have one story is the most reassuring situation of all. Everything is perfectly clear. To be suddenly left without any story is terrifying.
  • In the past, we humans have learned to control the world outside us, but we had very little control over the world inside us. We knew how to build a dam and stop a river from flowing, but we did not know how to stop the body from ageing.
  • If mosquitoes buzzed in our ears and disturbed our sleep, we knew how to kill the mosquitoes; but if a thought buzzed in our mind and kept us awake at night, most of us did not know how to kill the thought.
  • It is easier to manipulate a river by building a dam across it than it is to predict all the complex consequences this will have for the wider ecological system. Similarly, it will be easier to redirect the flow of our minds than to divine what it will do to our personal psychology or to our social systems.
  • The first step is to tone down the prophecies of doom, and switch from panic mode to bewilderment. Panic is a form of hubris. It comes from the smug feeling that I know exactly where the world is heading – down.
  • It turned out that our choices of everything from food to mates result not from some mysterious free will, but rather from billions of neurons calculating probabilities within a split second. Vaunted ‘human intuition’ is in reality ‘pattern recognition’.
  • Yet even if enough government help is forthcoming, it is far from clear whether billions of people could repeatedly reinvent themselves without losing their mental balance.
  • Homo sapiens is just not built for satisfaction. Human happiness depends less on objective conditions and more on our own expectations. Expectations, however, tend to adapt to conditions, including to the condition of other people. When things improve, expectations balloon, and consequently even dramatic improvements in conditions might leave us as dissatisfied as before.
  • We usually fail to realise that feelings are in fact calculations, because the rapid process of calculation occurs far below our threshold of awareness.
  • Up till now, these arguments have had embarrassingly little impact on actual behaviour, because in times of crisis humans all too often forget about their philosophical views and follow their emotions and gut instincts instead.
  • Of course, it is not absolutely impossible that AI will develop feelings of its own. We still don’t know enough about consciousness to be sure. In general, there are three possibilities we need to consider: (1) consciousness is somehow linked to organic biochemistry in such a way that it will never be possible to create consciousness in non-organic systems; (2) consciousness is not linked to organic biochemistry, but it is linked to intelligence in such a way that computers could develop consciousness, and computers will have to develop consciousness if they are to pass a certain threshold of intelligence; (3) there are no essential links between consciousness and either organic biochemistry or high intelligence, hence computers might develop consciousness – but not necessarily. They could become super-intelligent while still having zero consciousness.
  • At our present state of knowledge, we cannot rule out any of these options. Yet precisely because we know so little about consciousness, it seems unlikely that we could program conscious computers any time soon. Hence despite the immense power of artificial intelligence, for the foreseeable future its usage will continue to depend to some extent on human consciousness. The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.
  • To avoid such outcomes, for every dollar and every minute we invest in improving artificial intelligence, it would be wise to invest a dollar and a minute in advancing human consciousness. Unfortunately, at present we are not doing much to research and develop human consciousness. We are researching and developing human abilities mainly according to the immediate needs of the economic and political system, rather than according to our own long-term needs as conscious beings. My boss wants me to answer emails as quickly as possible, but he has little interest in my ability to taste and appreciate the food I am eating. Consequently, I check my emails even during meals, while losing the ability to pay attention to my own sensations. The economic system pressures me to expand and diversify my investment portfolio, but it gives me zero incentives to expand and diversify my compassion. So I strive to understand the mysteries of the stock exchange, while making far less effort to understand the deep causes of suffering.
  • Indeed we have no idea what the full human potential is, because we know so little about the human mind. And yet we hardly invest much in exploring the human mind, and instead focus on increasing the speed of our Internet connections and the efficiency of our Big Data algorithms. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.

Political & Liberal Systems

  • The liberal story celebrates the value and power of liberty. It says that for thousands of years humankind lived under oppressive regimes which allowed people few political rights, economic opportunities or personal liberties, and which heavily restricted the movements of individuals, ideas and goods. But people fought for their freedom, and step by step, liberty gained ground.
  • However, since the global financial crisis of 2008 people all over the world have become increasingly disillusioned with the liberal story.
  • Both politicians and voters are barely able to comprehend the new technologies, let alone regulate their explosive potential.
  • Did you ever vote about the Internet? The democratic system is still struggling to understand what hit it, and is hardly equipped to deal with the next shocks, such as the rise of AI and the blockchain revolution.
  • The revolutions in biotech and infotech are made by engineers, entrepreneurs and scientists who are hardly aware of the political implications of their decisions, and who certainly don’t represent anyone.
  • When you live under such an oligarchy, there is always some crisis or other that takes priority over boring stuff such as healthcare and pollution. If the nation is facing external invasion or diabolical subversion, who has time to worry about overcrowded hospitals and polluted rivers? By manufacturing a never-ending stream of crises, a corrupt oligarchy can prolong its rule indefinitely.8
  • Most humans never enjoyed greater peace or prosperity than they did under the aegis of the liberal order of the early twenty-first century. For the first time in history, infectious diseases kill fewer people than old age, famine kills fewer people than obesity, and violence kills fewer people than accidents.
  • The liberal story cherishes human liberty as its number one value. It argues that all authority ultimately stems from the free will of individual humans, as it is expressed in their feelings, desires and choices. In politics, liberalism believes that the voter knows best. It therefore upholds democratic elections. In economics, liberalism maintains that the customer is always right. It therefore hails free-market principles. In personal matters, liberalism encourages people to listen to themselves, be true to themselves, and follow their hearts – as long as they do not infringe on the liberties of others.
  • If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights – or perhaps any voting rights.
  • Winston Churchill famously said that democracy is the worst political system in the world, except for all the others.

Philosophy and Ethics

  • Since the corporations and entrepreneurs who lead the technological revolution naturally tend to sing the praises of their creations, it falls to sociologists, philosophers and historians like myself to sound the alarm and explain all the ways things can go terribly wrong.
  • Given everything we know and don’t know about science, about God, about politics and about religion – what can we say about the meaning of life today?
  • This may sound overambitious, but Homo sapiens cannot wait. Philosophy, religion and science are all running out of time. People have debated the meaning of life for thousands of years. We cannot continue this debate indefinitely. The looming ecological crisis, the growing threat of weapons of mass destruction, and the rise of new disruptive technologies will not allow it. Perhaps most importantly, artificial intelligence and biotechnology are giving humanity the power to reshape and re-engineer life. Very soon somebody will have to decide how to use this power – based on some implicit or explicit story about the meaning of life.
  • Philosophers are very patient people, but engineers are far less patient, and investors are the least patient of all. If you don’t know what to do with the power to engineer life, market forces will not wait a thousand years for you to come up with an answer.
  • The invisible hand of the market will force upon you its own blind reply. Unless you are happy to entrust the future of life to the mercy of quarterly revenue reports, you need a clear idea what life is all about.
  • The revolutions in biotech and infotech will give us control of the world inside us, and will enable us to engineer and manufacture life. We will learn how to design brains, extend lives, and kill thoughts at our discretion. Nobody knows what the consequences will be.
  • Humans were always far better at inventing tools than using them wisely.
  • Technology is never deterministic, and the fact that something can be done does not mean it must be done.
  • Once AI makes better decisions than us about careers and perhaps even relationships, our concept of humanity and of life will have to change.
  • Human emotions trump philosophical theories in countless other situations. This makes the ethical and philosophical history of the world a rather depressing tale of wonderful ideals and less than ideal behaviour. How many Christians actually turn the other cheek, how many Buddhists actually rise above egoistic obsessions, and how many Jews actually love their neighbours as themselves? That’s just the way natural selection has shaped Homo sapiens. Like all mammals, Homo sapiens uses emotions to quickly make life and death decisions. We have inherited our anger, our fear and our lust from millions of ancestors, all of whom passed the most rigorous quality control tests of natural selection.
  • Computer algorithms, however, have not been shaped by natural selection, and they have neither emotions nor gut instincts. Hence in moments of crisis they could follow ethical guidelines much better than humans – provided we find a way to code ethics in precise numbers and statistics.
  • Which means that when designing their self-driving car, Toyota or Tesla will be transforming a theoretical problem in the philosophy of ethics into a practical problem of engineering. Granted, the philosophical algorithms will never be perfect. Mistakes will still happen, resulting in injuries, deaths and extremely complicated lawsuits. (For the first time in history, you might be able to sue a philosopher for the unfortunate results of his or her theories, because for the first time in history you could prove a direct causal link between philosophical ideas and real-life events.)
  • However, there might be some new openings for philosophers, because their skills – hitherto devoid of much market value – will suddenly be in very high demand. So if you want to study something that will guarantee a good job in the future, maybe philosophy is not such a bad gamble.
  • Well, maybe Tesla will just leave it to the market. Tesla will produce two models of the self-driving car: the Tesla Altruist and the Tesla Egoist. In an emergency, the Altruist sacrifices its owner to the greater good, whereas the Egoist does everything in its power to save its owner, even if it means killing the two kids. Customers will then be able to buy the car that best fits their favourite philosophical view. If more people buy the Tesla Egoist, you won’t be able to blame Tesla for that. After all, the customer is always right. This is not a joke. In a pioneering 2015 study people were presented with a hypothetical scenario of a self-driving car about to run over several pedestrians. Most said that in such a case the car should save the pedestrians even at the price of killing its owner. When they were then asked whether they personally would buy a car programmed to sacrifice its owner for the greater good, most said no. For themselves, they would prefer the Tesla Egoist.22 Imagine the situation: you have bought a new car, but before you can start using it, you must open the settings menu and tick one of several boxes. In case of an accident, do you want the car to sacrifice your life – or to kill the family in the other vehicle? Is this a choice you even want to make? Just think of the arguments you are going to have with your husband about which box to tick. (A crude, hypothetical sketch of such a choice-as-setting appears after this list.)
  • AI often frightens people because they don’t trust the AI to remain obedient. We have seen too many science-fiction movies about robots rebelling against their human masters, running amok in the streets and slaughtering everyone. Yet the real problem with robots is exactly the opposite. We should fear them because they will probably always obey their masters and never rebel.
  • Science fiction tends to confuse intelligence with consciousness, and assume that in order to match or surpass human intelligence, computers will have to develop consciousness. The basic plot of almost all movies and novels about AI revolves around the magical moment when a computer or a robot gains consciousness. Once that happens, either the human hero falls in love with the robot, or the robot tries to kill all the humans, or both things happen simultaneously. But in reality, there is no reason to assume that artificial intelligence will gain consciousness, because intelligence and consciousness are very different things. Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love and anger. We tend to confuse the two because in humans and other mammals intelligence goes hand in hand with consciousness. Mammals solve most problems by feeling things. Computers, however, solve problems in a very different way.
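
The self-driving-car highlights above describe an ethical stance being reduced to an engineering parameter, down to a settings box the owner ticks. The sketch below is a deliberately crude, hypothetical illustration of that idea: a single "owner weight" inside a harm-minimising choice between manoeuvres. It is not any real vehicle's decision logic, and every name and number in it is invented.

```python
from dataclasses import dataclass
from typing import List

# Deliberately crude, hypothetical illustration of "ethics as an
# engineering parameter": a single owner-priority weight inside a
# harm-minimising choice. Not any real vehicle's decision logic; every
# name and number here is invented for the example.


@dataclass
class Outcome:
    label: str
    prob_harm_to_owner: float
    prob_harm_to_others: float
    expected_others_harmed: float


def choose_maneuver(options: List[Outcome], owner_weight: float) -> Outcome:
    """Pick the option with the lowest weighted expected harm.

    owner_weight plays the role of the settings box in the highlight:
    1.0 treats the owner like anyone else ("Altruist"); larger values
    privilege the owner ("Egoist").
    """
    def expected_harm(o: Outcome) -> float:
        return (owner_weight * o.prob_harm_to_owner
                + o.prob_harm_to_others * o.expected_others_harmed)

    return min(options, key=expected_harm)


if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier", 0.8, 0.0, 0.0),
        Outcome("brake straight ahead", 0.1, 0.7, 2.0),
    ]
    print("altruist setting:", choose_maneuver(options, owner_weight=1.0).label)
    print("egoist setting:  ", choose_maneuver(options, owner_weight=10.0).label)
```

With a weight of 1.0 the planner treats the owner like anyone else and sacrifices them in this invented scenario; with a large weight it protects the owner at others' expense, which is exactly the "Altruist versus Egoist" tick-box the highlight imagines.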

Global Change & Disruption

  • The golden thread running through Harari’s exhilarating new book is the challenge of maintaining our collective and individual focus in the face of constant and disorienting change. Are we still capable of understanding the world we have created?
  • In a world deluged by irrelevant information, clarity is power. In theory, anybody can join the debate about the future of humanity, but it is so hard to maintain a clear vision.
  • Climate change may be far beyond the concerns of people in the midst of a life-and-death emergency, but it might eventually make the Mumbai slums uninhabitable, send enormous new waves of refugees across the Mediterranean, and lead to a worldwide crisis in healthcare.
  • People learned to think for themselves and follow their hearts, instead of blindly obeying bigoted priests and hidebound traditions. Open roads, stout bridges and bustling airports replaced walls, moats and barbed-wire fences.
  • In 1938 humans were offered three global stories to choose from, in 1968 just two, in 1998 a single story seemed to prevail; in 2018 we are down to zero.
  • The sense of disorientation and impending doom is exacerbated by the accelerating pace of technological disruption.
  • Lots of mysterious words are bandied around excitedly in TED talks, government think tanks and hi-tech conferences – globalisation, blockchain, genetic engineering, artificial intelligence, machine learning – and common people may well suspect that none of these words are about them. The liberal story was the story of ordinary people. How can it remain relevant to a world of cyborgs and networked algorithms?
  • The challenge posed to humankind in the twenty-first century by infotech and biotech is arguably much bigger than the challenge posed in the previous era by steam engines, railroads and electricity. And given the immense destructive power of our civilisation, we just cannot afford more failed models, world wars and bloody revolutions. This time around, the failed models might result in nuclear wars, genetically engineered monstrosities, and a complete breakdown of the biosphere. Consequently, we have to do better than we did in confronting the Industrial Revolution.
Author - Mauro Sicard

CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.