Deep Utopia explores what gives life meaning in a future where all problems are solved.
The following are the key points I highlighted in this book. If you’d like, you can download all of them to chat about with your favorite language model.
The second part of Keynes’s prediction, on the other hand, appears set to miss its mark if current trends are extrapolated. While it is true that working hours have declined substantially over the past ninety-plus years, we are nowhere near the 15-hour work week that Keynes expected. Since 1930, the typical work week has been reduced by about a quarter, to roughly 36 hours.11 The proportion of our lives spent working has seen a somewhat sharper drop: we join the workforce later, live longer after retirement, and take more leave.12 And our work is on average less strenuous. For the most part, however, we have used our increased productivity for consumption rather than leisure. Greed has triumphed over Sloth.
Aye, but there’s a catch! All the preceding discussion of whether people will continue to work rests on one assumption: that there would still be work for people to do.
So long as human labor remains a net complement to capital, growth in capital stocks should tend to drive up the price of labor. The increasing wages could then motivate people to continue to work just as hard as they currently do even if they become very rich, provided they have the kind of insatiable desires that I just described. In reality, permanently higher wages would probably cause people to work a bit less, as they would choose to use some of their productivity gains to increase leisure and some to increase consumption. But in any case, the degree to which labor is a complement to capital is a function of technology. With sufficiently advanced automation technology, capital becomes a substitute for labor. Consider the extreme case: imagine that you could buy an intelligent robot that can do everything that a human worker can do. And suppose that it is cheaper to buy or rent this robot than to hire a human. Robots would then compete with human workers and put downward pressure on wages. If the robots become cheap enough, humans would be squeezed out of the labor market altogether. The zero-hour workweek would have arrived.
Therefore, whereas the effects of perfect automation technology are clear—full human unemployment and zero human labor income—the consequences of imperfect automation technology for human employment and human wages are theoretically ambiguous. For example, it is possible in this model that if robots could do every job except design and oversee robots, the wages paid to human robot-designers and robot-overseers could exceed the total wages paid to workers today; and, theoretically, the total number of hours worked could also rise.
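To make the complement/substitute distinction above concrete, here is a minimal toy sketch of my own (not a model from the book): with a Cobb-Douglas production function, extra capital raises the marginal product of labor and hence the wage, whereas with perfect-substitute robots the human wage is capped by the robot's rental cost. All functional forms, names, and numbers are assumptions chosen purely for illustration.

```python
# Toy illustration (not from the book) of labor as a complement vs. a
# substitute to capital. Functional forms and numbers are assumptions.

def wage_when_complements(capital, labor, alpha=0.3):
    """Cobb-Douglas output K^alpha * L^(1-alpha): the competitive wage is the
    marginal product of labor, which rises as the capital stock grows."""
    return (1 - alpha) * capital**alpha * labor**(-alpha)

def wage_when_substitutes(robot_rental_cost, human_reservation_wage):
    """Perfect substitution: if a robot does the same work for less, competition
    caps what human labor can earn at the robot's rental cost."""
    return min(human_reservation_wage, robot_rental_cost)

# Complements: doubling capital raises the human wage (~1.40 -> ~1.72).
print(wage_when_complements(capital=100, labor=10))
print(wage_when_complements(capital=200, labor=10))

# Substitutes: as robots get cheaper, the ceiling on human wages falls toward zero.
for robot_cost in (50.0, 20.0, 5.0):
    print(robot_cost, wage_when_substitutes(robot_cost, human_reservation_wage=25.0))
```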
But there is a terminological question that arises here. If a job is outsourced to a sentient robot, would we really want to say that it has been “automated”? Would this not be more akin to a scenario in which we had given rise to a new person, born with special talents, who grows up and becomes a master of the profession, allowing its previous practitioners to retire? It would not seem apposite, in that case, to say that the job had been automated. Nor is it easy to see why the fact that the new worker was maybe made out of silicon and steel rather than organic chemistry should make an essential difference here; nor the fact that it might have been conceived in a factory rather than a bedroom; nor the fact that its childhood might have been abridged; nor that its features were, to a greater extent than might be typical for human beings, the result of a deliberate design process rather than chance and inheritance.
In principle, mistrust could limit the uptake of automation no matter how capable and efficient machines become. It seems plausible, however, that trust barriers will eventually erode, as artificial systems accumulate track records that rival or exceed those of human decision-makers, or as we discover other ways to verify reliability and alignment. In time, machines will probably become more trusted than humans.
Purpose & Meaning in Life
There are, at present, more than enough problems to provide meaningful challenges for even the most resourceful and enterprising among us.
“It is true that as artificial intelligence gets more powerful, we need to ensure that it serves humanity and not the other way around. But this is an engineering problem … I am more interested in what you might call the purpose problem… if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then? What challenges would we be inspired to solve?”
“How do we find meaning in life if the AI can do your job better than you can? I mean if I think about it too hard, it can frankly be dispiriting and demotivating. Because—I’ve put a lot of blood, sweat, and tears into building the companies, and then I’m like ‘should I be doing this?’. Because if I’m sacrificing time with friends and family that I would prefer, but then ultimately the AI can do all these things. Does that make sense? I don’t know. To some extent, I have to have deliberate suspension of disbelief in order to remain motivated.”
I wondered why I had been made with a soul that had the capacity to wonder but not the capacity to find out; why I could see so much that was wrong while seemingly being unable to do anything about it; and why I was a fox and not a worm or a duck; why I was alive now and not at some other time; and why indeed there was anything at all rather than it being the case that nothing ever existed, no forest, no Earth, no universe, which it seemed to me would have been a far more natural condition, not to mention one that would have saved everybody a great deal of trouble. With such imponderables was I preoccupying myself. And I could not put it down, not put it to rest.
There’s a world of difference between nothing and something.
“So long as there is ignorance, there is hope!”
Start with education. The current paradigm is one of industrial production. The raw materials—children—are delivered to the school gates for age-based batch processing. They are hammered, ground, and drilled for twelve years. Graded and quality-controlled worker-citizens emerge, ready to take up employment in a factory or a trucking company. Some units are sent onward to another plant for three to ten years of further processing. The units that emerge from these more advanced facilities are ready to be installed in offices. They will then perform their assigned duties for the few decades remaining of their active life. If we look at this process, we can see that the main functions performed by our education system are threefold. First, storage and safekeeping. Since parents are undertaking paid labor outside the home, they can’t take care of their own children, so they need a child-storage facility during the day. Second, disciplining and civilizing. Children are savages and need to be trained to sit still at their desks and do as they are told. This takes a long time and a lot of drilling. Also: indoctrination. Third, sorting and certification. Employers need to know the quality of each unit—its conscientiousness, conformity, and intelligence—in order to determine to which uses it can be put and hence how much it is worth. What about learning? This may also happen, mostly as a side effect of the operations done to perform (1) through (3). Any learning that takes place is extremely inefficient. At least the smarter kids could have mastered the same material in 10% of the time, using free online learning resources and studying at their own pace; but since that would not contribute to the central aims of the education system, there is usually no interest in facilitating this path.
Cultivating curiosity—here I may be projecting my own proclivities, but I think a passion for learning could greatly enhance a life of leisure. Also, cultivation of the virtues and an interest in moral self-improvement. The opening of the intellect to science, history, and philosophy, in order to reveal the larger context of patterns and meanings within which our lives are embedded…
“The purpose problem. Assume we maintain control. What if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then? What challenges would we be inspired to solve? In this version of the future, our biggest worry is not an attack by rebellious robots, but a lack of purpose.”
“autotelic” activity—an activity that is valued for its own sake and not merely as a means to an end.
The more fundamental version of the purpose problem that is now coming into view—let’s call it deep redundancy—is that much leisure activity is also at risk of losing its purpose. The four case studies showed that many of our usual reasons for engaging in non-work activities disappear at technological maturity. And those observations can be generalized. It might even come to appear as though there would be no point in us doing anything—not working long hours for money, of course; but there would also be no point in putting effort into raising children, no point in going out shopping, no point in studying, no point in going to the gym or practicing the piano… et cetera. We can call this hypothetical condition, in which we have no instrumental reasons for doing anything, the age of post-instrumentality. As we move toward this weightless condition, blasting away from the gravitational pull of the ground and its tough “sweat of the brow” imperatives on our days and our strength, we may begin to feel an alienating sense of purposelessness, an unanchored “lightness of being”. We are left to deal with the discovery that the place of maximal freedom is actually a void.
“I reach in vain for words to convey to you what it all amounts to… It’s like a rain of the most wonderful feeling, where every raindrop has its own unique and indescribable meaning—or rather a scent or essence that evokes a whole world… And each such evoked world is subtler, deeper, more palpable than the totality of the reality that you have encountered. One drop would justify and set right a human life, and the rain keeps raining, and there are floods and seas.”
At this point I am making the suggestion that even if we had no instrumental reasons for doing anything at technological maturity—that is to say, no reason to engage in any activity in order to produce some result (because the same result could be more efficiently brought about by machines)—this would not imply that we would not be doing anything.
If you find it difficult to really embrace a goal in this manner using only your natural capacity for buy-in and commitment, you could use neurotechnology to do so. Having decided that playing football would enrich your life, and seeing that really wanting to win would improve the activity and the experience, you could program your mind to have a burning desire to help your team to victory. Another form of artificial purpose would be to place yourself in a situation in which only your own efforts could allow you to achieve some outcome that you already care about for independent reasons. Think of a rock climber halfway up a mountain: there, they have no choice but to employ their strength and skill, on pain of death. In utopia, the analogous possibility would involve creating a special situation in which the affordances of technological maturity are unavailable.
Those of you who are not taking this course for credit may opt instead to take a nap, and we can arrange to have you woken up once it’s over. (I wonder, by the way, how many might prefer to take this approach to their entire present life, if that option existed?) But the rest of us, who choose to postpone the slumber, whether for course credits or for the sake of some even higher aspiration (or because we actually don’t mind a bit of strain and roughness in our fun): let us proceed.
“I felt that the flaw in my life, must be a flaw in life itself; that the question was, whether, if the reformers of society and government could succeed in their objects, and every person in the community were free and in a state of physical comfort, the pleasures of life, being no longer kept up by struggle and privation, would cease to be pleasures.”
If there is no multiverse, and if our universe is not too large, and if it is devoid of extraterrestrial intelligence: if, in other words, our planet is the only furnace in which the flame of consciousness has been lit—then the human phenomenon, flickering and faltering as it may be, does take on a kind of cosmic interestingness. On a dark enough night, even the faint glow of a firefly can stand out as a noteworthy sight.
Technology & Societal Evolution
Boosting your annual income from $1,000 to $2,000 is a big deal. Raising it from $1,000,000 to $1,001,000—or even, I should think, to $2,000,000—is barely noticeable. But: this could change. Technological progress might create new ways of converting money into either quality or quantity of life, ways that don’t have the same steeply diminishing returns that we experience today.
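As a rough illustration of how steeply returns diminish today, here is a small sketch of my own, assuming logarithmic utility of income purely for concreteness (the passage suggests returns at the very top may diminish even faster than log):

```python
# Sketch assuming logarithmic utility of income (an illustrative assumption,
# not a claim from the book): the same $1,000 raise matters roughly a
# thousand times less to a millionaire than to someone earning $1,000.
import math

def utility_gain(old_income, new_income):
    return math.log(new_income) - math.log(old_income)

print(utility_gain(1_000, 2_000))          # ~0.693: doubling a small income is a big deal
print(utility_gain(1_000_000, 1_001_000))  # ~0.001: barely noticeable
print(utility_gain(1_000_000, 2_000_000))  # ~0.693 under log utility; the passage suggests
                                           # real returns at the top diminish even faster
```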
The assumption that humans will remain in perfect control of the robots is definitely open to doubt, though it is not one that I intend to discuss in these lectures. If that assumption is relaxed, the result would either be the same as above except with a somewhat smaller human population and a somewhat larger robot population at equilibrium; or, in the case of a more complete failure of control, the human population could disappear altogether and there would be even more robots.
Perhaps we came out of the kiln a little too soon? Maybe we would have been better conditioned for the final vault into the machine intelligence era if we had spent another few hundred thousand years throwing spears and telling tales around campfires? Maybe, or maybe not. Little is known about these matters. We are still remarkably in the dark about the basic macrostrategic directionality of things. Truly, I wonder whether we can even tell up from down.
If you want to store some amount of stuff, it is cheaper (in terms of the amount of material you need) to store it in one large container than in many smaller ones. Similarly, thicker pipes are more efficient than thinner ones. So are larger vessels: losses from water resistance are lower, per unit of cargo, for bigger ships. Likewise, larger furnaces waste less of their heat. And so on. Running things at scale therefore tends to lower unit costs.
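A quick back-of-the-envelope check of the container intuition (a sketch assuming a cube-shaped container; the constant factors don't matter, only the scaling):

```python
# Back-of-the-envelope scaling check, assuming a cube-shaped container:
# material needed scales with surface area (~ volume^(2/3)), so material
# per unit of stored volume falls as the container grows.

def material_per_unit_volume(volume):
    side = volume ** (1 / 3)
    surface_area = 6 * side**2
    return surface_area / volume   # proportional to volume^(-1/3)

for v in (1, 8, 1_000, 1_000_000):
    print(v, material_per_unit_volume(v))   # 6.0, 3.0, ~0.6, ~0.06
```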
Another important consequence of scale is that the cost of producing nonrivalrous goods, such as ideas, can be amortized over a larger user base. The more people there are, the more brains that can produce inventions—and the greater the value of any given invention, since it can be used to benefit more people. So the larger the world population, the faster we should expect the rate of intellectual and technological progress to be; and hence also the rate of economic growth. But this is not exactly right. We should rather say: the larger the world population, the stronger we may expect the drivers of intellectual and technological progress to be. The actual rate of progress would also depend on how hard it is to make progress. And that will vary over time. In particular, we may expect it to get harder over time, as the lowest-hanging fruits are picked first.
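One standard way to formalize this point is an "ideas production function" in the spirit of semi-endogenous growth models. This is my own illustrative assumption, not the book's formalism: progress speeds up with population, but slows as the knowledge stock grows and the low-hanging fruit gets picked.

```python
# Illustrative "ideas production function" (my assumption, not the book's):
# the growth of the knowledge stock A rises with population L but falls as A
# grows, because the easiest discoveries are made first (phi < 1).

def knowledge_growth_rate(A, L, delta=1e-7, phi=-0.5):
    return delta * L * A**phi / A   # proportional growth rate of A

# Larger population -> stronger drivers of progress (difficulty held fixed):
for L in (1e6, 2e6, 4e6):
    print(f"L={L:.0e}: growth rate {knowledge_growth_rate(A=1.0, L=L):.3f}")

# Larger existing knowledge stock -> slower progress (population held fixed):
for A in (1.0, 10.0, 100.0):
    print(f"A={A}: growth rate {knowledge_growth_rate(A=A, L=1e6):.5f}")
```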
Technological maturity: A condition in which a set of capabilities exists that affords a level of control over nature that is close to the maximum that could be achieved in the fullness of time.
Manufacturing & robotics: High-throughput atomically precise manufacturing;54 distributed robotics systems at various scales, including with molecular-scale actuators.
Artificial intelligence: Machine superintelligence that vastly exceeds human abilities in all cognitive domains; precision-engineered AI motivation.
Transportation & aerospace: von Neumann probes (self-replicating space colonization machines that can travel at a substantial fraction of the speed of light); space habitats (e.g. terraforming suitable planets or free-floating platforms such as O’Neill cylinders); Dyson spheres (for harvesting the energy output of stars).
Virtual reality & computation: Realistic simulations (of realities that to human-level occupants are indistinguishable from physical reality, or of rich multimodal alternative fantasy worlds); arbitrary sensory inputs; computer hardware of sufficient efficiency to enable terrestrial resources to implement vast numbers of fast superintelligences and ancestor simulations.
Medicine & biology: Cures for all diseases; reversal of aging; reanimation of cryonics patients; full control of genetics and reproduction; redesign of organisms and ecosystems.
Mind engineering: Cognitive enhancement; precision control of hedonic states, motivation, mood, personality, focus, etc.; high-bandwidth brain-computer interconnects; many forms of biological brain editing; digital minds that are conscious, in many varieties; uploading of biological brains into computers.
Sensors & security: Ubiquitous fine-grained real-time multi-sensor monitoring and interpretation; error-free replication of critical robotic and AI control systems; aligned police-bots and automatic treaty enforcement.
Advances in coordination could even be used to stop further advances in coordination, locking in a condition that is essentially uncoordinated, modulo whatever limited forms of coordination are necessary for the anarchy to be perpetually preserved. There are many examples of anti-coordination mechanisms in today’s world: they are top-down, as when antitrust regulators make it harder for firms to collude; and bottom-up, such as when publics roiled by nationalist sentiment make it harder for two antagonistic countries to negotiate an end to their hostilities.
It is possible that a civilization might “tunnel through” a prudential barrier, quantum-style, if the civilization is sufficiently irrational or uncoordinated. It might then take risks that it is imprudent for it to take, and get lucky. I’m not sure we would be where we are today had it not been for such reckless tunneling in the past. There could also be prudential barriers that are high but not infinitely high: bandpass filters that block civilizations only within a certain range of epistemic sophistication—those that are too clever and coordinated to simply tunnel through yet not clever enough to climb over. Consider a bottle of liquid labeled “dihydrogen monoxide”. A thirsty infant will gladly drink it, since they can’t read the text. So will a thirsty chemist, since they understand that it is just water. But the slightly educated midwit will refuse to imbibe, in view of the scary-looking nomenclature. This is the bracket, by the way, which many of you are set to enter upon the conferral of your degrees.
If any of you have the desire to be the first to discover some fundamental truth about the universe, chances are that you’ve been scooped. Somewhere out there in the infinite expanse of spacetime, some alien Archimedes or AI-Einstein has already discovered whatever it is that you will discover. But even if you have the more modest goal of merely being the first in our civilization to discover some important new truth, this too will become harder, and eventually impossible—both because superintelligent AIs will leave our own intellects far behind, and also because, increasingly, the most important fundamental truths will already have been discovered.
Another way in which humans might have an epistemic advantage is as sources of certain kinds of data. AIs will outstrip us in intelligence and general knowledge, but it is possible that we will still have something to contribute when it comes to information about ourselves—about our memories, preferences, dispositions, and choices. We have a kind of privileged access to some of this information, and one could imagine humans getting paid for conveying it to the machines, by providing verbal descriptions or allowing ourselves to be studied. Again, this opportunity to make a living as primary sources of data about human characteristics might be temporary. There tends to be diminishing returns to data about a given system, and a growing amount of data might also end up in the public domain, reducing the value of additional data feeds. Eventually, superintelligent AIs may construct such accurate models of human beings that they need little or no additional input from us to be able to predict our thoughts and desires. Not only might they know us better than we know ourselves, they may know us so well that there is nothing we could tell them that would significantly add to their knowledge. We may come to rely on AI recommendations and evaluations, which we may find to be more consistent and predictive than our own snap judgments about which decisions would be in our best long-term interest (or about what we ourselves would have decided if we had put in the time and effort to carefully reflect on the options in light of all the relevant facts). Even the onus of making decisions about what we want may thus ultimately be lifted from our shoulders.
This made me wonder how much knowledge there actually is out there. Without any method of combining what different communities have discovered, not only do we not know much, we don’t even know what we know. Could one build something to solve this problem? What would it look like? If it worked, would it cause the world soul to wake up?
Some of these aspects of the skill and effort of shopping are already being undercut by recommender systems and other functionalities that are becoming available thanks to progress in AI. Instead of the shopper having to visit many boutiques or having to browse up and down the aisles of a department store, they can visit a single online vendor. Offers predicted to be of greatest interest to the customer are brought to their attention. Let’s extrapolate this a bit. If the recommender system is sufficiently capable, it would remove the need for exploration entirely. The system would know your tastes and offer suggestions that you like better than whatever you would have picked out yourself. Then what would be the point of you searching through the inventory yourself? Furthermore, if the AI could model your purchase decisions with sufficient accuracy, there would be no need for you to even look at the suggestions. It could simply buy them on your behalf.
Supposing there were some demand for it, it would still be possible to go shopping the old-fashioned way. You could choose to drive to the store, spend time trying to find something you want (perhaps to discover that the store doesn’t stock your size or preferred color), wait in line to pay for it, and finally schlep it all home in plastic bags. If you do that, you end up with a purchase that you will like less than if you let your AI assistant handle it all. Shopping in this old-fashioned way would have something Rube Goldberg–esque about it. Yes, you could do it. But the pointlessness of it all—the extra hassle and effort only to obtain an inferior result… when this ghostly pointlessness is staring you in the face the entire time with its empty eye sockets, would not the allure of the activity be drained out? To the point where most people would cease to bother doing it?
Let us consider a different kind of activity: going to the gym. Here, at least, a task that cannot be automated! No robot can ever take your place on the elliptical. To gain the physical and mental benefits of exercise, you have to do it yourself. Perhaps, then, we’ve found our platinum—an activity that is completely resistant to the purpose-corroding acid of technological convenience? On closer inspection, this hope proves illusory. While it is true that you cannot hire someone else or buy a robot to do your exercise for you, there are other solutions that would enable you to accomplish the common functions of exercise without breaking a sweat. With advanced-enough technology, the health benefits and physiological effects of a workout could be induced by artificial means, such as drugs (safe and free of side-effects), or gene therapy, or medical nanobots that keep you in perfect shape regardless of your eating and drinking habits and your sedentary lifestyle. This holds for the mental benefits of exercise, too. The endorphin release that is triggered by physical exertion could be induced pharmacologically. Likewise for whatever other mind-clearing, de-stressing, and revitalizing effects that exercisers enjoy: all available in a pill or one-off injection of nanomedicine.103 Begone muscle soreness, strains, calluses, and piles of sodden gym gear! Welcome the effortless sixpack and the VO2 max of a Tour de France cyclist!
On objective functional characteristics (beauty, charm, virtue, humor, faithfulness, affection, etc.), natural persons would be outclassed. Artificial people would win any fair contest and comparison. They would be better.
Well, we have a complication here. “Doing it because you enjoy it” seems to mean that you’re doing it as a means to experiencing pleasure or positive affect. But at technological maturity, there would be more efficient paths toward that outcome. You could take a superdrug that has no side-effects, or reprogram your brain so that it experiences pleasure all the time independently of whether you are doing any “fun” activities or not.
Consider how much information there is in a book. Let’s say it has 100,000 words. An average word is about 5 characters, and each character is 8 bits. So that would be 4 megabits. With compression, it would be way less. There’s no way you could represent all the contents of all our experiences that we’ve had in our lifetimes with that few bits.
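The arithmetic checks out; here it is spelled out (the word count, characters per word, and bits per character are the passage's own assumptions):

```python
# Spelling out the passage's own back-of-the-envelope figures.
words = 100_000
chars_per_word = 5
bits_per_char = 8

bits = words * chars_per_word * bits_per_char
print(bits)            # 4,000,000 bits = 4 megabits (about 0.5 megabytes)
print(bits / 8 / 1e6)  # 0.5 MB before compression; compressed text would be far smaller
```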
This is the difference between playing a movie of a computation and actually implementing the computation. In the movie, each frame might contain a picture of the state of the memory cells. If you play the movie, you would see a sequence of pictures of successive states of the memory cells. But if, while the movie was playing, you went in and edited one of the frames, the later frames would not change. So in a movie of a simple arithmetic computation, one frame might depict “2+2”, and the next frame might depict “4”. But if you edited the first frame to “2+3”, the second frame would still depict “4”. This is in contrast to if you are actually implementing the computation. If instead of a movie reel, you were using a pocket calculator, which does implement the computation, then in the time step after you had edited the input, the screen would depict a “5”.
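A minimal sketch of the distinction, with the "movie" as a fixed list of recorded frames and the "implementation" as something that actually computes the later state from the earlier one (the framing in code is mine, not the book's):

```python
# "Movie": a fixed sequence of recorded frames. Editing one frame does not
# propagate to later frames.
movie = ["2+2", "4"]
movie[0] = "2+3"
print(movie)              # ['2+3', '4'] -- the second frame still shows 4

# "Implementation": the later state is actually computed from the earlier
# one, so editing the input changes the output.
expression = "2+2"
print(eval(expression))   # 4
expression = "2+3"
print(eval(expression))   # 5 -- the calculator responds to the edit
```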
In a world where there are multiple agents, with sometimes opposing goals, general increases in plasticity do not necessarily make anybody better off. Technological advancement could make us all worse off, for example by enabling mischief to be conducted more easily and on a larger scale.
Consider, for instance, pain, which serves as a warning signal of bodily damage. There are rare individuals born without the ability to feel pain, and this is a dangerous condition. People with congenital analgesia may walk around on broken bones or stick their hand into boiling water. They often take excessive risk and fail to protect their bodies, and meet an early demise. If we want to get rid of pain, therefore, we need some way of dealing with this problem. Fortunately one can think of several possible solutions. One would be to design the environment so that it would be safe even for people with diminished or absent nociception. Alternatively, improved medicine for repairing or regenerating damaged tissues and joints might make the frequent injuries less of a concern. But another approach would be to create a mechanism that serves the same function as pain but without being painful. Imagine an “exoskin”: a layer of nanotech sensors so thin that we can’t feel it or see it, but which monitors our skin surface for noxious stimuli. If we put our hand on a hot plate, a bright red warning message flashes in our visual field and we hear a loud noise. Simultaneously, the mechanism contracts our muscle fibers so as to make our hand withdraw, giving us time to consider our next move. Another component of the system might surveil internal tissues and organs, and flag any condition that requires remedial action. Such an exoskin is not so different in principle from familiar devices such as carbon monoxide detectors, wearable dosimeters, and continuous glucose monitors. The notion of outfitting a biological organism with a full suite of artificial sensors for noxious exposure does seem somewhat steampunky, although with advanced nanotechnology the implementation could be perfectly inconspicuous. And of course, if we become fully digital, many things can be accomplished far more elegantly.
Human Nature & Psychology
I mentioned a third reason why we might continue to work hard even at very high income levels: namely, that our appetites may be relative in a way that makes them collectively insatiable. Suppose that we desire that we have more than others. We might desire this either because we value relative standing as a final good; or, alternatively, because we hope to derive advantages from our elevated standing—such as the perks attendant on having high social status, or the security one might hope to attain by being better resourced than one’s adversaries. Such relative desires could then provide an inexhaustible source of motivation. Even if our income rises to astronomical levels, even if we have swimming pools full of cash, we still need more: for only thus can we maintain our relative standing in scenarios where the income of our rivals grows commensurately.16 Notice, by the way, that insofar as we crave position—whether for its own sake or as a means to other goods—we could all stand to benefit from coordinating to reduce our efforts. We could create public holidays, legislate an 8-hour work day, or a 4-hour work day. We could impose steeply progressive taxes on labor income. In principle, such measures could preserve the rankings of everybody involved and achieve the same relative outcomes at a reduced price of sweat and toil.
It is also possible to have a desire for improvement per se: to desire that tomorrow we have more than we have today. This might sound like a strange thing to want. But it reflects an important property of the human affective system—the fact that our hedonic response mechanism acclimates to gains. We begin taking our new acquisitions for granted, and the initial thrill wears off. Imagine how elated you would be now if this kind of habituation didn’t happen: if the joy you felt when you got your first toy truck remained undiminished to this day, and all subsequent joys—your first pair of skis, your first bicycle, your first kiss, your first promotion—kept stacking on top of each other. You’d be over the moon!
The desire for relative standing is therefore a promising source of motivation that could spur work and exertion even in a context where “man’s economic problem” has been solved. Provided only that other people’s incomes keep rising roughly in tandem with our own, vanity could prevent us from slacking off no matter how rich we get.
You all know what complements and substitutes are in economics, right? We say that X is a complement to Y if having more of Y makes extra units of X more valuable. A left shoe is a complement to a right shoe. If, instead, having more of X makes Y less valuable, we say that X and Y are substitutes. A lighter is a substitute for a box of matches.
Inequality could, however, raise average income in the Malthusian equilibrium if we assume that the relationship between income and fitness is not linear. This is easiest to see if we consider an extreme example: a king and a queen who have an income 100,000 times larger than that of a peasant couple—yet the regal pair would not have 100,000 times more surviving children. The surplus concentrated at the top thus converts into far fewer additional mouths than it would if spread among the poor, so the equilibrium population is smaller and the average income correspondingly higher. So inequality probably would increase average income in the Malthusian steady state.
It is a further question what a given level of material welfare corresponds to in terms of subjective well-being. Individual psychology has a huge impact here. Two persons can live in virtually identical conditions—have similar jobs, health, family situations, and so on—and yet one of them may be far happier than the other. Some people are by temperament leaden, anxious, or ill-at-ease; others, blessed with natural buoyancy, remain cheerful and untroubled even when their objective circumstances are quite dire.
“You know what the fellow said—in Italy, for thirty years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love, they had five hundred years of democracy and peace—and what did that produce? The cuckoo clock.”
Theoretically, if we focus our evaluation only on people who exist today, it might be possible to lift everybody up the status hierarchy by creating new people at the bottom of the hierarchy. Everyone who now exists could then have a growing number of inferiors to look down upon. A strategy of this sort is used today by managers in bureaucratic organizations, who sometimes seek to hire as many subordinates as possible in order to exalt their own position within the corporate structure.
I am able to not say a great number of things. Generally speaking, when there is something that you think that I should be saying, there is probably little reason for me to say it—considering that you are already thinking of it on your own.
Well, actually, the full name we’ve given ourselves is Homo sapiens sapiens. Really. “Hello ancient alien megaminds who’ve crossed intergalactic voids in search of fellowship and peers—welcome to your new housemates: behold how we’ve organized our traffic flow, how we’re simultaneously ruining both the planet and our own health; harken the honking of our horns as we sit through our collective catalepsy. Do come in and let us tell you what’s what. We are The Wise Wise Human. But you can call us Wisdom Squared.”
The human identity might be ascertainable only via expert certification, but it could still be an important determinant of value, just as an authentic artwork by a master is worth a lot more than a nearly indistinguishable replica—perhaps because it is more prestigious to own the original than the copy.
What we can say is that it seems plausible that for some people, perhaps a significant fraction of the current population, a sudden leap into great wealth and complete leisure would not be an unalloyed blessing; and for some, it could be ruinous.
To many, shopping is a necessary evil; but there are also folk who enjoy this activity, and who would gladly spend more time engaging in it if they had money to spare and they didn’t
The human brain is of course quite unlike a regular digital computer, where standardized data representation formats and file transfer protocols make it easy to swap software in and out and to share it between different processors. By contrast, each human brain is unique. Even a simple concept that we all share, such as the concept of a chair, is implemented by an idiosyncratic constellation of neural connections in each person—the precise patterning of the neural connections encoding the concept is contingent on the details of that individual’s past sensory experience, their innate brain wiring and neurochemistry, and an incalculable host of stochastic factors. One therefore cannot simply “copy and paste” the concept of a chair from one brain to another without performing a complicated synaptic-level translation, from the “neuralese” of one brain to the quite different “neuralese” of another. Human brains can perform this translation themselves—slowly and imperfectly. This is what happens when we communicate using language. Some mental content in one brain, represented using that brain’s idiosyncratic neuronal machinery, is first projected down to a low-dimensional symbolic representation consisting of a string of words in natural language; and then the receiving brain has to unpack this radically impoverished linguistic representation by trying to infer which configurations of its own idiosyncratic neural machinery best match those representations in the sender’s brain that might have produced the perceived words and sentences. If the act of communication is successful, the receiving brain ends up with neural circuitry that shares some structural similarities with the circuitry in the sender’s brain: enough so as to give the receiver some of the capabilities that the sender wanted to impart. For large or complicated messages, such as when a professor of organic chemistry wishes to bring their students up to their own level of expertise, this process can take years—and even then the result is all too often disappointing.
Hedonic valence

Here I want to say that referring to the option that involves artificially inducing contentment as one in which we become “mere pleasure-blobs” really does not do justice to what is on offer. Perhaps a life as a pleasure-blob is not everything we aspire to or the very best that we could possibly hope to achieve, but there need be nothing “mere” about it. We might say more on this later, but some preliminary remarks:

(a) A common mistake in evaluating possible futures is to focus on how good those futures are for us now, in the sense of how interesting it is for us to contemplate a given future or how suitable it is as a setting for entertaining stories and morality tales that we wish to tell each other. But the question before us here is a very different one: not how interesting a future is to look at, but how good it is to live in. We must remember that “interesting times” are often horrible times for those who have to live through them. An uneventful and orderly future, in contrast, can be a great place to inhabit. And even if its occupants should be somewhat blobbified, even if it would not offer the most inspiring backdrop for grand dramatical narratives, it could provide a state of continual contentment and pleasurable feeling that is pretty solidly desirable.

(b) It is a commonplace that the heedless pursuit of pleasure is often counterproductive. “The search for happiness is one of the chief sources of unhappiness.” We think of a drug addict desperate for their next hit, and it does not seem like a good life. In fact, it is in all likelihood a life of suffering, punctuated by brief moments of drug-induced relief. Probably nobody in this room would be eager to swap their own life for that of a hardcore addict. Traditional wisdom therefore recommends taking a more oblique approach in our pursuit of happiness. “Happiness—a butterfly, which, when pursued, seems always just beyond your grasp; but if you sit down quietly, may alight upon you.” This wisdom would lead us astray, however, if we applied it to a scenario in which its underlying premiss—that chasing after pleasure is self-defeating—does not obtain: such as one that plays out in a setting where the technology exists whereby one really can induce pleasure by directly aiming at it, and where one can do so reliably and lastingly. Somebody might have the intuition that “a world in which mind engineering is used to induce pleasure would feel stale and unsatisfying after a while”. But this intuition is simply false.

(c) Suppose that we became acquainted with the quality and the quantity of super-pleasure that could be ours at technological maturity. At present, we are opining on the matter without being directly acquainted with the thing under evaluation. If, however, we gained direct experience, it is plausible that we would most swiftly come around to the view that it was extremely desirable to experience and to keep experiencing that pleasure. And it is not obvious that, in this case, the process of becoming more intimately acquainted with the mental state whose desirability we are trying to ascertain would necessarily involve a corruption of our ability to judge well.

(d) Also: “You could say I am happy, that I feel good. That I feel surpassing bliss and delight. Yes, but these are words to describe human experience—arrows shot at the moon. “It feels so good that if the sensation were translated into tears of gratitude, rivers would overflow.”
how can I know which thoughts some writer might have written about? And how can I know that my thoughts when I am thinking about my mother are not actually some reader’s thoughts when they are thinking about their mother or about some other random motherlike figure that they create in their imagination?

Firafix: Well?

Tessius: Well, I have a great many thoughts. It would seem unlikely that any writer—any human writer at least—could have thought about all those thoughts and written them down; or that any reader would be conjuring up all these thoughts in the process of reading a novel… And, if I may be brutally honest for a moment, it is also perhaps possible that I might on some occasion have had some fleeting thought that would not have merited being written down… So, er, the fact that I have had all these thoughts, including some that authors would not have deemed significant enough to jot down in their novels or readers to picture in detail in their imagination: this fact would then prove that I am in fact not a character in a novel.
If fictional people became real while somebody was reading about them, they would on average have less power to influence the world than people who are real the entire time, continuously and cumulatively for seven or eight decades. There might be some fictional people who are influential, but mostly the world is run and shaped by nonfictional people. Also, for every fictional character who has influence, you could argue that that influence is also shared by the person who wrote them, the author.
Imagine first that we already had somehow achieved a high degree of cooperation, and that the challenge was to make this stable. We think that one way to do this would be by breeding for cooperativeness. So if somebody is cheating, they wouldn’t be allowed to have offspring, while individuals who are more helpful and cooperative than average could have more offspring. Since we’ve assumed that we have a high degree of cooperation to begin with, people would mostly adhere to this agreement, and they would volunteer to help enforce it if there were any defectors. Each generation would be better at cooperating than the preceding one, and so there would be hope that the arrangement would stick. Of course, along with cooperativeness, there may be other desirable traits that one may also want to select for—vitality, wisdom, ability to thrive on a diet of leaves and grass, and so on.
An important special case of plasticity is that you have the ability to modify yourself in whichever way you want. In one of my early works, I termed this ability autopotency. An autopotent being is one that has complete power over itself, including its internal states. It has the requisite technology, and the know-how to use it, to reconfigure itself as it sees fit, both physically and mentally. Thus, a person who is autopotent could readily redesign herself to feel instant and continuous joy, or to become absorbingly fascinated by stamp collecting, or to assume the shape of a lion.
As you recall, the five rings of defense were: Hedonic valence; Experience texture; Autotelic activity; Artificial purpose; and Sociocultural entanglement.
According to Buddhist thought, we are doomed to experience unsatisfactoriness even if we should be so fortunate as to live under optimal material conditions—with abundant health, wealth, youth, reputation, etc. The root cause of our experience of unsatisfactoriness, on this view, is the role that we allow desire and attachment to play in our existence. And the only way to escape suffering is by eradicating fundamental illusions about the nature of self and reality. We must cease identifying with our desires and let go of our habit of viewing the world through the distorting lens of ego: only then may we see and accept phenomena for what they are; and only then may we find release from our suffering and attain inner peace.
Along similar lines, Arthur Schopenhauer, the great nineteenth-century German pessimist who took inspiration from the Vedic tradition, the Upanishads in particular—a core part of his philosophy centers on a basic predicament: the dilemma we face between the pain that comes from unsatisfied desires and the boredom we experience in the absence of unsatisfied desires: “The most general survey shows us that the two foes of human happiness are pain and boredom. We may go further, and say that in the degree in which we are fortunate enough to get away from the one, we approach the other . . . Accordingly, while the lower classes are engaged in a ceaseless struggle with need, in other words, with pain, the upper carry on a constant and often desperate battle with boredom.”
In principle, there is enormous opportunity to improve our existence by modifying and reengineering our emotional faculties. In practice, there is a considerable likelihood that we would make a hash of ourselves if we proceed down this path too heedlessly and without first attaining a more mature level of insight and wisdom.135 The caution applies especially to modifications of our emotional or volitional nature, since changes that affect what we want could easily become permanent. Not because we wouldn’t be able to change them—with increasingly advanced technology, it should be perfectly feasible to roll back changes made earlier—but because we may not want to change them. (For example, if you changed yourself to want nothing but the maximum number of paperclips, you would not want to change yourself back into a being who wants other things besides paperclips, except in certain very special kinds of circumstances where you expect a greater number of paperclips to come into existence conditional on you thus changing yourself.) This sort of volitional change, therefore, even if not irreversible, may have a tendency to in fact never be reversed.
Just as gazillions of neutrinos pass through our bodies every second without our noticing, so too might the world present us with countless beautiful things at every moment—which our minds are too coarse and insensitive to appreciate.
The value we place on interestingness derives from a social signaling motive. We desire to engage in activities and to be in situations that will enable us to tell a good story about what we’ve been up to, because this increases our social status.
The rut-avoidance hypothesis We tend to get bored if we keep doing the same thing for too long, especially if we don’t see any positive results. This emotional disposition could be evolutionarily useful, not only as a mechanism for encouraging active learning (as per the first hypothesis), but also more specifically to prevent us from persisting in fruitless endeavors or getting stuck in situations that we’ve mistakenly estimated as more propitious than they actually are.
Intrinsification: The process whereby something initially desired as a means to some end eventually comes to be desired for its own sake as an end in itself.
Utopia & Post-Scarcity
The telos of technology, we might say, is to allow us to accomplish more with less effort. If we extrapolate this internal directionality to its logical terminus, we arrive at a condition in which we can accomplish everything with no effort.
For example, suppose there were a series of progressively more expensive medical treatments that each added some interval of healthy life-expectancy, or that made somebody smarter or more physically attractive. For one million dollars, you can live five extra years in perfect health; triple that, and you can add a further five healthy years. Spend a bit more, and make yourself immune to cancer, or get an intelligence enhancement for yourself or one of your children, or improve your looks from a seven to a ten. Under these conditions—which could plausibly be brought about by technological advances—there could remain strong incentives to continue to work long hours, even at very high levels of income. So the future rich may have far more appealing ways to spend their earnings than by filling up their houses, docks, garages, wrists and necks with increasing amounts of today’s rather pathetic luxury goods.
We start with an economy of full human employment. Then the perfect robots are invented. This causes massive amounts of capital to flow into the robotics sector, and the number of robots increases rapidly. It is cheaper to build or rent a robot than to hire a human. Initially, there is a shortage of robots, so they don’t immediately replace all human workers. But as their numbers increase, and their cost goes down, robots replace human workers everywhere. Nevertheless, the average income of humans is high and rising. This is because humans own everything, and the economy is growing rapidly as a result of the successful automation of human labor. Capital and land become exceedingly productive. Capital keeps accumulating; so eventually land is the only scarce input. If you want to visualize this condition, you could imagine that every nook and cranny has been filled with intelligent robots. The robots produce a flow of goods and services for human consumption, and they also build robots and maintain and repair the existing robot fleet. As land becomes scarce, the production of new robots slows, as there is nowhere to put them or no raw materials with which to build them—or, more realistically, nothing for them to do that cannot be equally well done by the already existing robots. Non-physical capital goods might continue to accumulate, goods such as films, novels, and mathematical theorems. There are no jobs and humans don’t work, but in aggregate they earn income from land rents and intellectual property. Average income is extremely high. The model doesn’t say anything about its distribution. Even though economic work is no longer possible for humans, there may continue to be wealth flows between individuals. Impatient individuals sell land and other assets to fuel consumption spurts, while more long-term-oriented individuals save a larger fraction of their investment income in order to grow their wealth and eventually enjoy a larger total amount of consumption. Another way to climb the wealth ranking in this steady state of the economy may be by stealing people’s or countries’ property, or by lobbying governments to redistribute wealth. Gifts and inheritances may also move some wealth around. And beyond these sources of economic mobility, there is always the craps table and the roulette.
The Industrial Revolution is important, since from that point onward economic growth has been rapid enough to outpace population growth, allowing humanity to escape the Malthusian condition: a very great blessing! Although we have only spent a few hundred years in this emancipated condition—and less than that in many parts of the globe— it has nevertheless shaped the life experiences of a significant and rapidly growing proportion of all humans who have ever been born. Of the roughly 100 billion humans who have ever lived, more than 10 billion have been post-Malthusian. Under standard demographic extrapolations, this figure would climb rapidly, since around 5% or 10% of all humans who were ever born are alive right now, and almost all contemporary human populations have been sprung from the Malthusian trap. Thus, maybe 10% of human lives so far have been (or currently are) post-Malthusian; and this fraction is increasing at a rate of about 10 percentage points per century.
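A rough check of the demographic arithmetic, using the passage's own round figures (about 100 billion humans ever born, about 10 billion post-Malthusian so far) plus today's world population of roughly 8 billion:

```python
# Rough check of the passage's demographic arithmetic; the inputs are the
# passage's own round estimates, not precise data.
ever_born = 100e9
post_malthusian_so_far = 10e9
alive_now = 8e9

print(post_malthusian_so_far / ever_born)  # 0.1  -> "maybe 10% of human lives so far"
print(alive_now / ever_born)               # 0.08 -> "around 5% or 10% ... are alive right now"
```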
I think you can make a case that wisdom and wide-scoped cooperativeness are the two qualities currently most needful to secure a great future for our Earth-sprouted civilization. I also think wealth, stability, security, and peace are better for wisdom and global cooperation than are their opposites. And so we should welcome advancements in these directions, not only because they are good for us now, but also because they are good for humanity’s future.
But that is not the right perspective from which to judge a utopia. For the question is not “How interesting is a utopia to look at?” but rather “How good is it to live in?”.
Without progress in the way that our civilization governs itself, increases in our material powers could easily make things worse instead of better; and even if a utopian condition were attained, it would likely be unstable and short-lived unless, at a minimum, the most serious of our global coordination problems were also solved.
However, while we can probably continue to ride this rocket for a while, eventually depletion effects will dominate scale effects. Technological inventions will become harder to make, as the lowest-hanging fruits are picked; and land (resources we cannot produce more of) will become scarce. Even space colonization can produce at best a polynomial growth in land, assuming we are limited by the speed of light—whereas population growth can easily be exponential, making this an ultimately unwinnable race. Eventually the mouths to feed will outnumber the loaves of bread to put in them, unless we exit the competitive regime of unrestricted reproduction.
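A small sketch of why this race is ultimately unwinnable: resources reachable at light speed grow at most polynomially (roughly with the cube of time, the volume of an expanding sphere of colonization), while unrestricted population growth is exponential. The specific rates and constants below are arbitrary illustrative assumptions of mine.

```python
# Illustrative race between polynomial "land" growth and exponential
# population growth. Constants are arbitrary assumptions for the sketch.

def reachable_resources(t, k=1_000.0):
    return k * t**3              # polynomial (cubic) growth

def population(t, p0=1.0, r=0.02):
    return p0 * (1 + r)**t       # exponential growth at 2% per period

t = 1
while reachable_resources(t) >= population(t):
    t += 1
print(f"population overtakes reachable resources around t = {t}")
# However large k is made, the exponential eventually wins.
```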
Although the amount of stuff that could be produced is finite, it is possible to conceive of some dimensions along which an aggregate measure could continue to grow indefinitely. For instance, if we imagine a being whose utility is a function of how far apart things are, that being’s utility may continue to increase without bound, as the spatial fabric of the universe continues to stretch at an accelerating pace. Slightly less preposterously, we may consider a being whose utility is a linear function of the total amount of (some kind of) information that has been accumulated by our Earth-originating civilization—and perhaps the memory capacity in the accessible universe is unbounded, if cosmic expansion enables spatial encoding schemes to store an indefinitely increasing number of bits; although there might be reasons this wouldn’t really work in the truly long run.
There could be trajectory traps along the path of humanity’s future development. If we are unlucky, it could even turn out that all plausible paths toward a truly wonderful utopia are blocked—not because utopia wouldn’t be a technologically, economically, and politically possible and sustainable condition, but because all the realistic paths from here to there lead into some inevitable trajectory trap, wherein our civilization gets destroyed, stuck, or deflected. Fortunately, it does not appear as if all trajectories between here and utopia are trapped—at least, we don’t have strong evidence to rule out the possibility that at least one path lies open.
I have an idea how to solve the status problem in utopia. What if we create new people who are designed in such a way that they have a desire for low status? This should be possible at technological maturity, right? Then the status desires of the existing population could be satisfied, and the new people would also be satisfied! Both average and total preference-satisfaction would increase.
automation limits can—paradoxically—present challenges for a utopian vision in both of these ways: by not letting us offload our workloads to machines, so that we have to keep carrying these burdens ourselves; or by letting us offload our workloads to machines, so that we become useless and unemployed.
How Much Is Enough? is a book by Robert and Edward Skidelsky in which they propose reforms to the current capitalist system to reduce its emphasis on growth and consumption and to make it easier for people to escape the rat race and enjoy more leisure.
Unemployment raises the risk of alcoholism, depression, and death. But the scenario we are considering is different in several ways. First, and most obviously, losing one’s job today means, for many, either actual financial hardship or stress and anxiety over the increased risk of encountering such hardship later—whereas, in our hypothetical, we’re supposing that everybody has a secure high level of income. Second, job loss is today often associated with stigma—whereas this would not apply if everybody, or almost everybody, is out of a job, as in our scenario. Third and relatedly, job loss today often has a strong negative effect on self-image, partly because of the aforementioned stigma and partly because many people’s identity is tied up in their role of being a breadwinner for the family or being a success in the labor market—whereas, in our scenario, where those roles are simply nonexistent, people would form their identities around other attributes and relationships. Fourth, becoming unemployed today often means losing social connections to work colleagues, and more generally it can make it harder to relate socially to people who have jobs—whereas, again, this does not apply if we are all unemployed. Fifth, if we simply compare the lives and circumstances of the employed and the unemployed, we can be misled unless we take into account that there may be selection effects at play. Individuals with less enterprise, drive, education, health, emotional stability, etc. are more likely to become unemployed. If we observe a different distribution of those characteristics among people who have just lost their jobs, it is quite possible that some of the causation goes in the other direction—whereas, in the case of universal unemployment, the unemployed would be identical to the general population.
The education system is just one aspect of society that would need reform. More broadly, we’d need a transformation of culture and social values. A move away from efficiency, usefulness, profit, and the struggle for scarce resources; a move toward appreciation, gratitude, self-directed activity, and play. A culture that places a premium on fun, on appreciating beauty, on practices conducive to health and spiritual growth, and that encourages people to take pride in living well.
Let’s call these kinds of visions, which focus on how people (and animals and nature) could interact in ways that make for an allegedly more harmonious way of living, governance & culture utopias. They hold up images of how society could be “run better”, if we take this in the broadest sense, as encompassing not just laws and government policies but also customs, norms, habitual manners of going about things, internalized ways of viewing others, occupational and gender roles, and so forth. Sadly, when people have had the opportunity to put governance & culture utopian visions into practice, the endeavors have often fallen short of expectations, with typical outcomes ranging from disappointing to atrocious.
We don’t need to be too strict with these definitions. I mean, whether our governance and our culture are harmonious, fair, and conducive to flourishing is a matter of degree, as is the cornucopian character of our society. There is also ambiguity in the notion of economic abundance: exactly what kinds of goods and affordances are “economic”? There are many things you can’t buy even with infinite money—for example because they haven’t been invented yet. But for our purposes it may be sufficient to say that a post-scarcity utopia is one in which it is easy to meet everybody’s basic material needs as traditionally conceived—food, housing, transportation, etc. We may toss schools and hospitals and some other such services into the mix as well. And we can then observe that, in developed countries, we have already come a long way toward realizing this type of abundance—say, more than halfway toward a post-scarcity utopia. This estimate obviously omits our animal brothers and sisters, for the vast majority of whom the situation is still most dire and in urgent need of amelioration.
I go further and assert that as we look deeper into the future, any possibility that is not radical is not realistic.
Today’s societies may set themselves goals such as clean air, good schools, high-quality healthcare, adequate pensions, an efficient transportation system, and so forth. Once those goals have been achieved, ambition could turn in more cultural directions: let’s say, to create a society where people care about one another, where individual differences are recognized and celebrated, where many people come together to create large happenings, where customs are continuously refined to make daily interactions more meaningful and fulfilling, and where there are constantly renewed efforts to deepen and broaden the public discourse about art, religion, ethics, literature, media, technology, politics, science, history, and philosophy. And so on and so forth. Again, a significant transition—but, really, an opportunity rather than a problem.
It is worth noting that in some respects a leisure utopia would be closer to the natural human condition than is our current world. I don’t think being woken by an alarm clock and summoned to sit behind a desk processing paperwork for an insurance agency or some other bureaucratic behemoth is at all natural. Some researchers have suggested that our Stone Age forebears had plenty of free time, that they may have worked as little as four hours a day. I’m a bit skeptical of the number, but what is likely true is that the boundary between work and leisure was not so clearly drawn in those primitive societies. When people’s instincts are well-matched to their environment, maybe they mostly just do what they feel like in the moment, and that happens to coincide with what is useful. We, by contrast, we Homo cubiculi, needs must rely on self-discipline and structured incentives to get us to perform the requisite labors.
At the present, and throughout history, there are many pressing tasks that we humans must do ourselves, and there are many big challenges that we confront together. These tasks and challenges give structure, purpose, and meaning to our lives. But technological progress (and, to a more limited extent, capital accumulation) enables us to achieve more of what we want with less effort. In the limit, with perfect technology and abundant capital, we are able to get everything we want with no effort. We will then have nothing to strive for. We will then either be bored out of our minds or transform ourselves into “pleasure blobs”, passive minds that experience an artificially induced sense of contentment. Either way, a dystopian future awaits. And those would be the best-case scenarios! It would hardly be reassuring, for example, to be told that we don’t need to worry about deep redundancy because our high-tech civilization will come tumbling down in a cataclysm before we reach technological maturity. At the heart of the argument here lies a pessimistic view of human nature. Basically: we’re unfit to inhabit a perfect world.
Why should the experiences of the utopians, while charged with positive hedonic valence, not also possess rich, varied, and aesthetically ace content—far more so than the comparatively tawdry experiences that occasionally impress us in the present era?
The environment of the utopians could thus be one of heartrending beauty. Appreciators of art and architecture or natural landscapes could feast their eyes on the most excellent sights; music lovers could thrill their ears with brilliantly captivating sounds and melodies; gourmets could chomp their way through Xanadus of culinary wonders. And so on. Each day could be arranged with artistic ingenuity and turn out as a little masterpiece in itself, while adding to an ever-rising larger structure into which all the days fit together perfectly, each in its unique way: like carefully carved and coordinated stones that together compose a great cathedral of life. Furthermore, the utopians could enjoy enhanced perceptual capabilities; and, more importantly, they could be endowed with superlative aesthetic sensibilities that enable them to actually apprehend more of the beauty and significance that suffuse their sensory streams and their environment. If we were teleported into their world, without receiving these upgrades of our subjectivity, we would not appreciate it as they do. We may see some pretty-looking wildflowers over there. They would come closer to seeing heaven in those same flowers.
So, if the utopians understand that their lives would go better if they did something, this would give them a reason to do something. It wouldn’t be an instrumental reason. They wouldn’t be engaging in the activity in order to produce some output. Rather, they would be engaging in the activity because the activity itself is valuable, or directly value-adding to their life. The activity is autotelic: it is done for its own sake.
Many of the leisure activities people do today, they do because they are fun—they engage in them as a means to experiencing pleasure. But this, by itself, would not be a reason to continue doing them in a post-instrumental world. So we may then ask, would it be a problem if people in utopia just stopped doing things and became inert recipients of pleasure and various forms of passive experience? Some people might think that this would be a problem—such a passive life just wouldn’t be as good, other things equal, as a life that also included more active forms of experience and participation. A life full of pleasure and passive experience would still be missing something important. And in response, I say that if that is indeed so, then let us note that the utopians can add active experience to their mix: they would have reason to engage in activities in order to realize whatever value activity has (beyond its ability to confer instrumental benefits, including the instrumental benefit of generating pleasure).
One might object to this proposal of creating artificial purpose that it would in effect amount to suspending utopia, at least locally. This is most clearly the case if the artificial purpose is created by entering a “hardcore” mode, in which the otherwise universally available means of automatically achieving outcomes have been removed—generating a pocket of non-utopian scarcity and danger. But perhaps one could argue that there would also be an element of suspension in the case where the artificial purpose is achieved by inducing a particular desire that requires an exertion of effort, such as in the case of the football player who comes to have the desire to help his team win using only fair and square means.
Now I want to point out that there is another important consequence of technological maturity, besides the obviation of human effort. A technologically mature world is plastic. I mean this in the sense that it has affordances that make it easy to achieve any preferred local configuration. Let us say that we have some quantity of basic physical resources: a room full of various kinds of atoms and some source of energy. We also have some preferences about how these resources should be organized: we wish that the atoms in the room should be arranged so as to constitute a desk, a computer, a well-drafted fireplace, and a puppy labradoodle. In a fully plastic world, it would be possible to simply speak a command—a sentence in natural language expressing the desire—and, voilà, the contents of the room would be swiftly and automatically reorganized into the preferred configuration. Perhaps you need to wait twenty minutes, and perhaps there is a bit of waste heat escaping through the walls: but, when you open the door, you find that everything is set up precisely as you wished. There is even a vase with fresh-cut tulips on the desk, something you didn’t explicitly ask for but which was somehow implicit in your request.
One thing to note about this space is that it is not convex with respect to goodness. By this I mean that moving closer to utopia from our current position does not necessarily make things better. It could easily be the case, for example, that some advanced technological capability is beneficial only once the world has achieved enough cooperation to avoid using that capacity for war and oppression. Likewise, some advanced facility for cooperation might be beneficial only in societies that exceed some minimum threshold of wisdom—without which the cooperative equilibrium that would result may serve only to buttress some prevailing prejudice or misconception, and permanently lock in a flawed status quo. Another thing to note is that the paths that lead to the quickest gains in welfare could be different from the paths that lead ultimately, most expeditiously or with the greatest surety, to utopia. I mean, it is possible that the course of speediest improvement leads to a merely local optimum. When this is the case, there could be a tension between the interests of a relatively primitive generation, such as ours, and the interests of future utopians, whose coming into existence might require some sacrifice and forbearance on the part of their ancestors.
On the whole, people do not appear willing to make much of a sacrifice for the sake of posterity. But we could perhaps hope that either (a) creating utopia is easy, or (b) the steps needed to get there coincide with some of the steps that people are motivated to take for other reasons, or (c) we are already in utopia, or (d) we get outside help—or (e) we find some way to collimate and accumulate the parts of our wills that do share a love of utopia. Maybe these parts, though individually weak, could, with the right mechanism, be made to combine constructively (between people and over time) in a way that would let them have a greater influence on our common future than the myopic, selfish, and partisan desires that largely rule the present.
Governance & culture utopia: The traditional type, what we could also (optimistically) call “post-misrule” utopias. Laws and customs are ideal; society is well-organized. Does not by definition imply boring and stultifying, although that is a common failure mode. Another common failure mode is being based on false views about human nature, or making gross errors of economics or political science. Another typical flaw is a failure to recognize the moral patiency and needs of some oppressed group, such as animals. Comes in many flavors—feminist, Marxist, scientific/technological, ecological, religious. (And now, most recently, crypto?)
Post-scarcity utopia: Featuring an abundance of material goods and services—food, electronics, transportation, housing, schools and hospitals, etc. Everybody can have plenty of everything (with the important exception of positional goods). Many governance & culture utopias are also, to varying degrees, post-scarcity. In reality, if we focus just on human beings, Earth is already, what—about two-thirds of the way there, compared to the baseline of a typical hunter-gatherer ancestor?
Post-work utopia: Full automation. This means there’s no need for human economic labor, though attempts to imagine this condition are often half-hearted and assume a continued need for human labor for cultural production. In post-scarcity utopia, there is plenty, but producing it might require work. In post-work utopia, there is little or no human work, either because machines give us effortless abundance, or because of a choice to live frugally with maximal leisure. Unclear how far toward a post-work condition we’ve come, given tradeoffs between income and leisure. Many people could probably find some way to eke out at least a hunter-gatherer level of material welfare while doing scarcely any work, although perhaps not without significant sacrifices of social status or community participation. Those with a few mil in their investment portfolios could afford much more, yet often keep working regardless, mostly for the social rewards.
Post-instrumental utopia: No instrumental need for any human effort. Implies post-work but goes beyond it in also assuming no instrumental need for any non-economic work either—no need to exercise to keep fit, for example; no need to study to learn; no need to actively evaluate and select in order to obtain the kinds of food, shelter, music, and clothing that you prefer. This is a far more radical conception than the preceding three types of utopia, and has been much less explored.
Plastic utopia: Any preferred local configuration can be effortlessly achieved, except when blocked by some other agent. Autopotency is a special case of this—a being’s ability to reshape itself as it wills. This goes beyond post-instrumentality, which implies only that whatever can be accomplished can be accomplished without effort, but doesn’t necessarily entail any expansion of what can be accomplished. In a plastic condition, the technologically possible becomes identical to the physically possible (at least locally). An important consequence of utopian plasticity is that it is likely to lead to a metamorphic humanity: beings that have through their technological advances been profoundly transformed. Plastic utopias have been very little explored, except in theological contexts and in some works of science fiction.
In classical governance & culture dystopias, for example, the problematic pattern might be oppressive totalitarianism (Nineteen Eighty-Four) or dehumanizing consumerism (Brave New World). In a post-scarcity dystopia, it could be alienation or social disconnectedness. In a post-work dystopia, the issue might be tedium and indolence. In post-instrumental or plastic dystopias, the problem would be a sense of meaninglessness or of the world becoming uninterestingly arbitrary and untethered.
Dystopias are usually better settings for stories because at least they don’t lack problems. (The usual advice to writers is that “stories require conflict”.) At a minimum, the dystopian order itself is a big problem that a protagonist could struggle against. But this is only true for the first three types of dystopia. Post-instrumentality and plasticity pose difficulties for all attempts at storytelling, whether the setting is presented as positive or negative. This is because the conditions for dramatic agency are undermined, and because realistic portrayals of characters and environments would render them unrelatable and incomprehensible to us.
Even if we did achieve perfection, it would not make us happy. Perhaps it would be… boring to live in a perfect world?
This, at least, is a consequence if we take the currently most favored cosmological models at face value. They suggest that we are living in what I’ve called a Big World: a world that is big enough and locally stochastic enough that it is statistically certain to contain all possible human experiences.
Consciousness & Digital Minds
So, as I was saying, you could always create more people, especially of the digital sort. The number of digital minds you could create is proportional to the amount of computational resources you could deploy, which we can assume is proportional to the amount of money you have to spend.
But if the devices doing the work in this scenario are very sophisticated, it is possible that we should not think of them as mere machines but instead as a new kind of laborer, and that we should also consider the welfare of these digital minds. Although I went off on several tangents last time, I did resist the temptation to expound on the moral and political status of digital minds. Well, let me state that I think this is an important topic and I believe that some types of digital minds could have moral status—potentially very high moral status.
Also, the AI industry, and its customers, seem quite willing to countenance the creation of increasingly sophisticated digital minds that are trained to meekly serve their users without a thought as to their own social position or independent aspiration.
The other way in which our ability to automate could be limited is if there are certain behaviorally specified performances that cannot be achieved without generating conscious experience as a side effect. For example, it could be that any cognitive system that is capable of acting very much like a human being across a very wide set of situations and over extended periods of time, could only do so by performing computations that instantiate phenomenal experience. I’m not at this point taking a stand on whether this is indeed the case. But if it is, then a second limit to automation is that there could be demand for certain complex behaviors or interactions the performance of which necessarily generates sentience; wherefore, if we do not count sentient processes as automatic, the jobs requiring these performances could not be fully automated. Everything I’ve said here of sentience could be said, pari passu, of moral status. This is relevant if sentience is not a necessary condition for moral status. For example, if some non-sentient forms of agency are sufficient for moral status, there might be jobs (e.g. executive positions that require flexible goal-seeking in complex environments, but perhaps many other roles too) that could only be performed by systems that have moral status. And if delegating tasks to systems that have moral status doesn’t count as automation, then again we have here a limit to the possibility of automation.
I mean, when you are recalling something from long-term memory, might we not regard that, in some sense, as a kind of “internal perceptual input”—except that your sensory organ in this case is not looking outward at the surrounding visual environment, but inward at an internal neuronal environment? But if in one case you are looking something up in a notebook using your eyes, and in the other case you are looking something up in your long-term memory bank, is that really at a deep level so different? I mean, especially if the operation takes place outside consciousness? So whereas the extended mind thesis says that some extracranial elements of the world should be regarded as parts of our minds, maybe from an axiological perspective we should also go in the other direction and say that many parts of our minds are not really part of “us”? And then the question is, the part that is us: how big and complex is that part, really?
Familial bonds are among the closest and most selfless that most of us humans are capable of. However, this investment function, too, might be undercut at technological maturity, inasmuch as there would be an easier path to achieving an equivalent outcome. Namely, we might create artificial persons (fully articulated conscious humanlike beings with moral status) who stand in the same type of relationship to us: who understand us, trust us, and resemble aspects of us in the way that our children do. This would be much faster and cheaper than bringing up a human child in the traditional way. What is more, artificial persons could be designed to have a greater capacity for love and gratitude and close connection than is generally vouchsafed to our own fallen kind.
(b) If you want to have an experience that involves making an effort, you would in effect need to actually make an effort yourself. Modulo the limitations expressed under (a), the scientists could cause you to make such efforts; but it is worth noting that this kind of experience would not come “for free”. Suppose you want to have the experience of climbing Mount Everest. You could easily have the experience of seeing a series of views that would be seen on the ascent; and, if you look down, you could see your legs moving. You could also feel the pressure on your shoulders from the backpack, and the chilly air biting into your cheeks. But without the sense of having to strain, of having to dig deep inside yourself to find the wherewithal to continue, your experience would be but a shadow of the experiences of those who have surmounted Everest in real life. If, however, the scientists do induce these elements, then you are paying a hefty price for the experience—a price of discomfort, fear, and willpower expenditure. The experience machine might not give you that much of a benefit compared to actually going to Nepal and climbing the real mountain, although it would protect you from the risk of physical injury.

(c) Among our most important experiences are ones that involve interactions with other human beings. How could such experiences be implemented? Consider the following routes.

i. NPCs. In order to generate the sensory input that you receive when you are interacting with others, those other people could be implemented as NPCs (“nonplayer characters”), by which I mean constructs that display some of the attributes of an intelligent being yet without thereby instantiating any phenomenal experience or other bases that would endow that being with moral status. This is undoubtedly possible in the case of relatively shallow interactions. For example, if you want to have the experience of asking a stranger a few questions along the lines of “what is two plus two?” and getting back an answer like “four”, it would be metaphysically possible to implement the requisite computations without creating any morally considerable being (other than yourself). But it is less clear that it would be metaphysically possible to generate the fully realistic experience of having long, deep, and rich interactions with another human being without running a computation that effectively implements a complex digital mind that has moral status. Which brings us to the second route toward generating interaction experiences…

ii. VPCs. By VPCs (“virtual player characters”) I mean artificial computational constructs that do have moral status, for instance because they possess conscious digital minds.
iv. Recordings. The notion that you enter the experience machine by having neuropsychologists stimulate your brain with electrodes is a bit quaint. A more plausible and more efficient method would be to first upload yourself into a computer and then to interact with a virtual reality. This presents us with at least one special case in which you could have fully realistic deep interaction experiences without instantiating any morally significant entity (other than yourself), namely by replaying recordings of the outputs of other people. To do this, you would first do one run in which you interact with VPCs or PCs (which can themselves be uploads or biological). The superduper scientists record the interaction history between you and these other people. When you have finished doing whatever you wanted to be doing, we reset your mind and the environment to their initial states. You now have the ability to enjoy the same experience again, but this time without instantiating any real persons. This would be done by rerunning the program that implements your mind, initialized to the exact starting configuration of the first run; but instead of re-running the computations that correspond to your interaction partners’ minds (or to the physical environment), we simply fetch the relevant information from memory. As you are having the experience this second time, you can make whatever choices you want: but since we already know what choices you will want to make, and since we have a recording of how other people and the environment reacted to these choices, we don’t need to recompute those parts and can instead use stored data to determine the input that your senses receive. (The operations of your own mind do need to be recomputed, of course, because—we believe—this is what actually generates your experience.)
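To make that record-and-replay scheme concrete, here is a minimal, self-contained Python sketch. Everything in it (the toy deterministic “mind”, the stand-in world_response function, the specific numbers) is a hypothetical illustration of the mechanism, not anything specified in the book.

```python
# Hypothetical sketch of the record-and-replay idea: on the first run the "world"
# (other minds, environment) is actually computed; on the replay, only "your" mind
# is recomputed and its sensory inputs are fetched from the recording.

def make_mind(seed):
    """A deterministic toy 'mind': its choices depend only on its accumulated memory."""
    state = {"memory": [seed]}

    def choose_action():
        # Pure function of memory, so a reset mind re-run from the same seed
        # makes exactly the same choices.
        return sum(state["memory"]) % 7

    def receive(sensory_input):
        state["memory"].append(sensory_input)

    return choose_action, receive, state


def world_response(action, step):
    """Stands in for the other minds and the environment, which are only
    computed (and hence instantiated) on the first run."""
    return (action * 31 + step) % 101


def first_run(steps, seed=1):
    choose, receive, _ = make_mind(seed)
    recording = []
    for step in range(steps):
        action = choose()
        sensory_input = world_response(action, step)   # others computed live
        recording.append(sensory_input)                # ...and recorded
        receive(sensory_input)
    return recording


def replay_run(recording, seed=1):
    """Second run: rerun only your mind from its initial state; feed it stored inputs."""
    choose, receive, state = make_mind(seed)
    for sensory_input in recording:
        choose()                    # the same choices get made again...
        receive(sensory_input)      # ...so the stored inputs remain the right ones
    return state["memory"]


if __name__ == "__main__":
    rec = first_run(steps=5)
    print(replay_run(rec))  # same experience stream, without recomputing the others
```

The determinism of the toy mind is what licenses skipping the recomputation of everyone else: because the reset mind makes the same choices, the stored responses stay valid, which mirrors the passage’s point that only your own mind needs to be recomputed to generate your experience.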
I don’t think the human intellect is powerful enough to bring an imaginary sentient mind into being simply by thinking about it. For a superintelligence, it’s a different matter. It could internally simulate sentient minds. But this is a very different proposition than that literary characters in a novel are conscious, or that we could be such characters.
How exactly do we know that there is not enough information content and computation taking place, when a human is reading a novel, for the characters described to come alive sufficiently that we can’t be sure we aren’t such characters? We are clearly not explicitly representing a mind composed of one hundred trillion synapses when we are reading a novel; but it doesn’t seem obvious that an explicit representation with that level of granularity would be necessary to produce the subjective experiences in question.
Tessius: Maybe one way to get at this: Suppose you are reading about an imaginary character who is piloting an airplane in World War II. But you the reader have never piloted an airplane. There is no way that your brain would be capable of implementing the computations that would be required to successfully pilot a World War II-era fighter plane. So how then could the requisite computations be implemented that would accurately generate the experience of doing that?

Firafix: Nay, I guess that would not be possible… But what if we are considering a case where the reader is on the same level as the fictional character, and where she has the same skills and so on.

Tessius: Are you averring that there exist readers that are “at the same level” as us? I, for one, declare myself offended and aggrieved.

Firafix: I was primarily wondering whether I might be a fictional character! But if we are fictional characters in a fictional world, maybe in the world where the reading is being done there are some pretty clever readers?

Kelvin: If the readers were superintelligent, and if their “reading” essentially consisted of running detailed internal simulations of the neural networks of the characters described in their novels, then yes. But we are not talking about the simulation hypothesis here. We are instead discussing the crazy proposition that we might be characters in a story written and read by ordinary humanlike beings, right?

Firafix: Well, maybe there could be readers who are just a bit cleverer than regular humans but not superintelligent? But perhaps it’s better to focus on the case where we have a fictional character who is of sufficiently limited abilities that there are at least some human readers who would be able to do everything that this fictional character can do. If my abilities are thus limited, how could I then know that I am not a fictional character? Or, I suppose, an instance of such a fictional character being read about and imagined by some particular reader? As opposed to a real flesh and blood creature?
Tessius: But what about your earlier point about information content, then? Unless I’m terribly mistaken, I can recall a great many specific details about my past, more than any novelist would care to write about or reader imagine. Actually, this is a different version of the information content argument that you were initially proposing. You seemed to suggest that the fact that the human brain contains 10^14 synapses might be enough to establish that fictional characters are not sentient, since a book does not contain enough information to specify what all of these synapses in the fictional character’s brain are doing. But now the idea is that the reader’s brain is doing most of this work. The book contains some nudges and pointers, but the reader’s brain is filling in the great bulk of the requisite information—namely, by the reader using their own concepts, intuitions, and imagination to render the fictional
By the way, this might be an aside, but I’m a bit puzzled by the reports we hear of split-brain patients, whose hemispheres, after most of their connection via the corpus callosum has been severed, appear to be able to operate pretty independently and perhaps with person-level proficiency. Could we really be walking around with enough neural matter to implement two normal persons, yet ordinarily only actually be implementing one? It seems wasteful.
Tessius: In any case, I’m not aware of any particular reason to suppose that even if our brains do have this kind of quasi-redundancy, the “spare” capacity for additional conscious experience would actually be coming into play while we are reading… Also, we’re not very good at multitasking. If our brains were sometimes simultaneously implementing the conscious experiences of two persons, using separate pieces of neural machinery, then should we not be able to make use of this duplicated circuitry to, let us say, work out the proof of an algebraic theorem while at the same time making complex scheduling arrangements for a family reunion? For example, you could be modeling an imaginary character who was working on proving the theorem in one part of your brain (or in one subset of your cortical microcircuits), while in another part (or another subset of your microcircuitry) you would be doing the complicated events planning. But I for one would find that utterly impossible.
Kelvin: Let’s suppose there is a fictional character and a nonfictional one, and that both have their own separate conscious minds. Maybe you are not sure which one you are. Now you could argue, in this case, that you should act mostly as if you were the nonfictional one. The fictional character would tend not to live very long and their choices would have less opportunity to have long-term consequences. Note, it is not their lifespan or their impact as described in the novel that matters here. A novel might say that a fictional character saved the world and lived happily for a million years thereafter. But this does not mean that any real world was saved or that there was actually some fictional character that had a million years of real phenomenal experience. Even under the premiss that reading about a fictional character can bring that character’s experience into reality, this would apply only to those of the character’s experiences that the reader’s brain actually models in sufficient detail. So the maximum amount of subjective experience that a fictional character could have is the amount of experience one can have during ten hours, or however long it takes to read a book.

Tessius: What if the book is read by many people? A bestseller might be read a million times. Ten hours times a million would be longer than an ordinary human lifespan.

Kelvin: Yeah.

Tessius: So maybe we should act as if we are characters in a bestseller? Or maybe we should even act so as to make it more likely that the book we’re in becomes a bestseller?

Kelvin: Yeah.

Tessius: The narratological imperative? I think we have just proved that the best thing for you to do would be to moon those ladies over there at the bus stop, Kelvin! It might sell another thousand copies… resulting in, what, ten hours times a thousand: ten thousand hours—that’s more than a year, Kelvin. Maybe divided by the three of us. Still, four months of Kelvin-life—worth it!
I’m harping on this, not because we know with certainty that we do live in a Big World, but because (a) it is quite likely that we do and (b) the implications are so striking. (But it is also possible that our basic way of conceptualizing possibilities apparently involving physical infinites is in some deep way flawed.)
Leisure & Boredom
“The only problem I have is that I have no problems—life, you know, is just too perfect, and it really bugs me!”
“There is no country and no people, I think, who can look forward to the age of leisure and of abundance without a dread. For we have been trained too long to strive and not to enjoy. It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society. To judge from the behaviour and the achievements of the wealthy classes today in any quarter of the world, the outlook is very depressing! For these are, so to speak, our advance guard—those who are spying out the promised land for the rest of us and pitching their camp there.”
“The Skidelskys have an exalted conception of leisure. They say that the true sense of the word is ‘activity without extrinsic end’: ‘The sculptor engrossed in cutting marble, the teacher intent on imparting a difficult idea, the musician struggling with a score, a scientist exploring the mysteries of space and time—such people have no other aim than to do what they are doing well.’ That isn’t true. Most of these people are ambitious achievers who seek recognition. And it is ridiculous to think that if people worked just 15 or 20 hours a week, they would use their leisure to cut marble or struggle with a musical score. If they lacked consumer products and services to fill up their time they would brawl, steal, overeat, drink and sleep late.”
“Millions long for immortality who don’t know what to do with themselves on a rainy Sunday afternoon.”
WHAT TO DO WHEN THERE’S NOTHING TO DO

Building sand castles, going to the gym, reading in bed, taking a walk with your spouse or a friend, doing some gardening, participating in folk dance, resting in the sun, practicing an instrument, playing a game of bridge, climbing a rock wall, playing beach volleyball, golfing, bird watching, watching a TV series, cooking dinner for friends, going out on the town partying, redecorating the house, building a treehouse with children, knitting, painting a landscape, learning mathematics, traveling, participating in historical reenactments, writing a diary, gossiping about acquaintances, looking at famous people, windsurfing, taking a bath, praying, playing computer games, visiting the grave of an ancestor, taking a dog for a walk, sipping a cup of tea, running a marathon, engaging in witty banter, watching a football match, shopping, going to a concert, protesting an injustice, having a picnic, going on a camping trip, eating ice cream, organizing a murder mystery game, playing with LEGOs, wine tasting, having a massage, learning about history, doing a silent retreat, taking drugs, getting your nails done, attending a religious ritual, keeping up with current events, interacting on social media, exploring virtual reality environments, kayaking, learning to fly a sports plane, gambling, pouring a martini, celebrating a holiday, researching your family tree, participating in a neighborhood clean-up, singing in a choir, meditating, carving pumpkins, swimming, solving a crossword puzzle, visiting friends, making love, driving in a demolition derby, biohacking yourself to optimize physical and mental performance, attending an amateur astronomy meeting, creating a time capsule, teaching a young person something you know, watching a sunset, going to a costume party, arguing about moral philosophy, judging a koi fish competition (“living jewels”), collecting antiques, attending a lecture… The list goes on.
Boredom is actually an important topic, and we shall discuss it in more depth tomorrow. For now, I’ll just say that it seems quite possible that, with appropriate changes in education and culture, we would feel less bored in a post-work world than we do today. Aside from presenting the opportunity to adapt education and culture to foster fulfilling leisure, the greater levels of wealth and better technology would also make it easier to build institutions and infrastructure that support a wide range of enjoyable and fulfilling activities. But what if universal automation does lead to some increase in boredom? My guess is that it would still be good overall, considering the many people around the world who currently live in such abject poverty that being catapulted into great wealth would have to be regarded as a big improvement even if it resulted in a life of some tedium and frivolous dissipation. Brawling, stealing, overeating, drinking, and sleeping late may not make for the best life, but even that could be a lot better than one of deprivation or incessant grind under the thumb of some mean and vexatious taskmaster.
The solution to shallow redundancy is to develop a leisure culture. Leisure culture would raise and educate people to thrive in unemployment. It would encourage rewarding interests and hobbies, and promote spirituality and the appreciation of the arts, literature, sports, nature, games, food, conversation, and other domains that can serve as playgrounds for our souls, letting us express our creativity, learn about each other, about ourselves, and about the environment, while enjoying ourselves and developing our virtues and potentialities. A leisure culture would base self-worth and prestige on factors other than economic contribution, and individuals would construct their social identities around roles other than that of breadwinner (although there might be game-like environments that allow those who previously excelled in financial performance to display and gain recognition for their resourcefulness).
On the one hand, we surely have reasons to pursue the development of technological capabilities that enable us to get more of what we want with less effort. That’s almost part of the definition of rationality: that one seeks efficient means to one’s ends. Certainly, our society is pouring great effort into technological and economic progress, and we give awards to individuals who make it happen. And yet, on the other hand, if and when our efforts to increase the efficiency with which we can achieve our aims are fully successful, we will supposedly enter a condition in which either we are terminally bored or we become passive recipients of narcotized contentment. Neither alternative sounds appealing. So it looks like we have reason to work to achieve a condition X and that it would be very bad if we achieved X. In other words, the conclusion would seem to be that we ought to devote massive resources toward achieving something while at the same time desperately hoping that we will fail. Not quite a logical contradiction, but it would certainly be an odd predicament to be in.
Boredom in this sense is definitely avoidable at technological maturity. Pleasure, fascination, joyful absorption, and other boredom-excluding psychological states, are (trivially) among the things that a thriving technologically mature civilization could generate. This is a direct implication of autopotency. Indeed, boredom-excluding mental states could be generated in prodigious quantity and degree, by neurotechnological means (such as genetic engineering, brain stimulation, pharmacological substances, or nanomedicine) or by appropriately designing or modifying digital minds. Far from being an inevitable consequence of technological perfection, then, boredom as subjective experience could be completely abolished at technological maturity. Now consider boringness as an objective attribution. We might say that a book or a party is boring, and mean thereby not that anybody necessarily happens to feel bored, but that the object in question has various attributes whose presence is summed up and expressed by the label “BORING”. While it is difficult to give a precise characterization of this boringness property, we may take it to involve a deficit of features such as novelty, relevance, significance, and worthwhile challenge. Whether and to what extent a technologically mature civilization can avoid having this boringness property is a more difficult and subtle question than whether it can avoid containing subjective feelings of boredom.
You might think that if the utopians extirpated their ability to feel bored, then they would be perfectly content with the simplest and most monotonous preoccupations, such as watching paint dry; and that they would then not bother to do anything more interesting with the future than occasionally repainting a wall so they could watch it dry; and that the future would then consist of a group of people staring at recently repainted walls. This future, while clear of boredom, would be full of boringness. Such a future would seem quite a letdown compared to alternative possibilities that we might imagine.
Squinting a little, one might view today’s streaming services and recommender systems as (very primitive forms of somewhat misaligned) boredom prostheses. In the ideal case, they keep us consuming a personalized content stream indefinitely—with suitable intermezzos in which we buy all the stuff that is pushed to us in the ads. The mechanism selects new content to preempt boredom, ensuring that we stay “engaged”. The problem is that while these commercial systems may be somewhat effective at averting subjective boredom, they are generally not designed to avoid objective boringness.
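As a loose illustration of that distinction, here is a toy example (my own construction, not anything from the book, and not any real platform’s algorithm): one scorer ranks candidate items purely on predicted engagement, while a second also rewards novelty relative to what the user has already consumed.

```python
# Toy contrast between an engagement-only recommender objective and one that also
# penalizes "objective boringness", here crudely proxied by topic overlap with
# everything the user has already watched. All names and numbers are made up.

def engagement_score(item, user_history):
    """Engagement-only objective: just predict how long the user will keep watching."""
    # user_history is ignored here on purpose: familiar content often scores highest.
    return item["predicted_watch_minutes"]

def novelty_aware_score(item, user_history, novelty_weight=0.5):
    """Also reward genuine novelty relative to what the user has already seen."""
    seen_topics = {topic for past in user_history for topic in past["topics"]}
    overlap = len(seen_topics & set(item["topics"])) / max(len(item["topics"]), 1)
    novelty = 1.0 - overlap
    return item["predicted_watch_minutes"] + novelty_weight * 60 * novelty

history = [{"topics": ["cats", "fails"]}, {"topics": ["cats"]}]
candidates = [
    {"name": "yet another cat compilation", "topics": ["cats"], "predicted_watch_minutes": 30},
    {"name": "intro to astrophysics", "topics": ["space", "physics"], "predicted_watch_minutes": 12},
]

print(max(candidates, key=lambda c: engagement_score(c, history))["name"])     # cat compilation
print(max(candidates, key=lambda c: novelty_aware_score(c, history))["name"])  # astrophysics
```

The first scorer will happily serve a near-identical stream forever so long as it holds attention, which is roughly the failure mode described above; the second at least gestures at the boringness dimension, though topic overlap is obviously a crude stand-in for novelty, relevance, significance, and worthwhile challenge.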
That said, I will admit there is still a concern that, as we consider longer and longer intervals, there may come a time when all the activities available to somebody become objectively uninteresting, because all novelty and interestingness has been used up.
Drinking tea (or coffee, if you prefer) may not be a source of an intense flash of value, the way that an epiphany into some deep truth about human nature may be if discovery of such truths has interestingness-value. But it is quite renewable. The 162,330th cup of tea, on your 200th birthday, may not be less valuable than the one you had a century earlier. And whereas the supply of human-accessible profound truths might be limited, you can always put another kettle on.
Ethics & Moral Philosophy
No matter how affluent everyone is—indeed especially if everyone is very wealthy—you could, in principle, create additional happiness by bringing additional happy people into existence. There certainly are folk who think that would be a good thing, such as total utilitarians, and who could thus remain motivated.
We often think of economic inequality as bad. In a Malthusian context, however, it appears to have a silver lining. Given unrestricted population growth, inequality is the only way that at least some fraction of the population can enjoy consistently above-subsistence-level incomes. If one holds that it is intrinsically important that there exist at least a few people who enjoy the finer things in life, then such an unequal arrangement might be deemed better than one in which there exists a slightly larger number of people but where everybody has a “muzak and potatoes” life (to borrow a phrase from Derek Parfit). Historically, there have also been instrumental benefits to having some rich folk around who could patronize the arts and sciences and create pockets of privilege, sufficiently isolated from the immediate struggle for survival, so that new things could be invented and tried out.
The idea is that there could be outcomes that are feasible in every other way, and are highly desirable, yet which are impossible for us to achieve morally. This is easiest to see if we consider an ethical system that includes deontological principles. For example, some people might hold (incorrectly, in my view) that there is an absolute moral prohibition against using genetic engineering to enhance human capacities. Let us suppose that a similar prohibition would apply to any other technology whereby comparable outcomes could be achieved (perhaps on grounds that they would all involve “playing God”). Then it could be the case that even though the outcome where humans or posthumans enjoy happy lives with enhanced capabilities would be preferable to the present world—and perhaps to any alternative future—yet no morally permissible path to this superior outcome lies open to us.
Think of how much more challenging the work of an author would be if the characters in her novels, simply by being imagined by the author or the reader, were thereby themselves actually coming to experience phenomenal states. That could make it morally impermissible to write tragedies and tales of woe.
Philosophers have developed various accounts of what endows a being with moral status. In some of these, consciousness (or the capacity for consciousness) is not a necessary condition for having moral status. While having a capacity for suffering is generally acknowledged to be a sufficient condition for having at least some form of moral status, there might be alternative attributes that could ground moral status—such as having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes. If moral status can be based on any of those traits, then there would be an additional class of beings who could not be brought into existence without thereby also bringing into effect moral responsibilities which may constrain how these beings may be used or treated.
POTENTIAL IMPEDIMENTS TO AUTOMATION

Sentience and moral patiency
Regulation
Status symbolism
Solidarity
Religion, custom, sentimentality, and peculiar interests
Trust and data
One might think that as our challenges get smaller and more parochial and become less a matter of life and death, their ability to generate passion and engagement would decline. But this is not clearly the case. More people jump out of their seats when their soccer team scores a goal than when an international agency publishes a report saying that a hundred thousand fewer children died from preventable diseases this year than last. (We take this to be completely normal, but I wonder, if we could see ourselves through the eyes of angels, whether we would not recognize in this pattern of excitement and indifference something quite perverse—the warped sentiments of a moral degenerate? Is it not, implicitly, a sort of emotional middle finger to the suffering and desperation of other sentient beings?)
So now, every hour of quality time you spend with your child is an hour of even higher quality time it is deprived of spending with the robot. Spending the hour with the artificial caregiver would, we may assume, be more fun for the child as well as more educational and more nurturing of their emotional and social needs. You could choose to play with your child yourself; but in doing so you would be selfishly prioritizing your own enjoyment at the expense of the child’s welfare and development. Although this might give you some fun, it would hardly fill your life with purpose.
The ground for such a position would be similar to the ground for why one might think, in general, that it would be undesirable or at least suboptimal to spend the rest of one’s life in Nozick’s experience machine (which we’ll get back to shortly). This thought experiment has been taken to show that our well-being has an objective component—that how well our lives go for us is not determined solely by our mental states, by what we think and feel, but also by our relationship to external reality. On this view, it matters whether our beliefs are true and our projects successful, independently of whether we ever find out. Along the same lines, it might matter whether we really remain in contact with somebody to whom we have bonded. Interacting with a simulacrum of this person would, ceteris paribus, be less good, even if we never notice the difference. One might, for example, have the intuition that it is bad for a husband to be cuckolded even if he never discovers the betrayal and even if his wife does not change her behavior toward him. And—if one holds this view—one might likewise think it could be bad for a child if, one night while they were sleeping, their parents were swapped out for an indistinguishable set of robot impostors.
Nozick writes: “Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life’s experiences?” Nozick argues that if we reject the offer to plug in, it shows that we value things other than (or in addition to) subjective experience.
This sublimation of NPCs into VPCs would require a big burst of computation—perhaps entire childhoods and personal histories would need to be simulated to generate the fully realistic VPCs that you are about to engage in conversation. Although VPCs would make it technically feasible to generate a very wide range of interaction experiences, the use of VPCs would introduce moral complications. It might be impossible for you to have certain experiences in the experience machine without violating ethical constraints, just as it would be in external reality.
A “multiplayer” version of the experience machine has been proposed (including by Nozick himself), in which many people together plug into the experience machine. This would enable us to have real interactions with other real people, including particular existing people who are important to us—thus obviating one common ground for refusing to enter the experience machine. However, in this setup you no longer have complete control over the experiences you have, since that will now depend on the independent choices of other people. This scenario thus violates a key premiss of the original thought experiment.
The question might be whether, if our dreams became a lot more detailed, realistic, and coherent, the other people we dream of might not then actually enter existence sufficiently to become moral patients. It might then be morally problematic to dream or fantasize sufficiently realistically about other people without their prior consent (and without satisfying various other ethical constraints).
Firafix: How would we—I mean—if we regarded fictional characters as having some moral status, what should we do about that?

Tessius: I haven’t thought it through. Maybe other things equal, we ought to be writing more comedies and fewer tragedies. More happy endings. I kind of like the fact that many stories end with “and they lived happily ever after”. But maybe the monsters too should live happily ever after.

Firafix: It would work for me. I usually prefer to read happy stories anyway. But I might have an uncommon taste.

Kelvin: There is some value in understanding bad things, so that we can more effectively work to counter them. But yes, on balance there should probably be more of a tilt toward the positive. There could be other reasons for that as well.
Even without conflict or malevolence, increases in power are not axiomatically beneficial. It is possible to use power imprudently. I think if we want to specify a bundle of civilizational properties that is close to axiomatically beneficial, it would have to include at least three attributes: not just power over nature, but also cooperation with our fellow beings, and also wisdom. And even then it is not axiomatic. With great wisdom and cooperation, technological progress could still turn out to be harmful if we have bad luck. We may wisely take a risk that is ex ante worth taking; only to discover, ex post, that it was a mistake.
If you’ve read Nozick’s reflections on his experience machine, you may recall that he wrote: “we want our emotions, or certain important ones, to be based upon facts that hold and to be fitting. . . . What we want and value is an actual connection with reality.”
Philosophers sometimes refer to those things that are (or, on more objectivist metaethical accounts, ought to be) valued for their own sakes as “final values”.
Psychological and cultural facts about what we value in this way—and, on some metaethical views, also facts about what is valuable in this way—may change over time. In this sense, final values come and go.
How good is this life for the person whose life it is? How much good does this life (directly, by its own existence, as opposed to via its wider causal effects) contribute to the world? The answers to these questions can come apart. For example, according to average utilitarianism, a life could be good for the person yet bad for the world. This would happen if the well-being of that life is high but not as high as the average level of well-being in the world. More generally, unless the value of the world is a simple sum of the values of the individual lives it contains, we should not expect the answers to the two questions to coincide.
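As a toy numerical illustration of that first case (my own example, not the book’s): take average utilitarianism, on which the value of a world is the mean well-being of its inhabitants; then a new life that is well worth living from the inside can still lower the value of the world.

```python
# Hypothetical numbers illustrating how, under average utilitarianism, a life that is
# good for the person can nonetheless be bad for the world.

def world_value(wellbeing_levels):
    """Average utilitarianism: world value = mean well-being of its inhabitants."""
    return sum(wellbeing_levels) / len(wellbeing_levels)

existing_population = [10, 10, 10, 10]   # average well-being 10
new_life = 6                             # clearly positive for the person living it

print(world_value(existing_population))               # 10.0
print(world_value(existing_population + [new_life]))  # 9.2 -> the addition lowers world value
```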
Author
Mauro Sicard
CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.