The Great Mental Models explores powerful thinking tools for making better life decisions.
The following are the key points I highlighted in this book.
The quality of your thinking depends on the models that are in your head.
Largely subconscious, mental models operate below the surface. We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.
Using the lenses of our mental models helps us illuminate these interconnections. The more lenses used on a given problem, the more of reality reveals itself. The more of reality we see, the more we understand. The more we understand, the more we know what to do.
Simple and well-defined problems won’t need many lenses, as the variables that matter are known. So too are the interactions between them. In such cases we generally know what to do to get the intended results with the fewest side effects possible. When problems are more complicated, however, the value of having a brain full of lenses becomes readily apparent. That’s not to say all lenses (or models) apply to all problems. They don’t. And it’s not to say that having more lenses (or models) will be an advantage in all problems. It won’t. This is why learning and applying the Great Mental Models is a process that takes some work. But the truth is, most problems are multidimensional, and thus having more lenses often offers significant help with the problems we are facing.
The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.
What you need is to understand the principles, so that when the details change you are still able to identify what is really going on.
Rather than update our views, we double down on our effort, deepening our frustration and anxiety. It’s only weeks or months later, when we’re spending massive amounts of time fixing our mistakes, that their burden really starts to weigh on us. Then we wonder why we have no time for family and friends and why we’re so consumed by things outside of our control.
Better models mean better thinking.
Understanding not only helps us decide which actions to take but helps us remove or avoid actions that have a big downside that we would otherwise not be aware of. Not only do we understand the immediate problem with more accuracy, but we can begin to see the second-, third-, and higher-order consequences. This understanding helps us eliminate avoidable errors. Sometimes making good decisions boils down to avoiding bad ones.
You’ve likely experienced this first hand. An engineer will often think in terms of systems by default. A psychologist will think in terms of incentives. A business person might think in terms of opportunity cost and risk-reward. Through their disciplines, each of these people sees part of the situation, the part of the world that makes sense to them. None of them, however, see the entire situation unless they are thinking in a multidisciplinary way.
As more and more people know what model you’re using to manipulate them, they may decide not to respond to your incentives.
Human beings are not simple automatons: A more complete model would home in on other motivations they might have besides financial ones.
When ego and not competence drives what we undertake, we have blind spots. If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking you know where you are vulnerable and where you can improve. Understanding your circle of competence improves decision-making and outcomes.
How do you know when you have a circle of competence? Within our circles of competence, we know exactly what we don’t know. We are able to make decisions quickly and relatively accurately. We possess detailed knowledge of additional information we might need to make a decision with full understanding, or even what information is unobtainable. We know what is knowable and what is unknowable and can distinguish between the two. We can anticipate and respond to objections because we’ve heard them before and already put in the work of gaining the knowledge to counter them. We also have a lot of options when we confront problems in our circles. Our deep fluency in subjects we are dealing with means we can draw on different information resources and understand what can be adjusted and what is invariant.
We know his incentive in this situation; it’s to get us to spend as much as possible while still retaining us as a customer.
I couldn’t have given her $200 million worth of Berkshire Hathaway stock when I bought the business because she doesn’t understand stock. She understands cash. She understands furniture. She understands real estate. She doesn’t understand stocks, so she doesn’t have anything to do with them. If you deal with Mrs. B in what I would call her circle of competence…. She is going to buy 5,000 end tables this afternoon (if the price is right). She is going to buy 20 different carpets in odd lots, and everything else like that [snaps fingers] because she understands carpets. She wouldn’t buy 100 shares of General Motors if it was at 50 cents a share.
First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remain are the essentials.
Running through the scenario 100,000 times, how many times do you go broke and how many times do you triple your dough?
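That kind of tally can be sketched as a tiny Monte Carlo simulation. The all-or-nothing bet below and its 40% win probability are purely illustrative assumptions, not figures from the book:

```python
import random

def simulate(trials=100_000, p_win=0.4, seed=42):
    """Run a hypothetical all-or-nothing bet `trials` times: with
    probability p_win you triple your dough, otherwise you go broke."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    tripled = sum(1 for _ in range(trials) if rng.random() < p_win)
    return tripled, trials - tripled

tripled, broke = simulate()
print(f"tripled your dough: {tripled:,} times, went broke: {broke:,} times")
```

Counting outcomes over many simulated runs turns a vague "how risky is this?" into a frequency you can weigh against the payoff.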
This gives you some real decision-making power: It tells you about the limits of what you know and the limits of what you should attempt. It tells you, in an imprecise but useful way, a lot about how smart or stupid your decisions were regardless of the actual outcome.
Thought experiments tell you about the limits of what you know and the limits of what you should attempt. In order to improve our decision-making and increase our chances of success, we must be willing to probe all of the possibilities we can think of. Thought experiments are not daydreams. They require both rigor and work. But the more you use them, the more you understand actual cause and effect, and the more knowledge you have of what can really be accomplished.
In mathematics they call these sets. The set of conditions necessary to become successful is a part of the set that is sufficient to become successful. But the sufficient set itself is far larger than the necessary set. Without that distinction, it’s too easy for us to be misled by the wrong stories.
Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe but it’s also a way to ensure you get the same results that everyone else gets. Second-order thinking is thinking farther ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second- and third-order effects can unleash disaster.
It is often easier to find examples of when second-order thinking didn’t happen—when people did not consider the effects of the effects. When they tried to do something good, or even just benign, and instead brought calamity, we can safely assume the negative outcomes weren’t factored into the original thinking. Very often, the second level of effects is not considered until it’s too late. This concept is often referred to as the “Law of Unintended Consequences” for this very reason.
Warren Buffett once captured the second-order problem with an apt metaphor: a crowd at a parade. Once a few people decide to stand on their tip-toes, everyone has to stand on their tip-toes. No one can see any better, but they’re all worse off.
Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass.
Another common asymmetry is people’s ability to estimate the effect of traffic on travel time. How often do you leave “on time” and arrive 20% early? Almost never? How often do you leave “on time” and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making. Far more probability estimates are wrong on the “over-optimistic” side than the “under-optimistic” side. You’ll rarely read about an investor who aimed for 25% annual return rates who subsequently earned 40% over a long period of time. You can throw a dart at the Wall Street Journal and hit the names of lots of investors who aim for 25% per annum with each investment and end up closer to 10%.
Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts.
If all else is equal, that is if two competing models both have equal explanatory power, it’s more likely that the simple solution suffices.
Learning & Knowledge
We are passive, thinking these things just happened to us and not that we did something to cause them. This passivity means that we rarely reflect on our decisions and the outcomes. Without reflection we cannot learn.
A quick glance at the list of Nobel Prize winners shows that many of them, obviously extreme specialists in something, had multidisciplinary interests that supported their achievements.
Newtonian physics is still a very useful model. One can use it very reliably to predict the movement of objects large and small, with some limitations as pointed out by Einstein. And, on the flip side, Einstein’s physics are still not totally complete: With every year that goes by, physicists become increasingly frustrated with their inability to tie it into small-scale quantum physics. Another map may yet come.
That’s how good maps are built: feedback loops created by explorers.
As your competitors gain knowledge of the model, they respond in kind by adopting the model themselves, thus flattening the field.
For most of us, climbing to the summit of Mount Everest is outside our circles of competence. Not only do we have no real idea how to do it, but—even more scary—should we attempt it, we don’t even know what we don’t know.
You can learn from your own experiences. Or you can learn from the experience of others, through books, articles, and conversations. Learning everything on your own is costly and slow. You are one person. Learning from the experiences of others is much more productive. You need to always approach your circle with curiosity, seeking out information that can help you expand and strengthen it.
If you know the first principles of something, you can build the rest of your knowledge around them to produce something new.
Essentially, they were looking for the foundational knowledge that would not change and that we could build everything else on, from our ethical systems to our social structures.
The second thing we can do is to learn how to fail properly. Failing properly has two major components. First, never take a risk that will do you in completely. (Never get taken out of the game completely.) Second, develop the personal resilience to learn from your failures and start again. With these two rules, you can only fail temporarily.
Critical Thinking & Analysis
However, not every model is as reliable as gravity, and all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope to be of much use. Others are unreliable because they haven’t been tested and challenged, and yet others are just plain wrong. In every situation, we need to figure out which models are reliable and useful. We must also discard or update the unreliable ones, because unreliable or flawed models come with a cost.
“Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.” —Peter Kaufman
Too often that lens is driven by our particular field, be it economics, engineering, physics, mathematics, biology, chemistry, or something else entirely. Each of these disciplines holds some truth and yet none of them contain the whole truth.
Reality is messy and complicated, so our tendency to simplify it is understandable. However, if the aim becomes simplification rather than understanding we start to make bad decisions.
Some of the biggest map/territory problems are the risks of the territory that are not shown on the map. When we’re following the map without looking around, we trip right over them. Any user of a map or model must realize that we do not understand a model, map, or reduction unless we understand and respect its limitations. If we don’t understand what the map does and doesn’t tell us, it can be useless or even dangerous.
Models, then, are most useful when we consider them in the context they were created. What was the cartographer trying to achieve? How does this influence what is depicted in the map?
Jacobs’ book is, in part, a cautionary tale of what can happen when faith in the model influences the decisions we make in the territory. When we try to fit complexity into the simplification.
Maps have long been a part of human society. They are valuable tools to pass on knowledge. Still, in using maps, abstractions, and models, we must always be wise to their limitations. They are, by definition, reductions of something far more complex. There is always at least an element of subjectivity, and we need to remember that they are created at particular moments in time.
The idea here is that if you can’t prove something wrong, you can’t really prove it right either.
In a true science, as opposed to a pseudo-science, the following statement can be easily made: “If x happens, it would show demonstrably that theory y is not true.” We can then design an experiment, a physical one or sometimes a thought experiment, to figure out if x actually does happen.
For example, if we are considering how to improve the energy efficiency of a refrigerator, then the laws of thermodynamics can be taken as first principles. However, a theoretical chemist or physicist might want to explore entropy, and thus further break the second law into its underlying principles and the assumptions that were made because of them.
Socratic questioning can be used to establish first principles through stringent analysis. This is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance.
Socratic questioning generally follows this process:
1. Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
2. Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
3. Looking for evidence. (How can I back this up? What are the sources?)
4. Considering alternative perspectives. (What might others think? How do I know I am correct?)
5. Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
6. Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)
The Five Whys is a method rooted in the behavior of children. Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to dread, but which is exceptionally useful for identifying first principles: repeatedly asking “why?”
The goal of the Five Whys is to land on a “what” or “how”. It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or “it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.
«Science is much more than a body of knowledge. It is a way of thinking.»
To improve something, we need to understand why it is successful or not. Otherwise, we are just copying thoughts or behaviors without understanding why they worked.
Starting in the 1970s, scientists began to ask: what are the first principles of meat? The answers generally include taste, texture, smell, and use in cooking. Do you know what is not a first principle of meat? Once being a part of an animal.
Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t.
Thought experiments can be defined as “devices of the imagination used to investigate the nature of things.” Many disciplines, such as philosophy and physics, make use of thought experiments to examine what can be known.
A better way to answer the “who would win” question is through a remarkable ability of the human brain—the ability to conduct a detailed thought experiment. Its chief value is that it lets us do things in our heads we cannot do in real life, and so explore situations from more angles than we can physically examine and test for.
In order to place that bet, you would want to estimate how many possible basketball games Woody Allen wins against LeBron James. Out of 100,000 game scenarios, Allen probably only wins in the few where LeBron starts the game by having a deadly heart attack.
Let’s now explore a few areas in which thought experiments are tremendously useful:
- Imagining physical impossibilities
- Re-imagining history
- Intuiting the non-intuitive
Imagining physical impossibilities: Albert Einstein was a great user of the thought experiment because it is a way to logically carry out a test in one’s own head that would be very difficult or impossible to perform in real life. With this tool, we can solve problems with intuition and logic that cannot be demonstrated physically.
Re-imagining history: A familiar use of the thought experiment is to re-imagine history. This one we all use, all the time. What if I hadn’t been stuck at the airport bar where I met my future business partner? Would World War I have started if Gavrilo Princip hadn’t shot the Archduke of Austria in Sarajevo? If Cleopatra hadn’t found a way to meet Caesar, would she still have been able to take the throne of Egypt?
These approaches are called the historical counter-factual and semi-factual. If Y happened instead of X, what would the outcome have been? Would the outcome have been the same?
As popular—and generally useful—as counter- and semi-factuals are, they are also the areas of thought experiment with which we need to use the most caution. Why? Because history is what we call a chaotic system. A small change in the beginning conditions can cause a very different outcome down the line.
This is why any comprehensive thought process considers the effects of the effects as seriously as possible.
He developed second-order thinking into a tool, showing that if you don’t consider “the effects of the effects,” you can’t really claim to be doing any thinking at all.
Life is filled with the need to be persuasive. Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.
Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime. You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.
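The reassurance is easy to check with the passage’s own numbers (one in 10,000 doubling to two in 10,000):

```python
prior_rate = 1 / 10_000    # last year's chance of being stabbed: 0.01%
new_rate = prior_rate * 2  # "violent crime has doubled"

# The relative change sounds alarming; the absolute change is tiny.
print(f"old risk: {prior_rate:.2%}")                      # 0.01%
print(f"new risk: {new_rate:.2%}")                        # 0.02%
print(f"absolute increase: {new_rate - prior_rate:.2%}")  # 0.01%
```

Headlines report relative changes; the prior base rate is what converts them into the absolute risk you actually face.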
Why are more complicated explanations less likely to be true? Let’s work it out mathematically. Take two competing explanations, each of which seems to explain a given phenomenon equally well. If one of them requires the interaction of three variables and the other the interaction of thirty variables, all of which must have occurred to arrive at the stated conclusion, which of these is more likely to be in error? If each variable has a 99% chance of being correct, the first explanation is only 3% likely to be wrong. The second, more complex explanation, is about nine times as likely to be wrong, or 26%. The simpler explanation is more robust in the face of uncertainty.
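That arithmetic can be verified directly, assuming (as the passage does) that each variable is independently correct with probability 0.99:

```python
def p_wrong(n_variables, p_correct=0.99):
    """Chance that at least one of n independent variables is off."""
    return 1 - p_correct ** n_variables

simple = p_wrong(3)        # ~0.03: "only 3% likely to be wrong"
complicated = p_wrong(30)  # ~0.26: "about nine times as likely to be wrong"
print(round(simple, 3), round(complicated, 3), round(complicated / simple, 1))
```

The failure probability grows roughly linearly in the number of required parts at first, which is why each extra moving part makes an explanation strictly more fragile.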
How do you know something is as simple as it can be? Think of computer code. Code can sometimes be excessively complex. In trying to simplify it, we would still have to make sure it can perform the functions we need it to. This is one way to understand simplicity. An explanation can be simplified only to the extent that it can still provide an accurate understanding.
Hard to trace in its origin, Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity.
“I would say you’ve fallen into the commonest fallacy of all in dealing with social and economic subjects—the ‘devil’ theory. You have attributed conditions to villainy that simply result from stupidity…. You think bankers are scoundrels. They are not. Nor are company officials, nor patrons, nor the governing classes back on earth. Men are constrained by necessity and build up rationalizations to account for their acts.”
Reality & Perception
When you learn to see the world as it is, and not as you want it to be, everything changes. The solution to any problem becomes more apparent when you can view it through more than one lens. You’ll be able to spot opportunities you couldn’t see before, avoid costly mistakes that may be holding you back, and begin to make meaningful progress in your life.
«You only think you know, as a matter of fact. And most of your actions are based on incomplete knowledge and you really don’t know what it is all about, or what the purpose of the world is, or know a great deal of other things. It is possible to live and not know.» —Richard Feynman
In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better.
The biggest barrier to learning from contact with reality is ourselves. It’s hard to understand a system that we are part of because we have blind spots, where we can’t see what we aren’t looking for, and don’t notice what we don’t notice.
Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions.
The first flaw is perspective. We have a hard time seeing any system that we are in.
Admitting that we’re wrong is tough. It’s easier to fool ourselves that we’re right at a high level than at the micro level, because at the micro level we see and feel the immediate consequences.
Increasingly, our understanding of things becomes black and white rather than shades of grey. When things happen in accord with our view of the world we naturally think they are good for us and others. When they conflict with our views, they are wrong and bad.
The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.
When we read the news, we’re consuming abstractions created by other people. The authors consumed vast amounts of information, reflected upon it, and drew some abstractions and conclusions that they share with us. But something is lost in the process. We can lose the specific and relevant details that were distilled into an abstraction. And, because we often consume these abstractions as gospel, without having done the hard mental work ourselves, it’s tricky to see when the map no longer agrees with the territory. We inadvertently forget that the map is not reality.
We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes.
In order to use a map or model as accurately as possible, we should take three important considerations into account:
1. Reality is the ultimate update.
2. Consider the cartographer.
3. Maps can influence territories.
Reality is the ultimate update: When we enter new and unfamiliar territory it’s nice to have a map on hand. Everything from travelling to a new city, to becoming a parent for the first time has maps that we can use to improve our ability to navigate the terrain. But territories change, sometimes faster than the maps and models that describe them.
Consider the cartographer: Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators.
Maps can influence territories.
And maybe we just think very differently about something. When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So is bitcoin. So is love. The list goes on.
Sometimes it is easy to imagine ten different ways a situation could have played out differently, but more of a stretch to change the variables and still end up with the same thing.
We often spend lots of time coming up with very complicated narratives to explain what we see around us.
A multitude of aspects of the natural world that were considered miraculous only a few generations ago are now thoroughly understood in terms of physics and chemistry. At least some of the mysteries of today will be comprehensively solved by our descendants. The fact that we cannot now produce a detailed understanding of, say, altered states of consciousness in terms of brain chemistry no more implies the existence of a ‘spirit world’ than a sunflower following the Sun in its course across the sky was evidence of a literal miracle before we knew about phototropism and plant hormones.
“It would be even more mysterious to me if the matter we can see with our eyes is all the matter that exists.”
When we see something we don’t like happen and which seems wrong, we assume it’s intentional. But it’s more likely that it’s completely unintentional.
Wisdom & Practical Philosophy
The skill for finding the right solutions for the right problems is one form of wisdom.
The author and explorer of mental models, Peter Bevelin, put it best: “I don’t want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning.”
The second flaw is ego. Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback—the feedback we need to update our beliefs about reality. This creates a profound ignorance that keeps us banging our head against the wall over and over again. Our inability to learn from the world because of our ego happens for many reasons, but two are worth mentioning here. First, we’re so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.
As Confucius said, “A man who has committed a mistake and doesn’t correct it, is committing another mistake.”
We also tend to undervalue the elementary ideas and overvalue the complicated ones.
But simple ideas are of great value because they can help us prevent complex problems.
Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.
Ego, of course, is more than the enemy. It’s also our friend. If we had a perfect view of the world and made decisions rationally, we would never attempt to do the amazing things that make us human. Ego propels us. Why, without ego, would we even attempt to travel to Mars? After all, it’s never been done before. We’d never start a business because most of them fail. We need to learn to understand when ego serves us and when it hinders us. Wrapping ego up in outcomes instead of in ourselves makes it easier to update our views.
“To the man with only a hammer, everything starts looking like a nail.”
What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others. –Aristotle
I’m no genius. I’m smart in spots—but I stay around those spots. —Thomas Watson
Whenever we are getting advice, it is from a person whose set of incentives is not the same as ours. It is not being cynical to know that this is the case, and to then act accordingly.
Critically, we must keep in mind that our circles of competence extend only so far.
«As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.»
Creativity is intelligence having fun.
Moreover, the trolley problem remains relevant to this day as technological advances often ask us to define when it is acceptable, and even desirable, to sacrifice one to save many (and lest you think this is always the case, Thomson conducts another great thought experiment considering a doctor killing one patient to save five through organ donation).
What’s not obvious is that the gap between what is necessary to succeed and what is sufficient is often luck, chance, or some other factor beyond your direct control.
Assume you wanted to make it into the Fortune 500. Capital is necessary, but not sufficient. Hard work is necessary, but not sufficient. Intelligence is necessary, but not sufficient. Billionaire success takes all of those things and more, plus a lot of luck. That’s a big reason that there’s no recipe.
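The set language from the earlier highlight can be made literal. A toy sketch, where the conditions are just the illustrative ones named above and luck stands in for the “and more”:

```python
necessary = {"capital", "hard work", "intelligence"}
sufficient = necessary | {"luck"}  # all of those things, plus luck

# Every necessary condition belongs to the sufficient set...
assert necessary < sufficient  # proper subset
# ...but meeting only the necessary conditions guarantees nothing.
print(sorted(sufficient - necessary))  # the part no recipe can supply
```

The difference between the two sets is exactly what success stories tend to leave out.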
To be successful at a professional level in any sport depends on some necessary conditions. You must be physically capable of meeting the demands of that sport, and have the time and means to train. Meeting these conditions, however, is not sufficient to guarantee a successful outcome. Many hard-working, talented athletes are unable to break into the professional ranks.
- Prioritizing long-term interests over immediate gains
- Constructing effective arguments
Going for the immediate payoff in our interactions with people, unless they are a win-win, almost always guarantees that interaction will be a one-off.
Being aware of second-order consequences and using them to guide your decision-making may mean the short term is less spectacular, but the payoffs for the long term can be enormous.
A little time spent thinking ahead can save us massive amounts of time later.
There are two ways to handle such a world: try to predict, or try to prepare. Prediction is tempting. For all of human history, seers and soothsayers have turned a comfortable trade. The problem is that nearly all studies of “expert” predictions in such complex real-world realms as the stock market, geopolitics, and global finance have proven again and again that, for the rare and impactful events in our world, predicting is impossible! It’s more efficient to prepare.
The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise.
Instead of trying to divine the decisions that will bring wealth, we first try to eliminate those behaviors that are guaranteed to erode it. There are some pretty obvious ones. Spending more than we make, paying high interest rates on debt so that we can’t tackle paying back the principal, and not starting to save as early as we can to take advantage of the power of compounding, are all concrete financial behaviors that cost us money.
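The cost of that last error, delaying saving, can be quantified with the standard future-value-of-an-annuity formula. The $200 monthly contribution and 5% annual return below are purely hypothetical assumptions:

```python
def future_value(monthly, years, annual_rate=0.05):
    """Value of saving `monthly` at the end of each month for `years`,
    with returns compounded monthly at `annual_rate`."""
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

start_early = future_value(200, 40)  # begin at 25, stop at 65
start_late = future_value(200, 30)   # begin at 35, stop at 65
print(f"start early: ${start_early:,.0f}, start late: ${start_late:,.0f}")
```

Ten fewer years of contributions costs far more than the $24,000 of missed deposits alone, because the earliest dollars compound the longest.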
Think about not only what you could do to solve a problem, but what you could do to make it worse—and then avoid doing that, or eliminate the conditions that perpetuate it.
«He wins his battles by making no mistakes.» Sun Tzu
«Hence to fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting.»
Anybody can make the simple complicated. Creativity is making the complicated simple.
And for patients, Occam’s Razor is a good counter to hypochondria. Based on the same principles, you factor in the current state of your health to an evaluation of your current symptoms. Knowing that the simplest explanation is most likely to be true can help us avoid unnecessary panic and stress.
I need to listen well so that I hear what is not said.
Always assuming malice puts you at the center of everyone else’s world. This is an incredibly self-centered approach to life.
Author
Mauro Sicard
CEO & Creative Director at BRIX Agency. My main interests are tech, science and philosophy.