The Great Mental Models – General Thinking Concepts

(My) intro

I'm not new to this "meta" business of learning how to learn, of knowing how we see and perceive reality (from a neurological and psychological perspective); what still amazes me is how some people live without being aware of so many things. I'm not talking about specific knowledge in a specific field: obviously I can't expect anybody to be a modern Renaissance person like Leonardo da Vinci, simply because the number of topics and the depth of knowledge are growing to the point that we would need more than one life just to read the titles of academic papers – if you think I'm exaggerating, test it yourself: go to Google Scholar and search for a fairly broad topic, and you'll be impressed by the number of scientific articles the search engine finds. So I'm not saying we should deep dive (see Why are we so superficial?) into everything, because we are one step before that: understanding reality, modeling what happens inside and outside us. The reason math, physics, psychology and many other subjects elaborate models is that we want to describe a certain phenomenon (not always "certain", that's why we have statistics… sorry for the joke), so as to better understand it and sometimes also try to predict a small portion of the future.
I keep learning almost every day (across a really wide range of topics) and often I find something so exciting that I keep wondering; after a few "a-ha"/"wow" moments, I see things differently (more than just picking up a simple notion while scrolling a social media feed, assuming you cultivated a decent "informative" feed – by the way, I suggest deleting every social media account). I mean: you start looking and thinking differently, you no longer see things the same way; you may even realize how little we know and understand in general – it's like often finding yourself in the "valley of despair" of the Dunning-Kruger effect. Sometimes I wonder how some people (the common person who learned only the basics at school and never questioned anything) see the world, how and how much of it they "perceive" – and this goes from the simple case of ignoring the meaning of "standard deviation" in data presented to a general audience, to understanding really complex situations like economics, climate, wars, pandemics, and everything in between, from micro to macro.
Our life is a journey: not only do some people live in a non-deliberate way, like a castaway at the mercy of the waves (so completely under the influence of external agents, with the idea of having almost no control), but some are not even aware of being on a journey, barely perceiving the movement, heedless of whatever happens. And mind you, I'm not saying this in a classist way, because it has nothing to do with predefined "castes": there are affluent professionals, with a high standard of living and a high level of education, who are nevertheless "dull" (I think I'll read "On the Ignorance of the Learned" by William Hazlitt, sooner or later), while there are "simple people" who love to question everything, trying to develop a better understanding of reality.

About this post

This is not an exhaustive work (posted here under fair use, unless the author thinks otherwise), so please buy and read the original book (believe me: "The Great Mental Models" books are great in print too, excellent quality indeed, a pleasure to see and touch) and… wait, here you may ask: "Why should I read this post instead of searching, for example, for the one written by James Clear, the author of "Atomic Habits" (a nice book I already wrote about)?" Well, this one is mine. I mean: my elaboration, enriched with my knowledge, previous experience and further thoughts. You can see it as a way of viewing the concepts through my mind. I do think this is usually much more interesting than viewing half-naked people on social media (or wherever you like), since this is, to me, the ultimate nudity and (I know this is a personal point of view) much more interesting to discover in people. It may seem absurd, but I put more stress on the intro and on "lateral considerations" than on the models themselves: after all, you can find structured recaps of all of them summarized by many other people on the web (including, of course, the primary source: the author himself, on his blog: fs.blog).

NOTE: as in similar previous posts, what is written in parentheses are my thoughts.

Intro

The quality of your thinking depends on the models in your head (compare with: the quality of your life depends on the quality of your questions). We can learn to see the world as it is, not as we want it to be (otherwise it's like children pretending reality is different, failing a step of child development). The reason we should acquire more wisdom through books like this one is that education doesn't prepare us for the real world, since it is focused on memorizing notions, not on developing critical thinking, and even advanced master's programs fail to properly "update" our way of thinking (they're more interested in notions). We are lucky when we spot people like Charlie Munger, intellectually stimulating, who open the door to unexpected intellectual pleasure. To pick someone's brain, we can simply read a book (you can see the opposite now with endless streaming, where info is diluted, or at the other extreme with stupid short videos). In books like this, what Publius Terentius wrote is true in a way: "Nothing has yet been said that's not been said before", but there's valuable work in curating, editing and shaping the work of others (see also "Steal Like an Artist"; after all, we are "standing on the shoulders of giants").
“People search for certainty. But there is no certainty. People are terrified—how can you live and not know? It is not odd at all. You only think you know, as a matter of fact. And most of your actions are based on incomplete knowledge and you really don’t know what it is all about, or what the purpose of the world is, or know a great deal of other things. It is possible to live and not know.” said Richard Feynman (yes, we can’t live thinking “ignorance is bliss”, but we miss all the beauty).

Acquiring wisdom

“I believe in the discipline of mastering the best of what other people have figured out”, Charlie Munger (so here you can see the value of mastering “Deep Work” in your deliberate learning/practice during an intentional life).
Usually, the person with the fewest blind spots wins (see, about ourselves, the Johari window), so removing (or at least acknowledging and minimizing) blind spots helps us move closer to understanding reality; we think better and so make better decisions. Peter Bevelin said: "I don't want to be a great problem solver, I want to avoid problems" (this is actually the "lateral thinking" or "thinking out of the box" typical of people working in intelligence and security).
Mental models describe the way the world works; they shape the way we think and understand (but be aware that some models don't fit properly in some situations, ending in distortion and oversimplification). Models are great for multidisciplinary thinking (see "Range" by Epstein).
We can better understand reality if we use the lenses of mental models (I see here that my metaphor of "Different Glasses" isn't so original :D), starting with awareness, as David Foster Wallace described: "There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says "Morning, boys. How's the water?" And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes "What the hell is water?"" (you can read this differently if you switch the old and the young fish, since sometimes the opposite is true: after a long time, you start ignoring the environment, becoming unaware in another way).
We face different flaws:

  1. Perspective: we have a hard time seeing any system that we are in (see Galileo and Descartes for more), so we should be open to other perspectives.
  2. Ego: many of us have too much invested in our opinions (see "Ego Is the Enemy"; we identify with our opinions, as noted in "The Scout Mindset").
  3. Distance: the further we are from the results of our decisions, the easier it is to keep our current views rather than update them (that's why, when we don't see other solutions, we can "zoom out", ask for an external point of view that is less emotionally involved, or wait longer to cool down – but remember that sometimes the opposite is true: you may need to get closer, to see more details, to find ways to maximize resolution, as in radar measurements/discrimination or in satellite imaging – in the words of photojournalist Robert Capa: "If your pictures aren't good enough, you aren't close enough").

Once you find yourself making a mistake, correct it: "A man who has committed a mistake and doesn't correct it is committing another mistake" (probably Confucius); you can grow through the pain of updating your existing false beliefs (Bayesian updates on your map, like children do while growing up).
“Most geniuses – especially those who lead others – prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities” (see again “Range” by Epstein: linking present knowledge differently or making a different use).
Like other books on this topic, here too the author recommends not just studying the insights but using them for a positive change in our life (maybe a "utilitarian" way of seeing things, but you can also learn for the sake of knowing more; just remember that those who adapt better and faster are the ones who survive in the future). I could suggest trying to find the values that maximize an "adaptation function", something like:

v* = argmax over v of P(t+1 | v)

where P(t+1 | v) is the probability of surviving at the future instant t+1 given your capabilities v, so you should actually focus on your "future self", working now on your capabilities, to "Be prepared" – as Lord Gen. Robert Baden-Powell would say.
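Just for fun, the argmax idea can be sketched in a few lines of Python – a toy model in which both the candidate capability mixes and the survival-probability function are completely made up for illustration:

```python
# Toy sketch of picking the capability mix that maximizes a hypothetical
# "adaptation function": a made-up probability of thriving at time t+1.
# Weights, candidates and the function itself are illustrative assumptions.

def survival_probability(capabilities: dict) -> float:
    """Made-up model: a weighted sum of capabilities, each capped at 1."""
    weights = {"health": 0.4, "skills": 0.35, "network": 0.25}
    return sum(weights[k] * min(v, 1.0) for k, v in capabilities.items())

# Three hypothetical ways of investing your time/energy
candidates = [
    {"health": 0.9, "skills": 0.2, "network": 0.3},
    {"health": 0.6, "skills": 0.7, "network": 0.5},
    {"health": 0.5, "skills": 0.9, "network": 0.9},
]

# The "argmax" over the candidate allocations
best = max(candidates, key=survival_probability)
print(best, round(survival_probability(best), 3))
```

The point is not the numbers but the mindset: enumerate options, score them against the future you want, pick the best one.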
We had better abandon a wrong view of reality, as well as the "passive feeling" (a fixed mindset due to our inability to think/change, or due to a strong and wrong belief in extreme fatalism or, on the contrary, in aprioristic determinism, regardless of the scientific view of complex quantum systems in entanglement, or the simple magical thinking of astrology – put in simple words: let's go back to the cause-effect principle we learned in primary school, plus the principle of responsibility/liability/accountability).
Don't throw away observation and all the fundamentals of scientific thinking, otherwise it's like going back to the Middle Ages, to the time when people thought bloodletting was good (for more recent mistakes, you can also study the story of lobotomy, with all the interests behind it – if you prefer ancient history, instead, look up the human sacrifices some populations performed to make sure the sun would rise the next day, mistaking correlation for causation). Be aware that models don't fit everywhere (if you only have a hammer, you'll look at everything like it's a nail; it would be like a not-so-smart child trying to force a cube into the space designed for a sphere… I actually once found a military officer literally doing that with a plug in a different socket, with a possible flame/electric hazard – of course he was not an engineer like me; you see here the importance of scientific thinking). To quote again the recently deceased Charlie Munger: "80 or 90 important models will carry about 90% of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight".

The only shape she has is a cube, but now she's facing circle-shaped holes – Image generated by me with DALL-E

Not only are we biased, we also have access to only a small part of the manifestation of reality, like in the famous story of the blind men and the elephant (which I cited in older posts), where everyone thinks they're touching a different object/animal, each focusing on a specific part and on what they experience of the animal (and experience and imagination play an even bigger role in the case of representation, like the hat / elephant (again) eaten by a snake in Saint-Exupéry's "The Little Prince", not to mention what we see when we are trapped in Plato's cave).
“The chief enemy of good decisions is a lack of sufficient perspectives on a problem”, said Alain de Botton (writing about enemies, I’ll quote also the super-famous Sun Tzu: “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle”).
In this era, we need not only specialized people, but also generalists (see more in “Range” by Epstein) since “Disciplines, like nations, are a necessary evil that enable human beings of bounded rationality to simplify their goals and reduce their choices to calculable limits. But parochialism is everywhere, and the world badly needs international and interdisciplinary travelers to carry new knowledge from one enclave to another” (Herbert Simon).

01. The Map is not the Territory

Maps are reductions of what they represent and (unless we take the variable "time" into account) snapshots of a point in time, representing something that no longer exists (so here you can see again the importance of updating maps; the same is true when we go back in time with a recent map, say if you want to study what was present there before). Maps help us reduce complexity to simplicity; they can be explanatory and predictive (the same holds for deterministic software and/or machine learning). In the words of the mathematician Alfred Korzybski (who popularized this expression about maps, in a 1931 paper illustrating the relationship of mathematics to human language, and of both to physical reality):

  1. A map may have a structure similar or dissimilar to the structure of the territory.
  2. Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.
  3. A map is not the actual territory.
  4. An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

(I can add here that the issues of resolution and the impossibility of representing every aspect/layer on a map – unless you're working with multilayer maps, as in multi-spectral radar imaging – were well known in the past, when maps started to be presented to emperors and commanders: some of them yelled that a map was too small to represent the majesty of cities like Rome or, when cities were enlarged on the map to avoid insulting them, that the distances were no longer realistic in proportion). And we can't trust GPS maps 100%, for example (there are people who drove/fell into the water because they were just following the GPS, like in this case).
Also, some phenomena/things may need different maps depending on the conditions: physicists discovered they needed another map (that is, other models) for small-scale quantum physics (the same was true when I studied and worked with electronics at radio frequency: the very same piece of electrical conductor "switched" from a simple piece of metal to a whole circuit; if you're curious, this is the difference between the lumped element model and the distributed element model, the latter used when the wavelength becomes comparable to the physical dimensions of the circuit).
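The lumped-vs-distributed switch can even be sketched in code. The λ/10 threshold below is a common engineering rule of thumb (not from the book): treat a conductor as a distributed circuit when its length exceeds about a tenth of the signal wavelength.

```python
# Rule-of-thumb check: does a conductor need the distributed element model?
# Heuristic: distributed treatment when length > wavelength / 10.

C = 299_792_458  # speed of light in vacuum, m/s

def needs_distributed_model(length_m: float, freq_hz: float) -> bool:
    wavelength = C / freq_hz
    return length_m > wavelength / 10

# The same 10 cm trace: irrelevant at 50 Hz mains, a whole circuit at 2.4 GHz
print(needs_distributed_model(0.10, 50))      # False: lumped model is fine
print(needs_distributed_model(0.10, 2.4e9))   # True: distributed model needed
```

Same piece of metal, two different "maps", chosen by the conditions – exactly the point above.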
“Remember that all models are wrong: the practical question is how wrong do they have to be to not be useful”, said George Box.
When you deal with a map:

  1. Reality is the ultimate update;
  2. Consider the cartographer (ability/knowledge/experience, biases and the goal when they created the map);
  3. Maps can influence the territory (decision makers fooled by a map and so changing the territory by their decision).

(I also want to add an important consideration about maps: when you deal with data, such as metrics from statistics, make sure you have a proper understanding of mathematics. I see too many people who are really illiterate about data, who don't even understand the difference between average and median: how are they supposed to use data maps of phenomena and be aware of data manipulation? Solution: study at least the basic concepts.)
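The average-vs-median trap in three lines of Python, with invented numbers: one outlier drags the mean far from the "typical" value, while the median barely moves.

```python
# Why average vs median matters: one billionaire in the room skews the
# mean, while the median still describes the typical person.
from statistics import mean, median

incomes = [28_000, 30_000, 32_000, 35_000, 40_000, 1_000_000_000]

print(f"mean:   {mean(incomes):,.0f}")    # dragged up by the outlier
print(f"median: {median(incomes):,.0f}")  # robust "typical" value
```

If a headline says "average income grew", always ask which of these two maps of the data it is using.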

Image created by me with DALL-E

02. Circle of competence

"I'm no genius. I'm smart in spots – but I stay around those spots", Thomas Watson.
(When we listen to an expert, we may be impressed by their knowledge, but sometimes we forget that it is probably vertical knowledge in just one or two fields. A few months ago, an Italian professor of logic, Piergiorgio Odifreddi, invited to speak in Camogli, said that the only way to stay mentally sane and keep your self-esteem during conventions full of Nobel Prize winners is to remind yourself that even those people know a really small fraction of total knowledge.)
To understand what a circle of competence is, imagine an old man who knows everything about his village: he's the well-respected "Lifer" who knows everything about the place and the people, a guide to everyone. Then imagine the "Stranger" who just entered the town: in a couple of days he has seen the most important spots and spoken to the main "celebrities" (the sheriff, the mayor, the teacher, the doctor) and, simply for this reason, the Stranger thinks he knows almost everything about the town. We sometimes consider ourselves competent like a Lifer when we are probably just Strangers. Sun Tzu said: "We shall be unable to turn natural advantage to account unless we make use of local guides" (in fact, military officers abroad engage with people who know the place; you must also study the local culture, to avoid risks and small "diplomatic incidents", since even a mindlessly made gesture can be taken as an offence). In extreme situations, climbing a mountain with or without a local sherpa can make all the difference between surviving or not (same in war zones).
Within our circle of competence, we know exactly what we don't know. A circle of competence can't be built overnight: it requires time and deliberate learning/practice, and it must be updated, because the world is dynamic (in intelligence terminology: once you acquire a solid foundation on a target, the "basic intelligence", you need to update your knowledge if you want to stay current, hence "current intelligence"). Learn from the mistakes of others: you can't live long enough to make them all yourself ("Ars longa, vita brevis").
Get rid of your ego (after all, we’re “bayesianly updating” our models) so you can sometimes solicit external feedback from expert professionals to build and maintain your circle (you can even ask a coach), you can also keep a private and easy journal of your own performance to give yourself self-feedback.
And when we are outside our circle of competence?

  1. Learn at least the basics of the realm you're operating in, while acknowledging that we are Strangers, not Lifers (I suggest a method like the one illustrated in "The First 20 Hours" by Josh Kaufman, to quickly understand what's really important at the beginning and to deconstruct the skill into simple elements you can easily learn);
  2. Talk to someone whose circle of competence in the area is strong (when you're a Stranger, don't be afraid to ask for information), but be aware that this person could have incentives to suggest something to you (e.g.: you ask for the best restaurant in town and this person is a relative of a chef at a specific restaurant, or someone suggests you buy something on which they receive a commission – try to mitigate this with the help of the Internet);
  3. Use a broad understanding of the basic mental models of the world to augment your limited understanding of the field in which you find yourself a stranger.

Even Queen Elizabeth I of England admitted she didn't know everything needed to rule a country, so she took counsel from (trusted) others (the humility to admit we don't know everything is not a sign of weakness; where it is considered otherwise, we are in a toxic context).
(Opposite to the culture that wants us barely competent in everything but deeply knowledgeable in nothing, instead of becoming "so good they can't ignore you", as Cal Newport would say.) Warren Buffett once spoke about Rose Blumkin, CEO of one of Buffett's businesses, as an example: she refused to deal with stocks since she only understood (and mastered) cash and furniture, so she focused on remaining in her circle to bring out the best from her business (obviously there are fields where we'd better know the basics, but the concept is that it's OK to stay within our circle if we want to operate as professionals – and keep expanding the circle you're strong within, so enhance your strengths; don't waste too much time getting better at your weaknesses).

As with many mental models, this one can be observed "from the inside" (our circles) and "from the outside" (the circles of others): about the latter, keep in mind that a speaker can leverage the halo effect, especially combined with authority bias: remember their circles of competence, especially when they claim something really far away from those circles. Whether it's our circles or others', we may of course move out of them, just remembering that we're then exploring fields where we may know really little and so could be really ignorant (stricto sensu, without any offense: we ignore a lot), so be cautious in thinking and taking decisions.

Image created by me with DALL-E

03. First principles

The great Richard Feynman expressed in a few words one of the main problems in education: "I don't know what's the matter with people: they don't learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile!".
When we don't really understand something (or when we want to be sure we properly understood it), beyond the "falsifiability" approach (if Karl Popper forgives me, the brutally short version is: we have an assumption or a common belief to test, and we observe whether there are exceptions that falsify it, thus proving it wrong – while universal statements cannot be verified, they can be proven false), we can go back to the basics. There are different techniques; one of them is Socratic questioning:

  1. Clarifying your thinking and explaining the origins of your ideas (Why do I think this? What exactly do I think?)
  2. Challenging assumptions (How do I know this is true? What if I thought the opposite?)
  3. Looking for evidence (How can I back this up? What are the sources?)
  4. Considering alternative perspectives (What might others think? How do I know I am correct?)
  5. Examining consequences and implications (What if I am wrong? What are the consequences if I am?)
  6. Questioning the original questions (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

Another way is the "five whys": ideally, you should be able to go deeper and answer a cascade/chain of "why?" (this is familiar to parents who find themselves speechless when a 5-year-old child starts asking "OK, everyone does it, but why?"; it can feel like an earthquake if you never questioned the fundamental aspects of our world, life and society).
Carl Sagan said: "Science is much more than a body of knowledge. It is a way of thinking" (this is something I have tried countless times to explain to some teachers, who are only interested in children memorizing dates and names in the short term, without caring about reasons and consequences). A lot of great discoveries in science come from challenging assumptions. Sometimes it's "incremental innovation" (compare also with the Japanese "kaizen" and with the compound interest effect), but sometimes it's a "paradigm shift": instead of focusing on fine-tuning what already exists, huge changes and revolutions can start by challenging the status quo and identifying the first principles of what we do, as with meat alternatives: researchers found that what makes meat so tasty is its consistency and its response to the Maillard reaction, and it doesn't matter that it was part of a living being, so now they are producing alternatives without killing animals.
(I can never stress enough the importance of understanding the fundamentals; especially in this era we need people with a strong understanding of the basics, better if they have experimented rather than just learned the theory.)

Image created by me with DALL-E

04. Thought Experiment

Thought experiments can be defined as “devices of the imagination used to investigate the nature of things”.
If you had to bet on the result of a match between two professional players, or between one professional player and an average guy, you'd probably start with a thought experiment: you imagine the possible result based on your knowledge, simulating the game in your mind based on their physical aspects and the skills you guess they have. Thought experiments are more than daydreaming; they are more likely to follow a scientific approach, with these steps:

  1. Ask a question;
  2. Conduct background research;
  3. Construct hypothesis;
  4. Test with thought experiment;
  5. Analyze outcomes and draw conclusions;
  6. Compare to hypothesis and adjust accordingly (new questions, etc.)
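
The steps above can be caricatured in code: a toy Monte Carlo version of the bet mentioned earlier. The professional's 70% chance of winning any single point, and the race-to-11 format, are pure assumptions for illustration.

```python
# Toy "simulate the match in your mind": Monte Carlo of a race-to-11 game
# where the pro wins each point with (assumed) probability 0.7.
import random

def pro_wins_match(p_point: float, target: int = 11) -> bool:
    """Play one match point by point; return True if the pro reaches target first."""
    pro, amateur = 0, 0
    while pro < target and amateur < target:
        if random.random() < p_point:
            pro += 1
        else:
            amateur += 1
    return pro == target

random.seed(42)  # reproducible "thought experiment"
trials = 10_000
wins = sum(pro_wins_match(0.7) for _ in range(trials))
print(f"estimated P(pro wins) ≈ {wins / trials:.3f}")
```

A modest per-point edge compounds into a near-certain match win, which is exactly the kind of non-obvious conclusion a structured thought experiment surfaces.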

These experiments are generally useful in contexts like:

  1. Imagining physical impossibilities – Albert Einstein made great use of these, and it's also the case for scenarios involving catastrophes or choices like the trolley problem (now with many possible implications for autonomous systems, like self-driving cars);
  2. Re-imagining history (this is something used also when studying military strategies, not to be confused with the writing of alternative history novels like “The Man in the High Castle”);
  3. Intuiting the non-intuitive, to verify if our natural intuition is correct by running experiments in our deliberate conscious mind.

Image created by me with DALL-E

05. Second-order thinking

Second-order thinking is thinking farther ahead and thinking holistically: it requires us to consider not only our actions and their immediate consequences, but also the subsequent effects of those actions (this is exceptionally useful in complex systems, like politics and economics, but also the environment and human nutrition – that's why I appreciated the systemic approach in books like "Se pianto un albero posso mangiare una bistecca?", "Fa bene o fa male?" and many others). In 1963, ecologist and economist Garrett Hardin proposed the first law of ecology: "You can never merely do one thing" (this is often rendered in a humorous way in cartoons like The Simpsons, which sometimes show, with thought experiments of course, how slightly changing something in the environment can produce a completely different world). A well-known case of second-order effects is the (mis)use of antibiotics in meat (a more recent well-known case is the chain of consequences of the pandemic-related policies of different countries). The example recounted by Warren Buffett is the crowd at a parade: once a few people decide to stand on their tiptoes, everyone has to stand on their tiptoes, so no one can see any better (except the first row), but they're all worse off. A way to overcome our short-sightedness in similar cases is prioritizing long-term interests over immediate gains and constructing effective arguments (compare this with the usual politics of some countries, where politicians want to immediately counter-react, overreact and propose silly measures like forced positive discrimination). This mental model is also useful in discussions, for example with a boss or a colleague, when you can show that you have already taken into consideration the long-term, second-order effects of a proposed change.
Obviously at a certain point we must truncate the number of orders, especially when probabilities are involved, like "if you eat this, you can develop this, which can lead to that" (or, if you use a fishbone chart, you can't follow every branch to the end, especially when the weight of a cause or effect is extremely low; see also the Pareto 80/20 principle).

Be capable of looking at the end of the domino chain, remembering also there may be some butterfly effects – Image created by me with DALL-E

06. Probabilistic thinking

Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. The probabilistic machinery in our mind, with the heuristics illustrated in books like Kahneman's "Thinking, Fast and Slow" (but here I could add a lot of books, including those by another famous author, Nassim Taleb), served us (and still does) in a time before computers, when human life was mostly about survival.
It’s recommended to be familiar with at least these 3 important aspects of probability:

  1. Bayesian thinking: given that we have limited but useful information about the world, and are constantly encountering new information, we should take into account what we already know when we learn something new. If you are worried because the media said that the crime rate doubled in the last year, you should also take into account that the crime rate dropped a lot in the last decade, so you shouldn't be more worried than you were a few years ago – in technical terminology: remember the "base rate". Related to this, there is also conditional probability: observe the conditions preceding an event you'd like to understand;
  2. Fat-tailed curves: in a normal distribution – the famous bell curve – understand whether you are talking about a phenomenon with thin tails (most outcomes lie in a zone near the average) or with fat tails: in the latter case, an extreme variation/situation is more likely to happen – in the words of Nassim Taleb, be aware of the black swan, a rare event that is actually not as rare as we thought;
  3. Asymmetries: not all phenomena are perfectly balanced; there are situations in which one direction is much more probable than another, for example the probability of arriving 20% late compared to the probability of arriving 20% early – a common example is insurance companies, which rely strongly on statistics to spread the price of rare huge losses across a large enough population.
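
The first point (base rates) fits in a single Bayes update. The numbers below are textbook-style invented figures: a condition with a 1% base rate and a test with 99% sensitivity and a 5% false-positive rate.

```python
# Classic base-rate example: how likely is the condition after a positive test?
# All rates are invented for illustration.

base_rate = 0.01        # P(condition)
sensitivity = 0.99      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

# Total probability of testing positive
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# Bayes' rule: P(condition | positive)
posterior = sensitivity * base_rate / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")
```

Despite the "99% accurate" test, most positives are false positives simply because the condition is rare – exactly the base-rate neglect described above.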

Image created by me with DALL-E

07. Inversion

Most of us tend to think about a problem one way: forward; with inversion, we can flip the problem and think backward (or, in some cases, think in "negative logic" instead of the usual "positive logic").
One approach to applying inversion is to start by assuming that what you're trying to prove is either true or false, then show what else would have to be true. This is quite common during investigations, thinking along the lines of: "If I were this person doing that, what else would I have done?" (it works on the other side too: "If I wanted to avoid that, what else would I have tried to avoid/delete?"). The other approach is asking yourself what you are trying to avoid (instead of "I want to do that", focus on what you do not want; this is sometimes used in psychology as well). A famous marketing example is the one about Lucky Strike cigarettes: instead of thinking of ways to make women want a cigarette, they reasoned from the opposite direction and made the famous campaign in which cigarettes are proposed as an alternative to sweets after dinner (a kind of substitute), even suggesting/paying restaurants to add cigarettes to the menu alongside desserts.
Inversion is also applied following the “force field analysis” by Kurt Lewin:

  1. Identify the problem;
  2. Define your objective;
  3. Identify the forces that support change towards your objective;
  4. Identify the forces that impede change towards your objective;
  5. Strategize a solution, both augmenting or adding to the forces in step 3 or reducing or eliminating the forces in step 4 (this is also at the core of Atomic Habits).
Sun Tzu said: “He wins his battles by making no mistakes”. Sometimes it’s much better to find causes to reduce or eliminate instead of focusing on adding more (think of it as minimalism, “via negativa”, and remember that sometimes adding/doing more can even make things worse – see Iatrogenesis).
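Lewin’s five steps can be turned into a toy scoring exercise. This is only a sketch: the forces and their 1–5 weights below are hypothetical, invented for illustration:

```python
# Hypothetical driving and restraining forces, weighted 1-5 (made-up numbers).
driving = {"management support": 4, "team motivation": 3}
restraining = {"legacy tooling": 4, "fear of change": 2}

def net_force(drive, restrain):
    """Positive means the change is favoured, negative means it is blocked."""
    return sum(drive.values()) - sum(restrain.values())

print(net_force(driving, restraining))  # 1: change barely wins

# Step 5, "via negativa" style: removing a restraining force (e.g. training
# that dissolves the fear of change) often beats piling on new driving forces.
restraining["fear of change"] = 0
print(net_force(driving, restraining))  # 3
```

The point of the sketch is the last two lines: the same improvement in net force can come from subtracting an obstacle rather than adding yet another push.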
Image created by me with DALL-E

08. Occam’s razor

Charles Mingus said: “Anybody can make the simple complicated. Creativity is making the complicated simple”. Simpler explanations are more likely to be true than complicated ones (remember: this is just a model, not always true and applicable; when you’re dealing with intelligence or investigations there really is sometimes something complex behind, but it’s often better to focus on the simpler direct explanation, unless you are one of the many who see conspiracies everywhere). The medieval logician William of Ockham wrote that “a plurality is not to be posited without necessity”; so, all else being equal, when you have several candidate explanations for something, it’s more likely that the simple one suffices (or that the others give slightly better approximations, but with more flaws or more variables/conditions to validate). After Ockham, David Hume adopted the same approach when investigating supposed miracles, e.g.: the witness saw or described the event wrongly, or it was a phenomenon not (yet) explained by science (for more, see Carl Sagan’s “The Demon-Haunted World”… yes, it’s where the famous “extraordinary claims require extraordinary evidence” comes from). Complex explanations, involving complex systems and several variables, are more likely to be wrong than a simple/direct cause, since they require a certain “alignment of conditions” to hold; moreover, simplicity increases efficiency, since we have limited resources (above all: time). Imagine a patient goes to the doctor during the winter with a fever: should the MD quarantine the patient at the Centers for Disease Control, asking to check for an ultra-rare disease, or can the MD just assume it’s seasonal flu? (Remember we wrote “all else being equal” between possibilities: with other concurring symptoms, and maybe after an exotic vacation, we update the Bayesian probability.)
The same is true in management: when managers are struggling to find the causes of losses/failures, maybe they’re just neglecting the principles on which the company was founded. (I am a big fan of reduction and simplification, but remember that we live in a complex world, so the simplest solution is not always the best one, not to mention all the cases of (self-)deception, intentional or not.)
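The doctor example can be made concrete with Bayes’ rule. The priors and likelihoods below are made-up numbers for illustration, not real epidemiology:

```python
def posterior(prior_flu, p_fever_given_flu, prior_rare, p_fever_given_rare):
    """P(flu | fever) via Bayes' rule, considering only these two hypotheses."""
    num_flu = prior_flu * p_fever_given_flu
    num_rare = prior_rare * p_fever_given_rare
    return num_flu / (num_flu + num_rare)

# Winter fever: seasonal flu is vastly more common than the ultra-rare disease,
# even though both "explain" the fever about equally well.
p = posterior(prior_flu=0.05, p_fever_given_flu=0.9,
              prior_rare=0.00001, p_fever_given_rare=0.95)
print(f"{p:.4f}")  # 0.9998: the simple explanation dominates

# New evidence (say, an exotic vacation) raises the rare disease's prior,
# and the "all else being equal" clause no longer holds.
p2 = posterior(prior_flu=0.05, p_fever_given_flu=0.9,
               prior_rare=0.01, p_fever_given_rare=0.95)
print(f"{p2:.4f}")  # ≈ 0.83: still flu-favoured, but far less certain
```

This is exactly why Occam’s razor needs its “all else being equal” caveat: the simple explanation wins because of its overwhelming prior, and the moment the priors shift, so does the verdict.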

The straightforward path is often the most likely to explain the connection between point A and point B, instead of analyzing tortuous lateral paths. – Image created by me with DALL-E

09. Hanlon’s razor

Simply put, this model says that we should not attribute to malice that which is more easily explained by stupidity (in my career I had to deal with this a lot, checking whether something bad inside our organization was deliberately done by an “insider threat” with malicious intent, or whether it was more likely just negligence, or even something done in good faith).
Since we are human and we make mistakes, the explanation most likely to be right is the one that involves the least amount of intent (so look for lack of attention or will, incompetence, bad judgement, or any other unintentional cause, before concluding something was done with the specific purpose of harming, damaging, or taking advantage).
Look at Hanlon’s razor as a tool to minimize your confirmation bias when you start assuming the other party is acting with the worst malicious purpose ever. In military history there are many cases in which a conflict escalated because the first action was not intentional – or “not fully intentional”, like shooting at something genuinely mistaken for something else – but was assumed to be deliberate, so the response seemed “proportional” to the side that counter-reacted, yet absolutely not proportional in the eyes of the side that acted first, which didn’t know the consequences of what it was doing. In everyday life, it could be walking into a private property with no signs: imagine you are lost and accidentally cross a poorly signaled country border, and the border police start shooting, assuming you were intentionally trying to enter the country unnoticed. The police can certainly stop you, but using Hanlon’s razor they can start by assuming you entered the country by mistake, and then verify whether that was the case. Finally, again: you may think there is some conspiracy by politicians and economists, but it’s far more likely that they are not competent and failed to apply – or even ignored – tools like the models just explained, which can help in reasoning and making good decisions, rather than actively deciding with malice.

We hope these policemen will use Hanlon’s razor to understand he’s not a spy crossing the border, but just a confused tourist with a map that, for sure, is not the actual territory! – Image created by me with DALL-E

Last considerations

This is indeed a great book, my favorite of the Farnam Street collection. I won’t repeat here the thoughts of Pirandello, Watzlawick and many others on reality and perception (not to mention books like “Deviate” by Beau Lotto); you can read my old posts. I can just say that I have met a lot of people who could really benefit from understanding and applying at least a few mental models. Talking, for example, about the first model, the map: I had a collaborator (a soldier) who, once arrived in a new place, would quickly start walking without knowing the right direction to the destination. In his mind it was just “better moving than waiting” (exactly the opposite of the reasonable advice by Seneca: “When a man does not know what harbor he is making for, no wind is the right wind”). This was physical, but metaphorically speaking I see a lot of people going nowhere without even a small map.
If you’re a video-game nerd: a lot of people think they have understood everything, then they rush headlong like Leeroy Jenkins (World of Warcraft), ruining their own and others’ lives.
Of course I knew (and already applied) a lot of these models even before this book, but reading it is really a nice “journey”, and the author(s) provide a lot of links to other thoughts and facts.
Since I found some harsh criticism online (you can read some here), I want to emphasize again the law of the instrument: Abraham Maslow wrote, in 1966, “If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail”. So, do your best to collect and practice with as many tools as possible, so you can use the best one in each situation. And make use of them! 🙂
