Nassim Taleb is a very bright guy. He has some strange social and communication habits (he savagely ridicules those he thinks exemplify foolish qualities, he’s annoyingly arrogant, and he comes across as a bit scatterbrained both in TV interviews and in some of his writing). Because of this, it’s a bit surprising that his work (The Black Swan, Fooled by Randomness) has become so popular. But, given that he “predicted” the financial crisis, and made a lot of money betting on it (or, more accurately, betting against the market’s assumption that it wouldn’t occur), he’s recently become a celebrity and is being treated as a business guru (he was very visible at Davos this year, where the charts above and below were presented). His books present some interesting, and truly original, thinking.
One of my current work projects is to get business leaders to think of environmental sustainability as congruent in the long term with business sustainability, and to think of both in the context of risk management. I recently presented a paper on this subject that I co-authored, in London at a Prince’s Trust event on sustainability. It argues that the risk management models currently used in business need to be enhanced to consider:
Taleb’s key argument fits well with the above ideas. His thesis for the book is:
We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract. Everything good (e.g. aesthetics, ethics) and wrong (e.g. being fooled by randomness) with us seems to flow from this.
We are wired, he argues, to deal with immediate emergencies (fight or flight, when being pursued by predators), and to optimize the likelihood of procreation. Because of this, our brains, emotions and instincts can be “fooled”. Several of these types of foolishness are now getting our species into deep trouble:
Because of these biases, we are, he argues, very poor at assessing risks (both their likelihood and severity). I would say (as someone who has struggled with large organizations that have a strong, unacknowledged bias against true innovation) we are equally poor at assessing opportunities (both their likelihood of success and their consequences if they do succeed), because of these same biases.
So, looking at the risks in the top chart above, Taleb would probably say that (1) we are probably underestimating the consequences of many of the risks on the left (low perceived likelihood) side of this chart, (2) we are missing a raft of risks on this left side that we have forgotten can occur or can’t even imagine occurring, and (3) the combined probability of at least one (and probably several, possibly interdependent) of these ‘individually low-likelihood’ risks occurring is very high, and that occurrence, far more than the higher-probability “known” risks on the right side of the chart, is what we should really be considering, and preparing for.
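Taleb’s point (3) is simple arithmetic: even if each risk on the left side of the chart is individually unlikely, the chance that at least one of them occurs grows quickly as you add risks. A minimal sketch, assuming (purely for illustration) ten independent risks at 5% each over the decade; the probabilities are hypothetical, not estimates from the chart:

```python
# Probability that at least one of several independent, individually
# unlikely risks occurs: 1 minus the product of (1 - p_i).
# The probabilities below are hypothetical illustrations, not estimates.

def prob_at_least_one(probs):
    """Chance that at least one independent event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)  # chance that none of the events occur
    return 1.0 - p_none

# Ten 'individually unlikely' risks, each given a 5% chance per decade:
risks = [0.05] * 10
print(round(prob_at_least_one(risks), 2))  # → 0.4
```

So ten “5% risks” together give roughly a 40% chance that at least one materializes, which is exactly why the left side of the chart deserves more attention than its individual entries suggest. (Real risks are rarely independent, which, as Taleb would note, usually makes the combined picture worse, not better.)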
This is at least easier when we know what those risks are. We can anticipate the consequences of another disastrous war, next time in Iran or North Korea (and Obama’s decision to escalate the war in Afghanistan today signals that he is also incapable of learning the lessons of history, which is an ominous sign). We know, from the sad lesson of Katrina, that the consequence of natural catastrophes in the 21st century will be to abandon afflicted cities to die (we simply cannot afford to rebuild them), just as we abandon old buildings and factories.
From the 1970s, and again from 2008, we have an inkling of the consequence of huge oil price spikes (though the gnomes of Davos still cannot get themselves to acknowledge that the real risk is not a price spike, but the end of oil as the engine of our economy). From the great blackouts we’ve experienced, we’re reluctantly aware that the decaying and neglected infrastructure in cities everywhere is going to cause us enormous problems, but because we consider it (for now) a low-likelihood catastrophe (and because we can’t afford to fix it) we just put it out of our minds. Same thing with pandemics and water crises: we know they’re coming, and that they will both cause a horrific economic downturn (and the indirect economic consequences will probably kill more people than the diseases and droughts will kill directly), but because they’re still ‘unlikely’ in any year (and hence ‘unlikely’ to occur in the 10-year horizon of the charts above), we do nothing.
Another real problem is all the risks that are not even on the chart. What if the real political crisis is not war in Iran, but the collapse of Mexico? There are plenty of warning signs for this, but we haven’t even begun to consider what happens when organized crime takes over an anarchic state right beside us, and fifty million people seek asylum elsewhere. What if the real terrorism risk is not an ‘international’ threat but a bunch of whacked-out individuals who manage to produce (not as hard as you might think) weaponized anthrax and use it as a carrier for smallpox? What about a good old-fashioned nuclear war between India and Pakistan? I can imagine dozens of risks, some of which have a long history of occurring but not recently (think Mao and ask why a populist coup in China is not on the risk list above), that belong in the upper left corner of this chart. They are all perhaps ‘unlikely’, but taken together, their probability is as high as the probability of an attack on the US was a year before 9/11, and their consequence is likely to be much greater.
My pick for ‘breakout’ risk of the year? Food crisis (notice they call it “food price volatility”: the gnomes can’t quite get themselves to use the real ‘f’ word: famine). It’s in the upper left (#1) but there’s lots of evidence it should be in the upper right. Unless they’ve read some history, people think famine is something that only happens in Africa and Asia. But then, last year the gnomes gave “asset price collapse” (their euphemism for global depression) only a 20% chance of occurring in the next decade, and they still don’t think it’s much more likely than that.
Another weakness in our analysis of risks is that we tend to view them all as temporary ‘events’ that need to be mitigated and survived, until things “return to normal”. But just as some innovations (what Christensen and Raynor call “disruptive innovations”) permanently change the business landscape, some risks (climate change, the end of oil) will, when they occur, usher in permanent structural changes in our world and in our economic and political systems. That’s something Taleb doesn’t deal with, but which I hope to continue to write about. The real value of scenario planning, simulations and other adaptation risk response strategies is not so much that they help us anticipate system shocks (though they can do that), as that they help us prepare for permanent shifts in our world, and help us learn to cope with complexity.
PS: Taleb’s advice is to shun the mainstream media, avoid self-help books and advice that would presume to make us who we are not, never complain (it’s no one’s “fault” so there is no point), never take compliments or harsh criticism too seriously (they say more about the speaker than about you), avoid superstition, never gamble, always be skeptical of apparent causality, and try to avoid path-dependent decisions (those you make unaware of your anchoring bias, described above). He also suggests not scheduling your life tightly, since he’s observed that “time optimizers” are generally more unhappy because they set up more opportunities for “failure”. I actually find his advice (despite his warning against taking advice from anyone) more interesting than a lot of his explanation of how we are fooled by (im)probabilities.