As I wrote a few weeks ago, Antifragile was a better book than I expected it to be. One passage that stuck in my mind long after I read it was the following:
Risk management professionals look in the past for information on the so-called worst-case scenario and use it to estimate future risks […] They take the worst historical recession, the worst war, the worst historical move in interest rates, or the worst point in unemployment as an exact estimate for the worst future outcome. But they never notice the following inconsistency: this so-called worst-case event, when it happened, exceeded the worst [known] case at the time.
I have called this mental defect the Lucretius problem, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain in the world will be equal to the tallest one he has observed.
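Taleb's point can be made concrete with a small simulation. This is a minimal sketch of my own, not anything from the book: assuming observations are independent draws from a heavy-tailed (Pareto) distribution, it estimates how often the next stretch of history produces an event worse than anything previously recorded.

```python
import random

random.seed(42)

def sample_max(n, alpha=1.5):
    # the worst event in one "history" of n heavy-tailed observations
    return max(random.paretovariate(alpha) for _ in range(n))

# Across many simulated histories, check how often the next period
# produces an event that exceeds the worst case on record.
trials = 10_000
history_len, future_len = 100, 100
exceeded = sum(
    sample_max(future_len) > sample_max(history_len)
    for _ in range(trials)
)
print(f"future max beat historical max in {exceeded / trials:.0%} of trials")
```

Even with a future sample no longer than the record itself, the odds that the old record falls are about even: by symmetry, any one of the 200 observations is equally likely to be the largest, so the "worst case on file" is a poor ceiling on what comes next.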
When I first read this, I thought Taleb might be describing a valence effect: the tendency for people to overestimate the likelihood of good things happening. But the more I thought about it, the more I realized something else was going on. People expect positive outcomes to be exceeded, but they rarely consider that negative results can be topped as well.
In sports, the maxim 'records are meant to be broken' exhorts athletes and teams to strive for ever better performance. But we rarely consider that negative records can be broken too. The 1972-73 Philadelphia 76ers were long considered to hold an untouchable record for futility with a .110 winning percentage (9 wins in 82 games). But the 2011-12 Charlotte Bobcats proved us wrong with just 7 wins in a lockout-shortened 66-game season (a .106 winning percentage). To add insult to injury, the Bobcats ended the season with 23 consecutive losses.
The Lucretius problem is exacerbated by the fact that data captured in the past is not necessarily as reliable or as complete as data captured today. As a simple example, it is difficult to compare the intensity of the Great Lisbon earthquake of 1755 with China's Tangshan earthquake of 1976: measurement techniques have changed fundamentally over the intervening two centuries.
This means historical data often carries bias or variability. If that bias isn't accounted for in the statistical or predictive model, the results are suspect. As Taleb remarks in Antifragile:
We overestimate the validity of what has been recorded before and thus the trends we draw might tell a different story if we had the dark figure of unreported data.
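The 'dark figure' can be sketched in the same spirit. In this hypothetical example (the magnitudes, detection threshold, and era sizes are all invented for illustration), older instruments record only events above a threshold, so a naive comparison of the recorded data makes the past look more severe than the present:

```python
import random

random.seed(0)

# Two eras of identically distributed events (say, quake magnitudes),
# but the older era only *recorded* events above a detection threshold.
events_per_era = 50
threshold = 6.0  # older instruments miss anything below this

old_era = [random.gauss(5.5, 1.0) for _ in range(events_per_era)]
new_era = [random.gauss(5.5, 1.0) for _ in range(events_per_era)]

old_recorded = [m for m in old_era if m >= threshold]  # censored record
new_recorded = new_era                                 # complete record

old_mean = sum(old_recorded) / len(old_recorded)
new_mean = sum(new_recorded) / len(new_recorded)
print(f"old era mean (recorded): {old_mean:.2f}")  # inflated by censoring
print(f"new era mean (recorded): {new_mean:.2f}")
# The old era looks more severe only because its small events went unrecorded.
```

A model fit to the pooled record would conclude that events are getting milder, when in fact only the reporting has changed.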
So much for the oft-quoted proverb: ‘Those who do not learn from history are doomed to repeat it.’ Maybe history isn’t the greatest teacher after all.
Sounds like a serious case of analytic Declinism.