Fragility is a well-known concept. A glass cup is fragile to impact, leather is fragile to heat, and electronic devices are fragile to water. Fragility is the vulnerability of some object, notion, or idea to some stressor. As seen throughout Plato’s Dialogues, Socrates argues that there exists an opposite to everything in life. Adopting this view, we can deduce the opposite of the fragile. To do so, we must first define the fragile in more philosophical terms: what is fragile is what is harmed by volatility. In that case, the opposite of the fragile must be what gains from volatility. We call this opposite the Antifragile.
Given two opposites, there must also exist a neutral; thus, for the fragile and the antifragile, the neutral is the robust. In this sense, what is robust is what neither gains from nor is harmed by volatility. A rock is robust to being thrown around: it does not break, but at the same time it does not improve in any measure. On the other hand, the best example of something antifragile is mother nature. Nature gains from volatility, up to a point; this happens to be the basis of the Darwinian view of life.
The concept of antifragility is widely misunderstood by those in power. Many antifragile systems are forced into fragility by the over-interventionism of those who think that what is stable is desirable. To understand this point, we must first introduce the concept of multiplicative outcomes. Suppose you are reading a news article by a journalist who thinks that his very limited statistics background justifies his harmful claims. He claims that the mortality rate of some new disease is lower than the probability of dying in a car crash on a short trip to the grocery store, and that hence you should not be concerned. This statement ignores second-order effects. Dying in a car crash will not cause people other than those involved in the same crash to die, while contracting and dying from a new disease will almost certainly spread the disease to those around you in an exponential manner. The disease has multiplicative outcomes; the car crash does not. Fragile systems are known for their multiplicative fragility. Think of the banking system: the failure of one bank increases the probability of other banks failing. Banking is just one example of fragility induced by the over-interventionism of governments (primarily through bailouts). If, on the other hand, banks were left to endure the outcomes of their bad decisions, then, in the long run, we would have a less fragile banking system. A key takeaway is that what requires excessive maintenance is fragile.
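To make the distinction concrete, here is a minimal Python sketch contrasting an additive process with a multiplicative one; every number in it is invented purely for illustration.

```python
# Illustrative comparison of additive vs. multiplicative outcomes.
# All numbers below are made up for the sake of the example.

def additive_deaths(per_event, n_events):
    """Car crashes: each event harms only those directly involved."""
    return per_event * n_events

def multiplicative_cases(initial_cases, growth_factor, generations):
    """Contagion: each generation of cases seeds the next one."""
    cases = initial_cases
    for _ in range(generations):
        cases *= growth_factor
    return cases

print(additive_deaths(per_event=2, n_events=10))                                # 20: harm stays local
print(multiplicative_cases(initial_cases=2, growth_factor=2, generations=10))   # 2048: harm compounds
```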
Antifragile systems can be viewed as systems with leverage: small potential losses traded for large, sometimes unbounded, gains. We can extend this idea to our personal choices through what is called optionality. We have optionality when we can opt out of a situation if the outcomes turn bad, while retaining the option to stay and reap the benefits when the outcomes are good. It is through optionality that we become Antifragile. In effect, when we increase our exposure to positive Black Swans and reduce our exposure to negative Black Swans, we become Antifragile. Here, Black Swans are the rare, high-impact, and unpredictable events. One might ask: how do we increase or decrease our exposure to events that we cannot predict? To answer this question, we need to draw a distinction between predicting an event and acknowledging that an event could happen.
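As a rough sketch of optionality, the snippet below simulates an option-like payoff: a small fixed cost to walk away from bad outcomes while keeping the upside of good ones. The premium and the outcome distributions are assumptions made only for this example; the point is that, for the same average outcome, more volatility yields a better average payoff.

```python
import random

random.seed(0)

def option_payoff(outcome, premium=1.0):
    """Walk away when the outcome is bad (lose only the premium),
    keep the upside when the outcome is good."""
    return max(outcome, 0.0) - premium

def average_payoff(volatility, n=100_000):
    # Outcomes drawn around a mean of zero; only their spread (volatility) differs.
    return sum(option_payoff(random.gauss(0.0, volatility)) for _ in range(n)) / n

print(average_payoff(volatility=1.0))   # negative on average: upside too small to cover the premium
print(average_payoff(volatility=5.0))   # positive on average: same mean outcome, more volatility
```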
The fallacy of predictability
Many scientists have attempted to predict the exact time, location, and magnitude of future earthquakes. However, none to this day has managed to develop a reliable model. Earthquakes are tricky: a false positive (predicting an earthquake when there will be none) could cause unwarranted social panic, and a false negative (predicting no earthquake when there will be one) could lead local authorities to relax building codes, leaving the region under-prepared.
Earthquakes are unpredictable by nature. However, we know that some areas of the world are more earthquake-prone than others. If we were to examine the historical record and plot the frequency of earthquakes with magnitude greater than or equal to some cutoff, over a range of cutoffs, we would observe that the relationship follows a power law. For example, we might observe that for every 15 earthquakes of at least magnitude 3, there are 2 earthquakes of at least magnitude 4. If we assume that earthquakes really do follow a power law, then we can estimate the expected frequency of high-impact earthquakes, even if we have never observed one before. For example, if we calculate that an earthquake of magnitude 8 or more is expected every 500 years, then we should not be as worried as if the expected frequency were once every 50 years.
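Using the illustrative counts above (15 quakes of at least magnitude 3 for every 2 of at least magnitude 4), a short sketch of this extrapolation might look as follows. The yearly rate of magnitude-3 quakes is an assumed number, and the Gutenberg-Richter-style power law is simply taken as given.

```python
import math

# Counts taken from the example in the text: for every 15 earthquakes of
# magnitude >= 3 there are 2 of magnitude >= 4.
count_mag3 = 15
count_mag4 = 2

# Power law of the Gutenberg-Richter form: N(m) ~ 10 ** (a - b * m),
# so one extra unit of magnitude scales the frequency by 10 ** (-b).
b = math.log10(count_mag3 / count_mag4)   # roughly 0.875

# Assumed (made up) observation: 15 quakes of magnitude >= 3 per year.
rate_mag3_per_year = 15

def expected_rate(magnitude, ref_mag=3, ref_rate=rate_mag3_per_year):
    """Extrapolate the annual rate of quakes of at least `magnitude`."""
    return ref_rate * 10 ** (-b * (magnitude - ref_mag))

rate_mag8 = expected_rate(8)
print(f"expected magnitude-8+ quakes per year: {rate_mag8:.2e}")
print(f"expected return period: {1 / rate_mag8:.0f} years")
```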
This notion of estimating the expected frequency of events of different impacts feeds into the idea of increasing exposure to positive Black Swans and decreasing exposure to negative Black Swans: we use occurrences of low-impact events to estimate the frequency of high-impact events.
Are you fragile?
How can you know whether you are fragile or antifragile? Simple: perform a small, hopefully non-harmful, experiment. Increase your exposure to some stressor by a small amount and record how much you are harmed. Now double the exposure and record the harm again. If doubling the exposure doubles the harm, then you are in a linear system and should not worry much. However, if doubling the exposure more than doubles the harm, then you are in a multiplicative system, and hence you are fragile.
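A minimal sketch of this doubling experiment, with hypothetical harm functions standing in for whatever response you actually measure:

```python
def is_fragile(harm, dose, factor=2.0):
    """Return True if scaling the dose up scales the harm more than
    proportionally, i.e. the harmful response accelerates."""
    return harm(factor * dose) > factor * harm(dose)

# Hypothetical harm functions, chosen only for illustration.
linear_harm = lambda dose: 3.0 * dose          # harm proportional to dose
accelerating_harm = lambda dose: dose ** 2     # harm grows faster than dose

print(is_fragile(linear_harm, dose=1.0))        # False: linear system, not fragile
print(is_fragile(accelerating_harm, dose=1.0))  # True: doubling the dose quadruples the harm
```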
The nonlinearity of outcomes is fundamental to life. Compare jumping off a one-meter ledge 20 times with jumping off a 20-meter cliff once. The outcomes of these two very unsafe scenarios are different even though the total exposure to the stressor is the same. We humans are fragile to falling from heights.
We can test for antifragility using the same procedure. We increase our exposure to some favorable event and test whether the sum of the outcomes resulting from many small exposures equals the outcome of one outright large exposure. One example of this test is weight lifting. Lifting 1 kilogram for 100 repetitions is not as effective as lifting 50 kilograms for two repetitions (heavy lifting is much more beneficial). Therefore, we are antifragile to weight lifting, up to a point.
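The same idea in code, comparing many small doses against one concentrated dose of the same total load. The training-response curve below is purely illustrative, including its cap, which stands in for the "up to a point" caveat.

```python
def gains_from_concentration(benefit, total_load, n_small):
    """Compare n_small small doses against one concentrated dose of the same
    total load; an antifragile (convex) response favors concentration."""
    small_dose = total_load / n_small
    spread_out = n_small * benefit(small_dose)
    concentrated = benefit(total_load)
    return concentrated > spread_out

# Hypothetical training-response curve: grows faster than linearly, then caps out.
def training_benefit(load_kg):
    return min(load_kg, 120) ** 1.5   # illustrative numbers only

print(gains_from_concentration(training_benefit, total_load=100, n_small=100))  # True
```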
Convexity and Concavity (the technical part)
We have established that what is fragile does not like variability, and vice versa for the antifragile. Now we formalize these ideas using mathematics. Jensen’s inequality relates the average of a function to the function of the average. For the rest of this section, assume that the higher the value of \(f(x)\), the better the outcome.
We write the mathematical representations of the two expressions to avoid any confusion. The average of the function \(f\) over the interval \([a,b]\) is
$$\bar{f} = \frac{1}{b-a} \int_{a}^{b}{f(x)} \ dx$$
While the function of the average of the interval \([a,b]\) is
$$f(\bar{x}) = f\left(\frac{a+b}{2}\right)$$ For convex functions (functions whose graph curves upward, so that the curve lies below the straight line connecting any two of its points), Jensen’s inequality asserts that the average of the function is greater than or equal to the function of the average. \(f(x) = x^2\) and \(f(x) = e^x\) are two examples of convex functions. The implication here is that for inputs \(x_1,\dots,x_n\)
$$\frac{f(x_1)+\dots+f(x_n)}{n} \geq f\left(\frac{x_1+\dots+x_n}{n}\right)$$ Therefore, for convex systems, it is more favorable for the inputs to be more volatile, assuming the same mean. Hence antifragile systems are convex. On the contrary, concave functions, such as \(f(x) = \sqrt{x}\) or \(f(x) = \ln(x)\), have the opposite effect: the average of the function is less than or equal to the function of the average. Hence, fragile systems are said to be concave.
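A quick numerical check of Jensen’s inequality, using volatile inputs with a fixed mean and the convex and concave examples mentioned above:

```python
import math
import random

random.seed(1)

def average_of_f(f, xs):
    return sum(f(x) for x in xs) / len(xs)

def f_of_average(f, xs):
    return f(sum(xs) / len(xs))

# Volatile inputs scattered around a fixed mean of 10.
xs = [10 + random.uniform(-5, 5) for _ in range(100_000)]

convex = lambda x: x ** 2   # convex: average of f >= f of average
concave = math.log          # concave: average of f <= f of average

print(average_of_f(convex, xs), ">=", f_of_average(convex, xs))
print(average_of_f(concave, xs), "<=", f_of_average(concave, xs))
```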
The formalization of convexity and concavity plays a big role in understanding the fragility, and antifragility, of complex systems. If the response \(f\) of some system to various inputs \(x_1,\dots,x_n\) resembles a concave function, then the system is fragile; the opposite holds for convex responses and the antifragile.
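One rough way to probe this from measurements is the discrete second difference of the observed response at three equally spaced inputs; the sample values below are illustrative only.

```python
def second_difference(f_low, f_mid, f_high):
    """Discrete curvature of a response sampled at three equally spaced inputs:
    positive -> locally convex (antifragile), negative -> locally concave (fragile)."""
    return f_high - 2 * f_mid + f_low

# Hypothetical responses of a system measured at inputs 1, 2, 3 (made-up numbers).
print(second_difference(1.0, 4.0, 9.0))    # +2.0  -> convex response, antifragile
print(second_difference(0.0, 0.69, 1.10))  # -0.28 -> concave response, fragile
```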
Conclusion
Understanding antifragility is a way to improve one’s outcomes in life, not to control them. As the ancient Stoics would put it, focus on changing what you can directly influence, which here translates into increasing exposure to positive Black Swans and decreasing exposure to negative Black Swans.
This post is in large part influenced by Nassim Nicholas Taleb’s Incerto book series.