foobuzz

by Valentin, June 25, 2022, in books

Antifragile (Nassim Nicholas Taleb, 2012)

I've read Antifragile by Nassim Nicholas Taleb, and it was quite interesting.

The point of the book is that the usual spectrum we consider when pondering fragility, which goes from fragile to robust, is actually only half of a larger spectrum, the missing half going from robust to antifragile. Antifragile systems are systems that actually improve when subjected to shocks, errors, or mishaps. The book contains various examples: natural selection relies on variability, which itself comes from random mutations in DNA; the human body builds resilience when it gets sick (e.g. vaccination); etc.

An important characteristic of antifragile systems is that they frequently endure small shocks and errors: small enough not to bring them down, but sizable enough to be valuable to the system. Antifragile systems tend to be organic, complex, and decentralized, grown through bottom-up build-up rather than top-down design.

The book builds on the Black Swan idea developed by the same author in a previous book. This idea holds that it is useless to try to build models that predict the future, because at some point some shit we've never seen before will hit the fan, with a massive impact on everything we take for granted. Such unforeseen events (e.g. wars and epidemics) are so big that History as a whole is mostly shaped by them, and is therefore a sequence of twists and turns, like the story of a novel, rather than anything smooth that could have been modeled.

In my opinion, the most fascinating part of the book's argument is the one about non-linearity. Basically, Taleb argues that everything good in life (or in business, or in whatever system) comes from rare opportunities with a massive payoff, not from smooth, very focused sailing. Therefore, all one needs to do is repeatedly put oneself in situations where the payoff can be huge and the worst case is bounded (what he calls convex situations). Very focused undertakings with "strategic" plans tend to be the contrary: the nominal course of events is the actual best case, while in the meantime anything can happen to make things go berserk (concave situations).
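To make the distinction concrete, here is a little toy simulation (the probabilities and payoffs are numbers I made up, not anything from the book): the convex bet loses a little almost every time but occasionally pays off big, while the concave bet gains a little almost every time but occasionally blows up.

```python
import random

random.seed(42)  # reproducible runs

def convex_bet():
    """Bounded downside, rare huge upside: lose 1 most of the time,
    win 100 in the rare (5%) good case. All numbers are made up."""
    return 100 if random.random() < 0.05 else -1

def concave_bet():
    """The mirror image: a small steady gain, with a rare (5%) blow-up."""
    return -100 if random.random() < 0.05 else 1

trials = 100_000
print("convex average payoff: ", sum(convex_bet() for _ in range(trials)) / trials)
print("concave average payoff:", sum(concave_bet() for _ in range(trials)) / trials)

# Expected values: convex is about +4 per trial, concave about -4 per trial,
# even though the concave bet "wins" 95% of the time.
```

The concave bet looks great on any given day, which, as I understand Taleb, is exactly why such exposures are so seductive and so dangerous.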

That's why the author so despises everything top-down: bureaucrats, bankers, politicians, strategic visions, etc. The best things happen through playing around and tinkering with trial and error. The book gives several examples of the history of science having been rewritten to make it look like some discovery was inevitable, when in fact it was an experimenter playing around in the lab.


I found the general idea of antifragility quite interesting, and ever since I read the book, I've tried to apply this new concept in both my personal life (I guess I should try to be less of a control freak) and my professional life (when thinking about the software systems I develop). The book offers no magic recipe for making a system antifragile, but rather various heuristics about what such systems usually look like, so I guess one must go through some trial and error before getting the hang of it (trial and error being one of the usual characteristics of antifragility).

I am only moderately convinced by the idea of antifragility and the related ideas expressed in the book. Since the author is dealing with systems that are immensely complex and deeply understood by no one, many of his arguments are based on heuristics, anecdotes, and intuitions. That doesn't mean the ideas are incorrect, but they're more suggestions and interesting perspectives from which to look at things than a complete theory of how things work (which makes sense, since the author seems to dislike such theories).

The recurring theme I disliked the most in the book is the anthropomorphism around nature: the author truly is a fan of "mother nature", presented as the epitome of a wise and all-knowing antifragile system, in contrast to medicine, which over-medicates people and harms them in the process. This view seems extreme to me. Natural selection works through random mutations selected for survival: there is no guarantee of speed in how something gets fixed, and no guarantee of optimality in how well it gets fixed. So to me nature is extremely vulnerable to Black Swan events, being way too slow to handle sudden changes in the environment, unlike medicine, which at least has a shot, by virtue of being engineered.

That being said, even without fully committing to the theory, the book offers very interesting perspectives that I'll keep in mind. Furthermore, I largely share many of the views expressed by the author, who places the practitioner over the academic; bottom-up tinkering over top-down designing; natural exercising over treadmill running; city-states over centralized empires; grandma's advice over bureaucrats' policies; non-interventionism when no intervention is necessary; etc. The author exudes a sort of hippie vibe, if not an anarchist one at times. Anything related to bureaucracy, MBAs, politicians, corporate bullshit, academic bullshit, etc., is held in contempt. Taxi drivers, artisans, and all the authentic people of life are respected.

Finally, the book is regularly amusing thanks to the author's frankness and the mockery with which he expresses his contempt for the things he doesn't like. For example, policy makers are "lecturing birds how to fly" when they define a process for something that would have worked perfectly well without their intervention.

Quotes

It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it.

Just as we are not likely to mistake a bear for a stone (but likely to mistake a stone for a bear), it is almost impossible for someone rational, with a clear, uninfected mind, someone who is not drowning in data, to mistake a vital sign, one that matters for his survival, for noise — unless he is overanxious, oversensitive, and neurotic, hence distracted and confused by other messages.

If you put 90% of your funds in boring cash (assuming you are protected from inflation) or something called "numeraire repository of value," and 10% in very risky, maximally risky securities, you cannot possibly lose more than 10%, while you are exposed to massive upside. Someone with 100% in so-called "medium" risk securities has a risk of total ruin from the miscomputation of risks.
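The arithmetic behind this barbell is easy to check; a minimal sketch (the 10x upside figure is mine, purely for illustration):

```python
cash, risky = 0.90, 0.10  # the 90/10 split from the quote

# Worst case: the risky 10% goes to zero; the cash floor is untouched.
worst = cash + risky * 0
# One hypothetical upside: the risky slice returns 10x (made-up figure).
upside = cash + risky * 10

print(f"worst case:  {worst:.2f}x the initial capital ({1 - worst:.0%} max loss)")
print(f"upside case: {upside:.2f}x if the risky slice does 10x")
```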

Reflecting on the observation that the patent for wheeled luggage was granted in 1972:

The simpler and more obvious the discovery, the less equipped we are to figure it out by complicated methods. The key is that the significant can only be revealed through practice. How many of these simple, trivially simple heuristics are currently looking and laughing at us?

Think of the following event: A collection of hieratic persons (from Harvard or some such place) lecture birds on how to fly. Imagine bald males in their sixties, dressed in black robes, officiating in a form of English that is full of jargon, with equations here and there for good measure. The bird flies. Wonderful confirmation! They rush to the department of ornithology to write books, articles, and reports stating that the bird has obeyed them, an impeccable causal inference. The Harvard Department of Ornithology is now indispensable for bird flying. It will get government research funds for its contribution.

My experience of good practitioners is that they can be totally incomprehensible — they do not have to put much energy into turning their insights and internal coherence into elegant styles and narratives.

People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can't afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications.

Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works — we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning — it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.

The predictors' reply when we point out their failures has typically been "we need better computation" in order to predict the event better and figure out the probabilities, instead of the vastly more effective "modify your exposure" and learn to get out of trouble, something religion and traditional heuristics have been better at enforcing than naive and cosmetic science.

Someone with a linear payoff needs to be right more than 50% of the time. Someone with a convex payoff, much less. The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality — your _function of something_ is very convex, so you can be wrong and still do fine — the more uncertainty, the better.

What is top-down is generally unwrinkled (that is, unfractal) and feels dead.

Pharmaceutical companies are under financial pressures to find diseases and satisfy the security analysts. They have been scraping the bottom of the barrel, looking for disease among healthier and healthier people, lobbying for reclassification of conditions, and fine-tuning sales tricks to get doctors to overprescribe. Now, if your blood pressure is in the upper part of the range that used to be called "normal", you are no longer "normotensive" but "pre-hypertensive," even if there are no symptoms in view.

Druin Burch, in Taking the Medicine, writes: "The harmful effects of smoking are roughly equivalent to the combined good ones of every medical intervention developed since the war... Getting rid of smoking provides more benefit than being able to cure people of every possible type of cancer."

Postdictors, who explain things after the fact — because they are in the business of talking — always look smarter than predictors.