To recap: If we subject something fragile to stress, shock, randomness, or volatility, we can expect it to be harmed. If we do the same to something robust or resilient, we can expect it to resist harm, but it won't be any better off than when we started. But what if something subjected to stress, shock, randomness, or volatility actually benefits? That would be the opposite of fragile: antifragile. So how do we determine whether something is fragile or antifragile? Below are a few questions to consider:
Are our designs fragile or antifragile?
One way of using this lens is to examine our software designs and architectures. If we are locked into a single database vendor, for example, we are fragile with respect to that vendor's product and viability. In contrast, when we abstract our dependencies on third-party tools and libraries, we give ourselves more optionality and less fragility.
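As a sketch of that abstraction, consider hiding a vendor-specific storage client behind an interface we own. The names here (OrderStore, place_order, the in-memory implementation) are illustrative, not from any particular library; the point is that business logic depends only on our interface, so swapping vendors touches one class, not the whole codebase.

```python
from typing import Protocol

class OrderStore(Protocol):
    """Our own abstraction; vendor clients are adapted to fit it."""
    def save(self, order_id: str, payload: dict) -> None: ...
    def load(self, order_id: str) -> dict: ...

class InMemoryOrderStore:
    """A stand-in implementation; a vendor-backed adapter could replace it."""
    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def save(self, order_id: str, payload: dict) -> None:
        self._rows[order_id] = payload

    def load(self, order_id: str) -> dict:
        return self._rows[order_id]

def place_order(store: OrderStore, order_id: str, payload: dict) -> None:
    # Business logic sees only OrderStore, never a vendor API.
    store.save(order_id, payload)
```

If the vendor's product or viability becomes a problem, only the adapter behind OrderStore changes; the rest of the system retains its options.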
Design concepts such as the Single Responsibility Principle (a class should do one thing and do it well) and Dependency Injection promote antifragility, because they make it easier to rearrange our software to adapt to new needs. These are of course supported by test automation, so that we can more easily detect when we've broken something.
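A minimal sketch of both ideas together: each class below has one job, and the collaborator is injected rather than hard-wired, so a test double can stand in for a real gateway. The class names (Notifier, SignupService) are hypothetical, chosen only for illustration.

```python
class Notifier:
    """Single responsibility: deliver a message to a recipient."""
    def send(self, recipient: str, message: str) -> None:
        raise NotImplementedError

class RecordingNotifier(Notifier):
    """Test double injected in place of a real email/SMS gateway."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))

class SignupService:
    """Single responsibility: register new users."""
    def __init__(self, notifier: Notifier) -> None:
        # The notifier is injected, so it can be swapped or rearranged
        # without touching the signup logic itself.
        self._notifier = notifier

    def register(self, email: str) -> None:
        self._notifier.send(email, "Welcome!")
```

Because the dependency arrives through the constructor, an automated test can inject RecordingNotifier and immediately detect if a change breaks the signup behavior.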
Another consideration is performance and scalability. A fragile system is one in which performance degrades exponentially as load increases. This leaves the system more vulnerable to unexpected events (e.g. a regional Internet outage could shift unexpected load onto the system and trigger a cascading failure).
Are our teams and organizations fragile or antifragile?
Teams and organizations that push for 100% efficiency tend to be fragile, because there is insufficient slack (both in time and in redundancy of knowledge and skills) to adapt to unexpected change. Teams and organizations that deliberately learn from "things that didn't go well" (e.g. software defects, failed business proposals) tend toward antifragility, because disorder and randomness are harnessed to promote growth.
What about our agile methods?
Tinkering, in the form of experiments with potentially better ways of doing things, is inherently antifragile: experiments can be designed to have a positive "long tail", so that the cost of experimentation is capped while the potential for better outcomes is not. Along with optionality and the ratcheting effect of learning, this gives us the possibility of continually improving our outcomes.
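That asymmetry can be illustrated with a toy simulation. The distribution below is entirely assumed (a small fixed cost per experiment, a rare heavy-tailed payoff); it is not a claim about real experiment economics, only a demonstration that the downside is bounded while the upside is not.

```python
import random

def run_experiment(rng: random.Random) -> float:
    """One tinkering experiment: bounded cost, unbounded rare upside."""
    cost = 1.0                       # the most any experiment can lose
    if rng.random() < 0.05:          # rare big win in the long tail
        return rng.paretovariate(1.5) * 10 - cost
    return -cost                     # most experiments "fail" cheaply

def portfolio_value(n: int, seed: int = 42) -> float:
    """Total payoff of running n capped-cost experiments."""
    rng = random.Random(seed)
    return sum(run_experiment(rng) for _ in range(n))
```

Run enough of these cheap experiments and the occasional outsized win dominates the many small, contained losses, which is the long-tail shape the text describes.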
How will you apply these concepts? I've really only "scratched the surface" in terms of potential applications of Taleb's ideas. I can imagine many more ways of taking this work and using it for positive outcomes. What about you? Will you take these concepts and put them to work? And if so, how?