A Crash Course In Complexity
Editor | On 28, Jun 2018
"If a factory is torn down, but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves… there's so much talk about the system. And so little understanding."
Robert Pirsig, Zen & The Art Of Motorcycle Maintenance
"Some problems are so complex that you have to be highly intelligent and well-informed just to be undecided about them."
Laurence J Peter
One of my favourite experiences in the whole world is watching a murmuration of starlings. Half a million small birds flying in spectacular close formation before they roost for the night, creating an ever-changing pattern in the sky. Not only is it beautiful to watch, it's also a terrific example of a complex system in action. No-one (including the starlings themselves) can predict what the shape of the murmuration will look like from one moment to the next. The overall shape of the murmuration is what complexity scientists would describe as 'emergent'.
Emergence is a key property of any complex system. Here's a list of some of the other important characteristics of complex adaptive systems, as may be relevant to the context of problem solving and innovation:
- There is no definitive formulation of 'the problem'… you don't understand it till you solve it
- There is no end to the problem
- Solutions are not true-or-false, but merely 'good'-or-'bad'
- There is no immediate and no ultimate test of a solution to the problem. Every instance of the problem is essentially unique ('you can never step in the same river twice')
- Every solution to the problem is a 'one-shot operation'; because there is no opportunity to learn by trial-and-error, every attempt counts significantly
- There is not an enumerable set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into a plan
- The problem is a symptom of another problem.
- The existence of discrepancies when representing the problem can be explained in numerous ways – there is no such thing as 'the root cause'
- Small discrepancies in understanding or modelling of the system can quickly get magnified into extreme differences in outcome (the apocryphal butterfly flapping its wings and causing a hurricane, and pretty much any attempt to predict the weather)
- The choice of explanation determines the nature of the problem's resolution
- Every complex system 'emerges' from the interaction of one or more basic underlying principles ('levers of influence'/'DNA'). The more complex the problem, the more hierarchical levels of these underlying principles there are likely to be.
- The quality of the solution is determined by the proportion of 'all' the underlying principles that are understood and have been incorporated into the solution model ('Only variety can absorb variety')
- The connections between the things in the system are more important than the things themselves.
- The 'best' way to solve a complex problem is to make modifications at the principle level.
In the case of the starling murmuration, the underlying principles from which the overall shape and pattern emerge are quite simple. There is no controlling starling with a master plan; there is simply a heuristic – 'fly as close to your neighbours as possible' – being enacted by each and every starling. Watch a murmuration for a while and you notice that there are never any impacts. Try to focus on one single starling and you'll start to see how it applies the simple 'stay as close as possible' rule, and how tiny variations in distance serve to trigger enormous changes in the shape of the overall murmuration.
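The 'fly as close to your neighbours as possible' heuristic can be sketched in a few lines of code. This is a toy illustration, not a real starling model: each bird steers toward the centre of its nearby neighbours (cohesion), but refuses any move that would close inside a minimum gap – which is why there are never any impacts. All the radii and rates here are invented for the sketch.

```python
import random

def step(positions, neighbour_radius=5.0, min_gap=0.5, rate=0.1):
    """One tick: every bird drifts toward the centre of its neighbours,
    unless that move would bring it closer than min_gap to any of them."""
    new = []
    for i, (x, y) in enumerate(positions):
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i and (px - x) ** 2 + (py - y) ** 2 < neighbour_radius ** 2]
        if not nbrs:
            new.append((x, y))
            continue
        cx = sum(p[0] for p in nbrs) / len(nbrs)   # centre of the neighbours
        cy = sum(p[1] for p in nbrs) / len(nbrs)
        nx, ny = x + (cx - x) * rate, y + (cy - y) * rate
        # 'never any impacts': abandon the move if it would close the gap
        too_close = any((px - nx) ** 2 + (py - ny) ** 2 < min_gap ** 2
                        for px, py in nbrs)
        new.append((x, y) if too_close else (nx, ny))
    return new

def spread(ps):
    """Mean squared distance of the flock from its centroid."""
    mx = sum(p[0] for p in ps) / len(ps)
    my = sum(p[1] for p in ps) / len(ps)
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in ps) / len(ps)

random.seed(1)
flock = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(20)]
before = spread(flock)
for _ in range(50):
    flock = step(flock)
after = spread(flock)   # the flock tightens, but the gap rule stops collapse
```

Run it and the flock's spread shrinks over time without any bird ever colliding – the global tightening is emergent; no line of code mentions the flock as a whole.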
If murmurating starlings illustrate emergence in action from a relatively simple heuristic, mathematician Benoit Mandelbrot went a whole world further when he first published pictures of what we now know as the Mandelbrot Set. Amazing levels of complexity all arising from some apparently benign and extremely simple mathematics – f(z) = z² + c. How could it possibly be that such simplicity could produce such hypnotic beauty?
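That iteration is simple enough to play with directly. Below is a minimal escape-time sketch – the standard way of drawing the set, with an arbitrary cap of 50 iterations: a point c belongs to the set if repeatedly applying z → z² + c from z = 0 never escapes beyond |z| = 2.

```python
def escape_count(c, max_iter=50):
    """Number of iterations of z -> z*z + c before |z| exceeds 2.
    Returns max_iter if the orbit stays bounded (c treated as in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:          # once |z| > 2 the orbit provably diverges
            return n
        z = z * z + c
    return max_iter

# Coarse ASCII rendering of the familiar cardioid-and-bulbs shape:
# '#' marks points that never escaped within 50 iterations.
for im in [y / 10.0 for y in range(10, -11, -2)]:
    row = ""
    for re in [x / 20.0 for x in range(-40, 11, 2)]:
        row += "#" if escape_count(complex(re, im)) == 50 else "."
    print(row)
```

Two lines of real mathematics (`z = z * z + c` and the escape test) produce the whole picture; everything else is just plotting.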
The Mandelbrot Set kind of looks like nature, but actual nature is where we need to head in order to find the most amazing illustrations of emergent behaviour. One of the most remarkable nature-made structures is the termite mound: amazing constructions, sometimes several metres high, built by hundreds of thousands of termites with, again, no master plan.
Termites are small and their brains even smaller, so the instructions they are able to follow are relatively simple. Not quite so simple as 'fly as close to your neighbour as possible', but not by much. Termite mounds emerge from a set of 'rules' that look something like this:
- If the Queen's pheromone level exceeds a threshold, go and collect material
- Walk around randomly; if you find useful material, pick it up; if you find other useful material, put down what you are carrying
- If the temperature or oxygen level inside the mound drops below a comfortable level, block exit and entrance holes; if the temperature or oxygen level exceeds a comfortable level, clear the exits and entrances
- If unexpected holes appear, fill them
And that's pretty much it.
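The rule set above amounts to a small condition–action table. Here is a toy encoding of it (the thresholds, the `decide` function, and every number are inventions for illustration, not a real termite model): rules are checked in order and the first match fires.

```python
# Toy condition -> action table for the mound-building rules above.
# All thresholds are invented for illustration.
COMFORT_BAND = (28.0, 32.0)   # assumed 'comfortable' temperature range, deg C

def decide(state):
    """Return a single termite's next action from its local perceptions."""
    if state["pheromone"] > state["threshold"]:
        return "collect material"
    if state["temperature"] < COMFORT_BAND[0]:
        return "block exits"
    if state["temperature"] > COMFORT_BAND[1]:
        return "clear exits"
    if state["holes"] > 0:
        return "fill holes"
    return "walk randomly"    # default: wander and handle material

# Strong queen signal wins over everything else:
print(decide({"pheromone": 0.9, "threshold": 0.5, "temperature": 30.0, "holes": 0}))
# No signal, but the mound is too hot:
print(decide({"pheromone": 0.1, "threshold": 0.5, "temperature": 35.0, "holes": 2}))
```

Nothing in `decide` mentions a mound, yet a colony of agents running it (plus the material-handling rule) is enough for the mound to emerge – the architecture lives in the rules, not in any plan.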
Every termite mound emerges to be completely unique, but at the same time, they're all pretty much the same. And they will continue to be so as long as termites keep applying the same basic principles.
If we wished to encourage termites to make a 'better' mound, the way to do it would be to change those principles. Any other means of altering the design of a mound wouldn't work – the termites would simply keep applying their already established rules and the mound as we know it today would consistently 're-emerge'. Take a chainsaw and lop the top off a mound, and within a very short while it will magically re-appear.
The ultimate point here is that whenever we (humans) attempt to change a complex system by any means other than changes to the underlying 'DNA' principles, the system will always naturally return to its original state. We see this in action with the large majority of change initiatives inside organisations: a wilful manager decides they want to (say) create an 'innovation culture' across the business and then tries to bludgeon everyone into complying with their desire. If they're lucky, they might get some actual useful innovative output for a while, but if they haven't understood or made the change happen at the 'DNA' level, very soon after they depart for pastures new, the system will revert to its previous un-innovative state.
Even when we try to change a system at the principle level, in the majority of cases we end up with a mutated system that is worse than it was before. We can observe a very simple example of a 'principle-level' change in a starling murmuration if a hungry falcon happens to turn up. Add a falcon to the murmuration and the 'fly as close to your neighbour as possible' heuristic very quickly gets replaced by another rule: 'get away from the falcon'. The beautiful display of mass aerobatics quickly turns into a chaotic mess of starlings flying into one another.
A big part of the SI research is trying to get to an understanding of how complex systems – like 'society' – work at this 'principle' level. TrenDNA, for example, has DNA in its title because our aim was to uncover the underlying principles that influence and drive why systems emerge in the way they do. The Strauss and Howe 'generation cycle' theory rests on the underpinning 'DNA' idea that society emerges from the way in which parents raise their children, and that, for each generation, the choices a parent makes are in turn influenced by how they were raised by their own parents. There is no absolute rule that says society has to go through a crisis period every four generations; merely that, so long as this parental-influence 'DNA' remains present in the way it is, it's very likely we'll keep seeing the same four-generation archetype picture emerging and re-emerging.
Trying to innovate in this kind of complex environment, we propose, fundamentally means altering the system at this principle level and, moreover, doing it in such a way that, unlike what happens when we add a falcon to a murmuration, we find a change or combination of changes that somehow causes the system to emerge in a manner that is fundamentally better. There's no rule that says society 'has to' enter a crisis period every four generations, but we'll only ever successfully avoid periodic crises if we manage to somehow alter the parent-child influence model – and to do it in such a manner that we avoid the human equivalent of falconry.
We might go so far as to say that innovation only really has the chance to happen when we are able to make changes at the core principle level. Put another way, if we don't understand the principles from which our current system has emerged, our chances of innovation success are diminishingly small. Which means, we think, that the key question for any prospective innovation team, when they're thinking about 'what don't we know yet?', is how well we do or don't know what the underlying principles – the f(z) = z² + c – of our system are.
Spend a few moments thinking about whether you think you might know what they are for a system you're currently responsible for improving, and we suspect you'll quickly begin to see why such a large proportion of innovation attempts end in failure. It's not supposed to be a scary thought, but it probably is anyway. Call that an underlying principle.