A Crash Course In Complexity

On 28 Jun 2018

Darrell Mann

“If a factory is torn down, but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves… there’s so much talk about the system. And so little understanding.”
Robert Pirsig, Zen & The Art Of Motorcycle Maintenance

“Some problems are so complex that you have to be highly intelligent and well-informed just to be undecided about them.”
Laurence J Peter

One of my favourite experiences in the whole world is watching a murmuration of starlings. Half a million small birds flying in spectacular close formation before they roost for the night, creating an ever-changing pattern in the sky. Not only is it beautiful to watch, it’s also a terrific example of a complex system in action. No-one (including the starlings themselves) can predict what the shape of the murmuration will look like from one moment to the next. The overall shape of the murmuration is what complexity scientists would describe as ‘emergent’.

Emergence is a key property of any complex system. Here’s a list of some of the other important characteristics of complex adaptive systems, as may be relevant to the context of problem solving and innovation:

  1. There is no definitive formulation of ‘the problem’… you don’t understand it till you solve it
  2. There is no end to the problem
  3. Solutions are not true-or-false, but merely ‘good’-or-‘bad’
  4. There is no immediate and no ultimate test of a solution to the problem. Every instance of the problem is essentially unique (‘you can never step in the same river twice’)
  5. Every solution to the problem is a ‘one-shot operation’; because there is no opportunity to learn by trial-and-error, every attempt counts significantly
  6. There is not an enumerable set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into a plan
  7. The problem is a symptom of another problem.
  8. The existence of discrepancies when representing the problem can be explained in numerous ways – there is no such thing as ‘the root cause’
  9. Small discrepancies in understanding or modelling of the system can quickly get magnified into extreme differences in outcome (the apocryphal butterfly flapping its wings and causing a hurricane, and pretty much any attempt to predict the weather)
  10. The choice of explanation determines the nature of the problem’s resolution
  11. Every complex system ‘emerges’ from the interaction of one or more basic underlying principles (‘levers of influence’/’DNA’). The more complex the problem, the more hierarchical levels of these underlying principles there are likely to be.
  12. The quality of the solution is determined by the proportion of ‘all’ the underlying principles that are understood and have been incorporated into the solution model (‘Only variety can absorb variety’)
  13. The connections between the things in the system are more important than the things.
  14. The ‘best’ way to solve a complex problem is to make modifications at the principle level.

In the case of the starling murmuration, the underlying principles from which the overall shape and pattern emerge are quite simple. There is no controlling starling with a master plan; there is simply a heuristic – ‘fly as close to your neighbours as possible’ – being enacted by each and every starling. Watch a murmuration for a while and you’ll notice that there are never any collisions. Try to focus on one single starling and you’ll start to see how it applies the simple ‘stay as close as possible’ rule, and how tiny variations in its distance serve to trigger enormous changes in the shape of the overall murmuration.
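As a toy illustration (the two-dimensional world, the numbers and the update rule below are invented assumptions, not real starling data), the whole ‘rule set’ can be written in a few lines: each simulated bird simply moves towards its nearest neighbour, and backs off if it is already too close.

```python
import random

def step(positions, attract=0.05, min_dist=1.0):
    """One update of the 'fly as close to your neighbour as possible' rule."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # find this bird's nearest neighbour
        nx, ny = min((p for j, p in enumerate(positions) if j != i),
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        dx, dy = nx - x, ny - y
        dist = (dx * dx + dy * dy) ** 0.5
        # move towards the neighbour; back away if already inside min_dist
        factor = attract if dist > min_dist else -attract
        new_positions.append((x + factor * dx, y + factor * dy))
    return new_positions

# 200 'starlings' scattered at random, then updated for 100 time steps
flock = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(200)]
for _ in range(100):
    flock = step(flock)
```

Nothing in that loop says anything about the overall shape of the flock; whatever shape appears, appears for free.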

If murmurating starlings illustrate emergence in action from a relatively simple heuristic, mathematician Benoit Mandelbrot went a whole world further when he first published pictures of what we now know as the Mandelbrot Set. Amazing levels of complexity all arising from some apparently benign and extremely simple mathematics – f(z) = z² + c. How could it possibly be that such simplicity could produce such hypnotic beauty?
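As a minimal sketch (the resolution, iteration limit and plotting window below are arbitrary choices of ours, not anything from Mandelbrot’s work), a few lines are enough to make the set ‘emerge’ on screen: iterate z → z² + c and ask whether z stays bounded.

```python
def in_mandelbrot(c, max_iter=50):
    """Return True if c appears to belong to the Mandelbrot Set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the entire 'DNA' of the system is this one line
        if abs(z) > 2:         # once |z| exceeds 2 the point is guaranteed to escape
            return False
    return True

# Coarse ASCII rendering over -2..1 (real axis) and -1.2..1.2 (imaginary axis)
for row in range(24):
    line = ""
    for col in range(72):
        c = complex(-2 + 3 * col / 71, -1.2 + 2.4 * row / 23)
        line += "#" if in_mandelbrot(c) else " "
    print(line)
```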

The Mandelbrot Set kind of looks like nature, but actual nature is where we need to head in order to find the most amazing illustrations of emergent behaviour. One of the most remarkable nature-made structures is the termite mound. Amazing structures, sometimes several metres high, built by hundreds of thousands of termites, with, again, no master plan.

Termites are small and their brains even smaller, so the instructions they are able to follow are relatively simple. Not quite so simple as ‘fly as close to your neighbour as possible’, but not by much. Termite mounds emerge from a set of ‘rules’ that look something like this:

  • If Queen pheromone level exceeds threshold, go and collect material: walk around randomly; if you find useful material, pick it up; if you find other useful material, put what you have down.
  • If temperature or oxygen level inside the mound drops below a comfortable level, block exit and entrance holes; if temperature or oxygen level exceeds a comfortable level, clear the exits and entrances.
  • If unexpected holes appear, fill them.

And that’s pretty much it.
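Written out as a hypothetical sketch (every name and threshold below is an invented illustration, not termite biology), the whole ‘program’ fits in one small decision function; the mound emerges from nothing more than hundreds of thousands of agents running it over and over.

```python
PHEROMONE_THRESHOLD = 0.5          # assumed trigger level for collecting material
COMFORT_TEMP = (28.0, 32.0)        # assumed comfortable temperature band, degrees C
COMFORT_OXYGEN = (0.18, 0.21)      # assumed comfortable oxygen fraction

def decide(carrying, found_material, pheromone, temperature, oxygen,
           unexpected_hole_nearby):
    """Return the next action for one termite, given only what it can sense."""
    if unexpected_hole_nearby:
        return "fill hole"
    if temperature < COMFORT_TEMP[0] or oxygen < COMFORT_OXYGEN[0]:
        return "block exits and entrances"
    if temperature > COMFORT_TEMP[1] or oxygen > COMFORT_OXYGEN[1]:
        return "clear exits and entrances"
    if pheromone > PHEROMONE_THRESHOLD:
        if carrying and found_material:
            return "put material down"
        if found_material:
            return "pick material up"
        return "walk around randomly"
    return "do nothing"

# e.g. decide(False, True, pheromone=0.8, temperature=30.0, oxygen=0.2,
#             unexpected_hole_nearby=False) returns "pick material up"
```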

Every termite mound emerges to be completely unique, but at the same time, they’re all pretty much the same. And they will continue to be so as long as termites keep applying the same basic principles.

If we wished to encourage termites to make a ‘better’ mound, the way to do it would be to change those principles. Any other means of altering the design of a mound wouldn’t work – the termites would simply keep applying their already established rules and the mound as we know it today would consistently ‘re-emerge’. Take a chainsaw and lop the top off a mound, and within a very short while it will magically re-appear.

The ultimate point here is that whenever we (humans) attempt to change a complex system by any means other than changing the underlying ‘DNA’ principles, the system will always naturally return to its original state. We see this in action with the large majority of change initiatives inside organisations: a wilful manager decides they want to (say) create an ‘innovation culture’ across the business and then tries to bludgeon everyone into complying with their desire. If they’re lucky, they might get some actual useful innovative output for a while, but if they haven’t understood or made the change happen at the ‘DNA’ level, very soon after they depart for pastures new, the system will revert to its previous un-innovative state.

Even when we try to change a system at the principle level, in the majority of cases we end up with a mutated system that is worse than it was before. We can observe a very simple example of a ‘principle-level’ change in a starling murmuration if a hungry falcon happens to turn up. Add a falcon to the murmuration and the ‘fly as close to your neighbour as possible’ heuristic very quickly gets replaced by another rule: ‘get away from the falcon’. With the falcon in the system, the beautiful display of mass aerobatics quickly turns into a chaotic mess of starlings flying into one another.

A big part of the SI research is trying to get to an understanding of how complex systems – like ‘society’ – work at this ‘principle’ level. TrenDNA, for example, has DNA in its title because our aim was to uncover the underlying principles that influence and drive why systems emerge in the way they do. The Strauss and Howe ‘generation cycle’ theory is built on the underpinning ‘DNA’ idea that society emerges from the way in which parents raise their children, and that, for each generation, the choices a parent makes are in turn influenced by how they were raised by their own parents. There is no absolute rule that says society has to go through a crisis period every four generations, merely that, so long as this parental influence ‘DNA’ remains present in the way it is, it’s very likely we’ll keep seeing the same four-generation archetype picture emerging and re-emerging.

Trying to innovate in this kind of complex environment, we propose, fundamentally means altering the system at this principle level, and doing it in such a way that, unlike what happens when we add a falcon to a murmuration, we find a change or combination of changes that causes the system to emerge in a manner that is fundamentally better. There’s no rule that says society ‘has to’ enter a crisis period every four generations, but we’ll only ever successfully avoid those periodic crises if we manage to somehow alter the parent-child influence model, and to do it in such a manner that we avoid the human equivalent of falconry.

We might go so far as to say that innovation only really has the chance to happen when we are able to make changes at the core principle level. Put another way, if we don’t understand the principles from which our current system has emerged, our chances of innovation success are diminishingly small. Which means, we think, that the key question for any prospective innovation team, when they’re thinking about ‘what don’t we know yet?’, is how well we do or don’t know what the underlying principles – the f(z) = z² + c – of our system are.

Spend a few moments thinking about whether you might know what they are for a system you’re currently responsible for improving, and we suspect you’ll quickly begin to see why such a large proportion of innovation attempts end in failure. It’s not supposed to be a scary thought, but it probably is anyway. Call that an underlying principle.