
If Methods Don’t Work, What Does?


On 03 Feb 2019

Darrell Mann

I’ve used the above slide at a number of conferences now, usually to a reaction of either stunned silence or an eventual half-hearted challenge from an advocate of one of the methods I’ve chosen to include in the picture. It comes from an ongoing piece of research we’ve been conducting with client and other organisations to try and reveal the underpinning DNA of their innovation success stories. Attempting such a feat is difficult at the best of times; doing it in the complex/chaotic environment that invariably accompanies any kind of discontinuous jump is a particularly hazardous job. A big part of the challenge is to somehow parallel-test the negative hypothesis. In this case, that goes something like this:

  1. Identify the tools, methods and processes that you think contributed to the success of an innovation project.
  2. Identify the tools, methods and processes that were utilized during innovation projects that subsequently proved to be unsuccessful.

When it is possible to meaningfully grasp the answers to these two questions, the ‘success’-contributing methods turn out to be no more or less likely to be present in the successful projects than in the failed ones. Thus, since 98% of all innovation attempts fail, those that claim to have made use of Agile methods – to choose one at random – also fail 98% of the time.
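To make the statistical point concrete, here is a minimal sketch – with hypothetical numbers of my own, not data from the research described here – showing that when a method turns up just as often among failed projects as among successful ones, a simple contingency-table test finds no association between using the method and succeeding:

```python
# Hypothetical illustration (not real project data): does using "Method X"
# predict innovation success? Numbers are invented to mirror the ~98%
# failure rate quoted in the article.
from scipy.stats import chi2_contingency

# Rows: used Method X / did not use it; columns: succeeded / failed
observed = [
    [10, 490],   # 500 projects that used Method X: 2% succeeded
    [10, 490],   # 500 projects that did not:       2% succeeded
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}")
# chi-square ~ 0 and p ~ 1.0: no detectable association between using the
# method and succeeding -- the method is equally present in both groups.
```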

The only vaguely meaningful way to test the efficacy of one method over another would be to conduct some kind of parallel set of innovation-project experiments: one using TRIZ, for example, one using Design Thinking (another random choice), and one – the ‘control group’ – using no formal support method at all. Such experiments are both expensive and fraught with the difficulties of trying to compare the incomparable. There are one or two examples, but precious few that would pass any kind of academic scrutiny. That’s one of the big challenges of working in complex environments.

The most usual ‘alternative’ is the Jack Welch Six Sigma ‘myth-builder’ strategy. This basically involves never doing any kind of double-blind experiment, and instead telling people that ‘quantifying how much Six Sigma helped is good for your career’. Then, hey-presto, and surprise-surprise, any piece of moderately successful work ever done thereafter gets a Six Sigma label attached to it, until Jack is able to announce to the world’s media that the savings amount to ‘$9B’. Every cent of which is utterly fictitious.

Then there’s a final twist of the knife pertaining specifically to the tools and methods that might be brought to bear during the ‘fuzzy-front-end’ period of an innovation project. By the time the project gets somewhere near to delivering success, this fuzzy-front-end is a distant memory and, unfortunately for anyone on the method team, participants’ memories tend to be short. After the 99% perspiration, in other words, people tend to have forgotten where the 1% inspiration came from: it was the perspiration that delivered the success, not the weird bit at the beginning.

Taken together, the overall picture can make people in the innovation world depressed if you’re not careful. Actually, the reaction can be quite amusing if you’re that way inclined – which we tend to be on occasion, since it turns out to be a good initial filter for sifting out the pretend-innovators from the potentially real ones.

The real ones, when they’re exposed to these kinds of results, are much more inclined to ask whether there is anything at all that meaningfully distinguishes the 98% of innovation attempts that fail from the 2% that end up being successful.

It turns out there is.

In the world of tangible tools, here are the things that we can safely say have a statistically significant likelihood of contributing to innovation success, and a corresponding absence in failed attempts:

  • The ability to identify and resolve trade-offs & compromises
  • Having a clear meta-level compass heading relating to customer value
  • Systems Thinking
  • Management of the unknowns trumps management of the Gantt Chart
  • Clear understanding of complexity – rapid learning cycles, ‘first principles’, s-curves, patterns – and the need for requisitely fast learning cycles
  • Clear understanding of (von Clausewitz) ‘critical mass at the critical point’
  • Requisite understanding of the customer/consumer say/do gap and how to deal with it

Important as these tangible – i.e. very teachable – elements are, they tend to be dwarfed a little by the intangibles, which look something like this:

  • An excess of influencing skills
  • Strong ability to work together in cross-disciplinary teams
  • Persistence/bloody-mindedness/willingness to stick-with-difficult-stuff
  • Strong ability to live with continual ‘failure’
  • Acknowledgement that ‘ideas’ have zero value
  • Ability to design and manage a clear sense of progress across the team

This stuff is much more difficult to teach. Not impossible, but it does require a heap more time than most organisations are prepared to devote to the task. Interventions, these days, are helped by the fact that we’re able to measure a lot of these intangible success-driving elements, including the ones that spill over into the tangible arena.

Give us a company’s electronic narrative inputs – online presence URLs, LinkedIn/social-media links to key individuals, annual reports, patents, strategic plans, press releases and, in some cases, responses to specific narrative-inducing questionnaires – and here’s the first prototype ‘InnovationDNA’ intangibles psychometric tool output we’re able to generate automatically. Well, ‘almost’ completely automatically…

Give us a shout if you’d like to know more.