Managing The (Autonomous Vehicle) Unknowns
Editor | 7 July 2019
Darrell Mann
Two birds with one stone. Someone was quizzing me about my (no doubt glib) comment that innovation projects were nothing to do with Gantt charts and all about managing the unknowns. ‘Is it really possible to manage the unknowns?’ was the question. That’s one bird. The other is about another (no doubt glib) comment I’ve made several times in conferences and presentations that the whole of the current wave of interest in autonomous vehicles is a big charade. A big expensive charade at that.
The ‘stone’, then, is how might we ‘manage the unknowns’ in an autonomous vehicle context? Or, if I were going to join the army of gullible lemmings and develop my own autonomous vehicle, how would I set about doing it?
Let me start by taking a lead from what has lately become known as the Rumsfeld Matrix. I’m not sure Donald actually invented it, but his ‘unknown unknowns’ political tongue-twister has entered folklore sufficiently comprehensively that most people now think he did. Anyway, even though Rumsfeld only talked about three of the quadrants in the Matrix, here’s our version of what the full 2×2 looks like:
Figure 1: The Rumsfeld Matrix (Redux)
From an innovation perspective the four quadrants of the Matrix give our fledgling autonomous vehicle project four different kinds of unknown to manage.
The most obvious unknown category is the stuff we know we don’t know. From a TRIZ perspective, this should actually be fairly easy to map. It’s the ‘yes, but’ quadrant – we know where we’re trying to get to, and so we can make a list of all the things we believe will prevent us from achieving our goal. Because we know TRIZ, we know that this list of ‘yes, buts’ is mapping all of the contradictions that stand between where we are and where we’re trying to be. We also know that ‘where we’re trying to be’ is best mapped using the Ideal Final Result part of TRIZ, and if we’re looking to really identify ‘all’ the unsolved contradictions, we ought to use the Ideal Final Attribute template as a means of mapping all of the attributes that are going to be relevant and defining what ‘ideal’ looks like for each of these attributes for each of the different stakeholders.
If we do that job properly, it’s quite a big job. We’ll no doubt have over a hundred different attributes to worry about and probably a couple of dozen different stakeholders to think about. A couple of dozen because, if we’re talking about fully autonomous vehicles, that has all sorts of implications for every aspect of society, not just the people who want an autonomous vehicle to get them from A to B. We’d need to think about what is ideal for the politicians, for example, the city planners, the organisations that are going to have to ensure we have 100% GPS signal, the energy providers, the insurers, the pedestrians, the police and emergency services. Everyone:
Figure 2: Hypothetical Ideal Final Attribute Analysis Template For Autonomous Vehicles
Now, I could map all those things, but because I know it will be hard work and that I am incredibly lazy, I know, too, that I’m only going to do the work if it’s a good use of my time. And in this case, it turns out I can do a much simpler version of the analysis and quickly realise the futility of not just the full exercise, but also the entire autonomous vehicle story. I’m going to leave the completed template to your imagination. It would be the best possible way to identify the known unknowns, if we were building the story for a proper project.
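If we were doing that proper project, though, the bookkeeping side of the exercise is simple enough to organise. Here is a minimal sketch of one way to hold the stakeholder/attribute/‘yes, but’ mapping so that the list of known unknowns falls out at the end; the stakeholder, attribute and ‘yes, but’ entries below are purely illustrative assumptions, not the real Figure 2 template:

```python
# Hypothetical sketch only: a minimal way to organise an Ideal Final Attribute
# style mapping so the 'known unknowns' can be listed systematically.
# Stakeholder and attribute names are illustrative, not the real template.
from dataclasses import dataclass, field

@dataclass
class AttributeEntry:
    attribute: str            # e.g. 'journey time', 'GPS availability'
    ideal: str                # what 'ideal' looks like for this stakeholder
    yes_buts: list[str] = field(default_factory=list)  # unsolved contradictions

@dataclass
class Stakeholder:
    name: str
    entries: list[AttributeEntry] = field(default_factory=list)

# Illustrative content only
insurer = Stakeholder("insurer", [
    AttributeEntry(
        attribute="liability clarity",
        ideal="unambiguous allocation of fault in every incident",
        yes_buts=["yes, but who is liable when the software is driving?"],
    ),
])

# Every 'yes, but' across every stakeholder is a known unknown to be managed
known_unknowns = [
    (s.name, e.attribute, yb)
    for s in [insurer]
    for e in s.entries
    for yb in e.yes_buts
]
print(known_unknowns)
```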
My much simpler version says, let’s think about the different stakeholders and their ‘yes, buts’ – i.e. what will they say to us if we ask them why they don’t like or want autonomous vehicles – and let’s do that in the context of the five different Levels of vehicle autonomy the industry has defined. Here’s what that table looks like:
Table 1: Contradictions To Be Solved In Order To Achieve Different Levels Of Autonomy
For anyone thinking that maybe the big Ideal Final Attribute analysis doesn’t seem so bad after all now I’ve introduced the five different Levels of autonomy, it’s worth noting that, if I really am going to be thorough, I should complete the Figure 2 template separately for each of the five Levels. Anyway, we’ll come back to this table in a little while. For the moment, all we need to register from our ‘managing the unknowns’ process requirement is that, in addition to this Table 1 list, we also have a series of meta-unknowns: we don’t know the relative importance of each of them. As a project manager, I know I ultimately need to solve – or at least ‘manage’ – all of them, but it would be nice to know which ones are more important than others to the various stakeholders right now. To answer that question, I’m probably going to use something like PanSensic to help track what the various stakeholders are saying.
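To be clear about what that kind of tracking is doing, here is a deliberately over-simplified stand-in (not PanSensic itself, just an illustration in which the stakeholders, contradictions and counts are all invented) of the ‘which contradiction is being voiced most, and by whom’ weighting the meta-unknowns question is asking for:

```python
# Hypothetical stand-in for the 'meta-unknowns' weighting question: which
# contradictions are being voiced most often right now. Not PanSensic;
# the stakeholder/contradiction pairs below are invented for illustration.
from collections import Counter

# (stakeholder, contradiction) pairs harvested from whatever listening
# mechanism the project actually uses
voiced = [
    ("pedestrian", "safety vs traffic flow"),
    ("insurer", "liability vs autonomy level"),
    ("driver", "attention vs automation"),
    ("driver", "attention vs automation"),
    ("regulator", "liability vs autonomy level"),
]

priority = Counter(contradiction for _, contradiction in voiced)
for contradiction, weight in priority.most_common():
    print(f"{contradiction}: voiced {weight} time(s)")
```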
Let’s for the moment assume that once I’ve done this, I’ve mapped my ‘known unknowns’. Now I can move on to the ‘difficult’ unknown unknowns. In Donald Rumsfeld terms, and I imagine in the minds of many non-TRIZ people, this is the trickiest domain. We don’t know what we don’t know. Moreover, in the traditional mindset, this typically gets extended to mean ‘we can’t know what we don’t know’. Put a TRIZ hat on, though, and the ‘unknown unknowns’ story becomes rather less of a concern. This is because we know that everything evolves in the direction of the Ideal Final Result, the mythical future point at which all of the contradictions have been solved. The point here is the word ‘point’. The fact that the IFR – mythical or otherwise – is a point means that the evolution process is convergent. Which in turn means that as we get closer and closer to the ideal, the number of possible solutions becomes smaller and smaller. Which also means there’s less for us to have to worry about from the ‘unknown unknowns’ perspective the further into the future we decide to look.
Well, almost. There are two issues that we have to consider when we’re in the unknown-unknowns quadrant of the Rumsfeld Matrix:
- The ‘IFR Cones’ are based on the evolution of solutions that deliver a function. The self-driving car makes for a good example of an Ideal Final Result car. But, of course, ultimately, the customer doesn’t necessarily want a car at all. Rather, they want the higher-level functions of the car. They want (ideal) ‘mobility’ and all of the (ideal) emotional functions associated with it, like status, autonomy, belonging, competence and meaning. What this means is that, if we’re smart innovators, we need to keep our eye out for higher-level functions that might come along and displace the function we’re currently working on. TRIZ will also likely tell us what these higher-level functions are (i.e. we could construct a universal hierarchy of function cones if we had the time and energy), so strictly speaking they’re not ‘unknown unknowns’ at all. But that’s probably a tad too abstract for most, so let’s just leave it at the idea of a need to be looking upward to the higher-level functions that might come along and displace us.
- One of the ironies of the TRIZ findings is that the long-term future is rather more predictable than the short term. In the long term, everything becomes more ideal. In the short term, the world is crammed full of difficult-to-predict human foibles. Political foibles, social foibles, people foibles. People (and governments) sometimes, as in the case of things like Brexit, ‘cut off their noses to spite their faces’. Any and all of these kinds of ‘random’ u-turns, deviations and ego-driven cul-de-sacs can very easily appear and cause problems for our beautiful project. The Middle East, for example, could decide to switch off the flow of oil to the rest of the world and trigger a massive restriction on fuel that causes people to (temporarily) rethink their travel needs and modes. (Temporarily because what will also be triggered is a massive swathe of panic research on alternative fuels.) A massive solar flare might knock out all of the world’s electrical systems. Some freedom-denying government may decide to turn the internet off. There are a million and one scenarios we could decide to take into consideration in our innovation project manager’s ‘unknown-unknowns’ search space. A million of those scenarios most likely won’t happen. The trick is knowing which is the ‘one’. And the way to home in on this one is to apply the lens of time: ‘how likely is this to happen over the course of the duration of my project?’ Draw a likelihood-consequence Risk Map and do a big FMEA analysis if you’re so inclined (a minimal sketch follows this list). Me, I’m too lazy. Especially if I already know that my autonomous vehicle project is a non-starter.
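For anyone less lazy, the screening arithmetic behind that Risk Map/FMEA pass is trivial. Here is a minimal sketch of the likelihood-times-consequence ranking that helps home in on the ‘one’; the scenarios and the 1-to-5 scores are invented for illustration, and a real exercise would score them with the project team against the project’s actual time horizon:

```python
# A minimal likelihood-consequence sketch of the kind of Risk Map / FMEA
# screening mentioned above. Scenarios and scores are invented for
# illustration; a real exercise would score them with the project team.
scenarios = {
    # name: (likelihood 1-5 within the project horizon, consequence 1-5)
    "regional oil supply shock": (2, 4),
    "major solar flare knocks out electrical systems": (1, 5),
    "government switches off the internet": (1, 4),
}

# Simple risk priority = likelihood x consequence; rank highest first
ranked = sorted(
    ((name, l * c) for name, (l, c) in scenarios.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:>2}  {name}")
```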
I’ll let you worry about how big either of these unknown-unknown search spaces is. For me, the IFR Cone tells me 98% of what I need to know for most projects, but prudence tells me not to ignore the other 2%. As usual with these matters, it’s all about context. And particularly time-based context: what’s the time horizon of the project, and what’s the pulse rate of the industry my project is in?
Okay, so back to the Rumsfeld Matrix. In theory, ‘managing the unknowns’ is all about the two quadrants on the right-hand side. But this is the innovation world we’re dealing with here, and because innovation largely starts from the challenge of rules and assumptions, there’s a fair likelihood that one or two of the things ‘I know I know’ are wrong, and it’s even more likely that some of my instinct-based, tacit ‘unknown knowns’ knowledge is wrong too. So, at the very least, I ought to manage the potential errors in my ‘knowns’.
Again, if this sounds somewhat abstract, TRIZ comes to the rescue again. Two ways. One, the directions and trajectories of successful evolution are very clear, so anything I think I know that contradicts one or more of these directions ought to sound a mental alarm bell. If something is going in the ‘wrong’ direction, does that mean that it’s wrong, or that TRIZ is wrong? For most people, far and away the most likely – and certainly the most prudent – answer is that they are wrong and TRIZ isn’t. From our TRIZ-research perspective, it is pretty much part of our DNA now to actively look for the exceptions so that we get to test and build a more resilient model of the TRIZ method.
Two, one of the more recent TRIZ ‘blinding flashes of the obvious’ is that it is all about distilling the world down to a core set of ‘first principles’. One of the lovely things about first principles is that they’re pretty stable. Again, from a research perspective, we know it’s always necessary to challenge the validity of those first principles (if Einstein hadn’t challenged the ‘first principle’ Newtonian ‘Laws’, we wouldn’t be able to do 90% of the things we have come to take for granted in our 21st Century world). From the pragmatic innovation project manager’s perspective, however, it is highly unlikely that their job and the aim of their project is to challenge matters at the first-principle level. Ultimately – again – project context should dictate how much or how little time the team devotes to managing the unknowns associated with the things we’ve put into the ‘known’ category.
Okay, so there’s the ‘managing the unknowns’ story: use the Rumsfeld Matrix to provide the structure, use TRIZ to tell you what’s relevant and what isn’t, and use the context of the project to tell you how much time to devote to the other three quadrants. From my personal experience on actual innovation projects, ‘managing the unknowns’ means distributing my time like this:
Figure 3: The Pragmatic Project Manager’s ‘Managing The Unknowns’ Time Split
(assuming a working knowledge of TRIZ/SI)
So, back to the autonomous vehicle part of the article, and the contradiction horror story that is Table 1. And here’s the heart of why it’s a horror story. Figure 4 shows our ‘Eisenhower’s (Innovation) Box’ story from the Seven Habits of Highly Effective Innovation Project Managers article:
Figure 4: Eisenhower’s (Innovation) Box
The big idea behind this 2×2 Matrix is to highlight one of the main problems of our subject-matter-expert (SME) dominated world. The world needs these SMEs. They are a necessary but not sufficient part of any innovation story. The insufficient part is the innate human desire to stay within our comfort zones. If I’ve spent twenty years being educated and working as a chemist, and you give me a problem, I guarantee the answer I’m going to bring back to you will involve chemistry. And if we look at what’s happening in the autonomous vehicles world, all the technical people are either working on the ‘happy’ problems or, if they’re an ambitious, naïve upstart (like Google), the ‘Heroism’ problems. The ‘business’ people, meanwhile, are collectively rubbish at innovation and are looking to the technical people to make things happen.
Taken together, technical people staying in their comfort zone plus business people hiding their heads in the sand means that all the important, high-impact problems that need to be solved if autonomous vehicles are going to become an actual reality – the problems that sit in the ‘Hide’ quadrant – are all being treated in exactly that way. The industry is trying to sweep them under the rug and hope no-one will notice.
Let’s just take one or two of these problems to hopefully bring the overall futility point home. One of the big contradictions that needs to be solved in order to make the transition from Level 2 to Level 3 is the driver inattention-attention problem. This is the scenario where the vehicle is driving itself, with the driver expected to keep an eye on things so they can step in and take over when something untoward starts to go wrong. As the Tesla driver whose car autonomously drove under an eighteen-wheeler because the vehicle’s sensor system didn’t pick up the white truck found out at the cost of his life, a vehicle that is 99% autonomous lulls you into a false sense of security. Over time, the ‘driver’ devolves to become basically an idiot. This is a really tough problem to solve. Notably, Ford has said that the transition from Level 2 to Level 3 is ‘too difficult’ and that they are, as a company, jumping straight to Level 4. Which in effect means that the vehicle is ‘100%’ autonomous (okay, I know there’s no such thing as 100%, so let’s call it a reliability of somewhere around six nines. Maybe five if we’re being generous).
So now let’s look at one of the contradictions that needs to be solved to attain Level 4 autonomy. There’s probably a handful of ‘Hide’-type inter-disciplinary contradictions here, any one of which kills the whole premise of autonomous vehicles, but for the sake of argument let’s just go with the malevolent-pedestrian problem. This is the problem whereby someone with criminal intent realizes that if they step out into the path of an oncoming autonomous vehicle, it will have to either stop or injure them. One possibility, therefore, is that the vehicle is smart enough to recognize the criminal intent and ‘decide’ that injuring the criminal is the right course of action. Good luck designing and qualifying that algorithm. Basically, the car is going to have to stop. So now we need a vehicle that will lock all the doors and/or de-activate itself so that the criminal either can’t get into the car or, if they do, the car will no longer function. Better yet, the car now informs the emergency services that a crime is underway. From a technical perspective, all of these things are no doubt ‘do-able’. But none of them handles the situation in which the criminal gets creative and decides to use a tyre-iron to smash a window and knock the driver unconscious, but keeps them in the vehicle so the car still thinks the designated owner is okay. So now we need an autonomous vehicle with bulletproof glass, or a means of detecting that the owner has been knocked unconscious. Again, ultimately solvable from a technical perspective. But again largely irrelevant, because the criminal then evolves to the next level of sophistication. And so on, until we all come to our senses and realise that if we can solve all these problems, we’ve probably re-invented transport in the process, so either none of us owns a vehicle any more or we’re all living in our augmented-reality world, staying home on the sofa. Not to mention any of the legal liability ramifications of any of this stuff.
Basically, all of the ‘Hide’ problems demand a universal, cross-functional world to fix. Ford alone can’t fix them. Google alone can’t fix them. Government alone can’t fix them. The lawyers definitely can’t fix them. They all have to work together, and none of them has even the first clue how to do it. How to divide up the risks. How to divide up the hypothetical rewards so that everyone perceives their own win. And by the time they do, again, autonomous vehicles will be utterly irrelevant to the world.
By all accounts the autonomous vehicle ‘industry’ is currently spending $50B a year to move the technology forward. When I think about what that money could do that might actually turn out to be useful to mankind, it’s enough to make a grown man cry. Probably best not to think about it. Which, it seems, is precisely what the industry is doing.