Make Better Design Decisions With VOC
By Anthony Curtis and Kimberly Watson-Hemphill
Many companies launch product or service design efforts based on whatever knowledge they already have about customer needs from questionnaires, focus groups, the opinions of marketing staff and senior engineers … and sometimes the CEO. Often this information is more opinion than data. Teams read through this existing customer information, whether or not it is relevant to the current project, then dive directly into design work. After that, the company has little or no further contact with customers until the product or service is released into the marketplace (Figure 1).
In this model, customers are not engaged in the initial development of the ideas or in prototyping efforts. The risks of this non-data-driven approach are evident. The pattern provides just one feedback cycle from the market – and it comes after all development costs have been incurred and change is extremely expensive. At this juncture, company officials say things like: “The customers do not understand all our features.” “They treat us like a commodity.” “They do not recognize the value of our differentiation.” Yet the fault lies with the company, not with its customers.
Companies that have advanced their voice of the customer (VOC) methods to the next level have dozens, if not hundreds, of VOC cycles built into their development processes. They run quick back-and-forth cycles with customers throughout the design phases, incorporating detailed customer preference information into the analysis of trade-off decisions (Figure 2).
To get a deeper understanding of customer needs, innovative companies have explored two ideas – rapid prototyping and tools for making design trade-off decisions.
Rapid Prototyping
The principle behind rapid prototyping is to prototype a few features, get customer reactions, then return to brainstorming and come back with a few more single-feature (or single-function) prototypes. In this way, there are many feedback cycles during the development process, before much money is spent. This contrasts, of course, with developing a prototype of a product or service containing all major features and getting one – possibly fatal – cycle of feedback from the market.
Establishing small, quick “concept-prototype-concept” cycles has an important advantage: it lets the company remain flexible as it develops new offerings. This flexibility to adapt is paramount; a perfectly flexible design process can absorb a change with no impact on the overall lead time. The key themes are:
- Do many little tests rather than a few big tests.
- Use quick cycles to test ideas (not full solutions) with customers.
- Check ideas while they are still raw; do not wait until everything is set in stone.
- Let what is observed from customers spark new ideas.
This approach can be, and is, used in any sector. For example, engineers designing the software needed to fly the Space Shuttle came up with 19 different screens they felt astronauts would need. In a break from tradition, management had the engineers build mockups of these screens before writing the hundreds of thousands of lines of source code needed to generate the real thing. The astronauts’ reaction to the mockups was intense: “Whoever designed these screens had obviously never flown!” The engineers’ reactions were normal – denial, anger, etc. But when they conferred with the astronauts, the result was win-win: an enormous time savings for the engineering company and much more intuitive, easy-to-use screens for the astronauts.
Making the Trade-Off Decisions
Because everyone, from product designers to service managers, is making decisions about how to “give customers what they want,” there has been an uncontrolled proliferation of products and services, options, features and so on. When business leaders have little idea what customers would be excited to buy, they allow, if not openly encourage, this proliferation. The result is an explosion in development costs and in the number of products or services that are not economically profitable.
The root of the problem is a failure to understand specifically what customers value about a company’s products or services, how much they value it and what a company can afford to deliver. Unfortunately, at a typical VOC seminar or in most literature on VOC techniques, almost all of the attention is on what customers value, with very little attention to the “how much” or “affordability” questions.
It is one thing to come up with design options based on customer needs. It is another thing entirely to know that if you offer Feature X, you can ignore Feature Y, take the cheaper of two options for Feature Z, and still raise the price by 20 percent. The latter level of specificity is essential if a company wants to maximize the return on its design dollars. Fortunately, there are a number of easy-to-use statistical tools that can help provide this essential information. Here’s a quick look at three tools that can be of particular use and bear further investigation.
Key Buying Factor Analysis
A company asked an important customer to identify what it thought was most important in the company’s offering, then to rate the company against several competitors. The bars on the resulting chart (Figure 3) indicate the customer’s importance ratings; each line tracks how well the company or one of its competitors performed on those attributes. The most important takeaway from the chart is that the company did poorly on the customer’s top seven attributes and did well only on the remaining eight. The company (and its competitors) had mistakenly focused on attributes that were lower in importance to the customer. The results showed up in the offering’s poor financial performance.
This type of chart helps pinpoint what customers consider value-added. If the company that did this analysis can improve performance on its customers’ top priorities, it could gain an advantage. Two additional things are worth noting on the chart: First, the customer was not looking for more options (which translate into increased complexity of the offering). Second, price is nowhere near the top of the list, which means that if the company can deliver on the important service support functions rated highly by the customer (on-time delivery, correct invoices, etc.), it can charge enough to make the offering profitable.
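To make the comparison concrete, here is a minimal sketch in Python of how such an importance-versus-performance analysis might be tabulated. The attribute names, importance ratings and performance scores below are illustrative placeholders, not the actual Figure 3 data.

```python
# Minimal sketch of a key buying factor comparison.
# Attribute names, importance ratings (1-10) and performance scores
# (1-10) are illustrative placeholders, not the actual Figure 3 data.

attributes = {
    # attribute: (importance, our_score, best_competitor_score)
    "On-time delivery":  (9, 5, 7),
    "Correct invoices":  (8, 4, 6),
    "Technical support": (7, 6, 5),
    "Price":             (4, 8, 7),
    "Product options":   (2, 9, 8),
}

# Importance-weighted gap: a negative value on a high-importance
# attribute flags exactly the failure mode described above.
for name, (imp, us, them) in sorted(
        attributes.items(), key=lambda kv: -kv[1][0]):
    gap = us - them
    flag = "  <-- fix first" if gap < 0 and imp >= 7 else ""
    print(f"{name:18s} importance={imp} gap={gap:+d} "
          f"weighted={imp * gap:+d}{flag}")
```

Sorting by importance and flagging negative gaps on high-importance attributes reproduces the chart’s main message: fix what customers care about most before polishing the rest.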
Design of Experiments
Design of experiments (DOE) is a set of experimental protocols that allows multiple factors to be tested at one time, rather than with the traditional change-one-thing-at-a-time approach. The output is a response surface chart that shows which set of input conditions leads to the maximum output value. As described in Stefan Thomke’s Harvard Business Review article (“R&D Comes to Services: Bank of America’s Pathbreaking Experiments,” April 2003), Bank of America has set up a prototype banking center that regularly runs designed experiments. The result is “surges of fresh thinking, improved customer satisfaction … new customers, and deep[er] understanding of service development. The payoff? A crucial edge over less adventurous competitors.” The key to success is the ability to test many different things at once – which not only saves time but also reveals how different factors interact.
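As a rough illustration of the mechanics (not Bank of America’s actual protocol), the sketch below analyzes a hypothetical two-level, three-factor service experiment. The factor names and response values are invented; with a full-factorial design, main effects and interactions fall out of simple column averages.

```python
# Minimal sketch of a two-level full-factorial experiment analysis.
# Factor names and response values are invented for illustration;
# a real study would plug in measured outcomes for each run.
from itertools import product

factors = ["greeter_present", "queue_display", "extended_hours"]

# All 2^3 factor combinations, coded -1 (off) / +1 (on).
design = list(product([-1, 1], repeat=len(factors)))

# Hypothetical customer-satisfaction score for each run, in design order.
response = [62, 70, 65, 74, 64, 71, 69, 83]

# Main effect of a factor: mean response at its high level minus
# mean response at its low level.
for j, name in enumerate(factors):
    hi = [y for run, y in zip(design, response) if run[j] == 1]
    lo = [y for run, y in zip(design, response) if run[j] == -1]
    print(f"{name:16s} main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")

# Two-factor interaction: the effect of the product of two columns.
# This is the information a one-factor-at-a-time approach never sees.
for a in range(len(factors)):
    for b in range(a + 1, len(factors)):
        col = [run[a] * run[b] for run in design]
        hi = [y for c, y in zip(col, response) if c == 1]
        lo = [y for c, y in zip(col, response) if c == -1]
        print(f"{factors[a]} x {factors[b]}: "
              f"interaction = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")
```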
Conjoint Analysis
As any product or service designer knows, customers rarely base purchasing decisions on just one factor. Almost always they are looking for a specific combination of features, options, price and so on. The tool that helps find the most economically profitable combinations is called conjoint analysis, a scientific approach used to simultaneously analyze multiple factors related to a product, service or process.
The purpose of conjoint analysis is to understand how purchasing decisions change based on different “packages” of product/service attributes. Consider an example: A major hotel chain was redesigning its frequent guest program. It wanted to understand the value of hotel points and frequent flyer miles, and it was also considering offering free snacks or a free in-room movie. The chain wanted to know which of these services customers would prefer and whether customers would pay more for the amenities. For the conjoint analysis, the team identified two levels for each of five factors:
- Room rate: $80 versus $100
- Hotel points: 0 points versus 500 points
- Frequent flyer miles: 0 miles versus 500 miles
- Free in-room movie: No versus yes
- Free snacks in room: No versus yes
The results of the analysis are shown in Table 1.
Based on this data and further analysis, the team reached both obvious and not-so-obvious conclusions:
- Not surprisingly, price was most important.
- Customers also were interested in hotel points and free snacks, but frequent flyer miles and free in-room movies were not factors in their decisions.
- Customers preferred the lower room rate of $80 combined with both of the perks they cared about.
- An option of paying an $85 rate and earning hotel points was preferred to the lower price with no perks.
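For readers who want to see the mechanics, below is a minimal Python sketch of a two-level conjoint analysis using the five hotel factors above. Because Table 1’s data is not reproduced here, the preference ratings are simulated from an assumed utility model (price dominant, points and snacks valued, miles and movies not); a real study would use customers’ actual ratings of each profile.

```python
# Minimal sketch of a two-level conjoint analysis for the hotel example.
# Ratings are simulated from an assumed utility model, since Table 1's
# actual data is not reproduced here.
from itertools import product

factors = ["room_rate_$100", "hotel_points", "flyer_miles",
           "free_movie", "free_snacks"]

# Full-factorial design: every combination of the five two-level
# factors, coded -1 (low level) / +1 (high level).
design = list(product([-1, 1], repeat=len(factors)))

def simulated_rating(profile):
    # Assumed "true" utilities, echoing the conclusions above:
    # price dominates, points and snacks matter, miles and movies do not.
    weights = [-3.0, 1.5, 0.1, 0.1, 1.0]
    return 5 + sum(w * x for w, x in zip(weights, profile))

ratings = [simulated_rating(p) for p in design]

# With an orthogonal design, the part-worth utility of each factor is
# half its main effect (mean rating at +1 minus mean rating at -1).
for j, name in enumerate(factors):
    hi = [r for p, r in zip(design, ratings) if p[j] == 1]
    lo = [r for p, r in zip(design, ratings) if p[j] == -1]
    part_worth = (sum(hi) / len(hi) - sum(lo) / len(lo)) / 2
    print(f"{name:15s} part-worth = {part_worth:+.2f}")
```

Summing the part-worths for competing packages is what supports trade-off conclusions like the $85-rate-with-points preference above.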
About the Authors:
Anthony E. Curtis is a Master Black Belt at George Group specializing in applying Lean Six Sigma within service companies. With more than eight years’ experience in retail operations, he has managed projects in store operations, customer service, distribution, finance and marketing. Mr. Curtis has applied the DMAIC methodology across both traditional operational models and atypical non-operational models. Contact Anthony Curtis at tcurtis (at) georgegroup.com or visit http://www.georgegroup.com/.
Kimberly Watson-Hemphill is a Master Black Belt with George Group and lead author of its Design for Lean Six Sigma curriculum. She has trained and coached hundreds of Black Belts and Master Black Belts throughout North America and Europe. She has a background in all areas of Lean Six Sigma, new product development and project management, and has worked with Fortune 500 companies in both service and manufacturing industries. Contact Kimberly Watson-Hemphill at kwatson (at) georgegroup.com or visit http://www.georgegroup.com.