Lean Sigma: A Practitioner's Guide

14. Customer Surveys

Overview

Although Customer Interviews are the primary means to collect VOC information during a Lean Sigma project, Customer Surveys can be extremely useful to capture additional data from a larger number of individuals outside the group of Customers interviewed. Even though Customer Surveys tend to yield less return in terms of information content, they allow collection of data from a variety of Customer channels, and the data comes straight from the source: the actual Customer! Surveys also tend to be less time consuming than interviews and can reach more Customers.

The key to surveying is in the preparation of the document itself and the management of the distribution and collection.

Logistics

The creation of a Customer Survey requires extensive Team input and should not be attempted by the Belt in isolation. However, after the surveys start to be returned, the Belt can work individually to collate and process the data and then feed the results back to the Team.

Creation of a robust Survey can take as long as eight hours if done from scratch. In the Lean Sigma roadmap, it is always done alongside Customer Interviewing, and the work done to create the Interview Guide reduces the time to create a Survey to perhaps four hours at most. If this is the Belt's first survey, it is advisable to enlist help from someone with experience in creating them.

Roadmap

The roadmap to create a Customer Survey is as follows:

Step 1.

Define the Survey objective. The objective is key to defining the scope, defining the population, keeping the survey focused, and ensuring the survey meets the needs of the project. The objective can be derived from two questions:

  • What is the problem to be solved?

  • What new information is required to solve it?

Survey objectives often take one of the following forms:

  • Describe or understand a population's characteristics

  • Explain the relationship between two characteristics

  • Identify the most important variables for predicting some other variable

  • Compare variables and levels to find differences that aren't due to chance

For example, consider the problem description, "Based on mistakes in receiving transactions, there might be a need for some training for personnel on the docks. It is not obvious who needs it most, what they need, or what alternatives make sense." Some prioritized survey objectives might be:

  1. Describe experience level and prior training of anyone who does at least two transactions a week.

  2. Determine the most common needs for training.

  3. Find out if people who fit (1) are satisfied with their current skill and knowledge levels.

  4. Identify whether people who fit (1) are willing to participate in off-hours training.

  5. Determine if increased training increases confidence and satisfaction with the job.

The survey objective can lead to collecting both qualitative and quantitative data, depending on the problem at hand.

Step 2.

Design the Survey. There is no surefire approach to this, but some simple rules apply:

  • Keep the audience and time in mind.

  • Use simple, common language and take out acronyms, jargon, and abbreviations. Can a co-worker fill it out without asking for instructions?

  • Vague questions tend to get vague answers, so be concrete and specific.

  • Ask one simple question at a time.

  • Watch out for "hot buttons" or prejudicial words that might provoke an unwanted response.

  • Use one of the standard scales (described later in this section).

  • Don't try to put too much on the page; keep a simple, clean layout.

A survey typically comprises the following (a minimal data-structure sketch follows this list):
  • Demographics, for example:

    • How many years with the company

    • Division

    • Job function

  • Close-ended questions, for example:

    • Yes/No (circle one)

    • Anchored scales (described later in this section)

    • Simple answers (circle one)

  • Open-ended questions such as

    • What did you like most about the course?

    • What is your top priority for performance improvement?
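
To make the structure above concrete, here is a minimal Python sketch, under the assumption that survey responses will later be collated programmatically, of how the three question types might be represented. All class and field names and the example questions are illustrative only, not taken from this text.

    from dataclasses import dataclass, field

    @dataclass
    class ClosedQuestion:
        text: str
        scale: dict    # anchored scale, e.g. {1: "Poor", ..., 5: "Excellent"}

    @dataclass
    class OpenQuestion:
        text: str

    @dataclass
    class Survey:
        demographics: list = field(default_factory=list)  # e.g. years with company
        closed: list = field(default_factory=list)        # anchored-scale questions
        open: list = field(default_factory=list)          # free-text questions

    # A hypothetical course-evaluation survey assembled from the pieces above.
    course_survey = Survey(
        demographics=["Years with the company", "Division", "Job function"],
        closed=[ClosedQuestion(
            "Rate the overall quality of the course.",
            {1: "Poor", 2: "Fair", 3: "OK", 4: "Good", 5: "Excellent"},
        )],
        open=[OpenQuestion("What did you like most about the course?")],
    )

Keeping the question list in one structure like this makes it straightforward to render the form consistently and to match returned answers back to the question that produced them.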

Surveys invariably involve some kind of subjective assessment, which can be fraught with danger if not handled correctly. The simplest approach is to use what is known as an anchored scale, which gives better consistency of data over time and across different people. To create an anchored scale:
  • Specify the measure you are using.

  • Assign points to the scale. This is usually an odd-numbered scale; an even number of points removes the neutral midpoint and forces a choice.

  • Anchor the high and low points with some simple language. If needed, fill in an intermediate point with simple language too.

For most uses, the data from an anchored scale can be treated as continuous (interval) data.

Some examples of anchored scales include

  • 1 = Poor; 2 = Fair; 3 = OK; 4 = Good; 5 = Excellent

  • 1 = Strongly Disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Strongly Agree

  • 1 = Low; 2; 3; 4; 5 = High

The language at one end has to match the language at the other end and the scale has to be a continuum. Intervals between scale points have to be equal so that the distance between 1 and 2 is the same as the distance between 2 and 3, and so on.

Some inappropriate scales include

  • 1 = Disagree; 5 = Strongly Approve

  • Would you say your current age is 1) too young, 2) about right, 3) not old enough, or 4) too old?
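
Because anchored-scale points are equally spaced, responses can be summarized with ordinary interval-data statistics, as noted above. The following Python sketch uses made-up responses to a single 1-5 Strongly Disagree/Strongly Agree question to show the kind of summary that results; the numbers are purely illustrative.

    import statistics

    # Hypothetical responses to one anchored-scale question
    # (1 = Strongly Disagree ... 5 = Strongly Agree).
    responses = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

    # Treating the scale as interval data allows a mean and standard deviation.
    mean_score = statistics.mean(responses)
    std_dev = statistics.stdev(responses)

    # A frequency count by scale point is also useful for reporting.
    counts = {point: responses.count(point) for point in range(1, 6)}

    print(f"Mean = {mean_score:.2f}, StDev = {std_dev:.2f}")
    print("Counts by scale point:", counts)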

Figures 7.14.1a and 7.14.1b show both pages of a survey used by SBTI for course evaluation. The form includes multiple anchored-scale questions about the class and Program, along with some open-ended questions on the second page (Figure 7.14.1b). To preserve anonymity, no demographic information is gathered.

Figure 7.14.1a. SBTI Course Evaluation (©2001), page 1.

Figure 7.14.1b. SBTI Course Evaluation (©2001), page 2.

Step 3.

Determine the data collection method. There are a number of approaches to collect survey data, the most common being

  • One-on-one surveys can be used to tackle complex questions and are beneficial if it is likely that people won't respond by phone or mail. However, they require trained, experienced interviewers (to avoid bias), and people can be reluctant to respond honestly face to face. A one-on-one survey also tends to take as long as an interview, in which case an interview approach might be better.

  • Phone surveys produce the fastest results and have the best quality control because calls can be monitored. However, there is more risk of the interviewer influencing responses than with a mail survey, and respondents sometimes give quick answers due to time pressure. Phone surveys also require trained interviewers and a high callback rate to ensure sample integrity.

  • Mail surveys require the least amount of trained resources and can be much lower cost. They are the best approach to reach a large number of respondents and carry a lower risk of the interviewer influencing responses. Quality control is tough: there might be skipped questions or misunderstandings, and there is a risk of bias from non-response. The response rate is typically low, around 5% to 20% for an external survey and around 30% to 50% for an internal survey.

The approach taken is best determined by the survey objective. To improve the return rate, talk to Customers individually to explain the purpose before giving them the survey; response rates are much higher when the purpose and value are understood.

Step 4.

Create a Sampling Plan. A sample (for more detail see "KPOVs and Data" in this chapter) is a collection of only a portion of the data that is available or could be available from a whole population. From the characteristics of the sample, statistical inferences (predictions, guesses) can be made about the population as a whole. Because the sample is typically much smaller than the whole population, it is a faster, less costly way to gain insight into a process or large population. Surveys are useful to get input from 10 to 1,000 Customers, but if you are looking at a number greater than 100, consider a sampling approach rather than surveying everyone.

The Sampling Plan is affected by two sample properties:

  • Sample size: To determine it, it is necessary to identify the size of the total population in question. For example, does "Customers" mean current, former, hoped-for, or all of these? Sometimes no sampling is required; the population might be small enough to survey every data point, known as a census. The sample size also depends on the data analysis to be performed on the resulting data. Table 7.14.1 shows the minimum sample sizes needed for some common analyses. In general, if the data to be collected is Continuous (see "KPOVs and Data" in this chapter), then 30 data points are usually enough; for Attribute data, 100 data points are required. Note that when analyzing subgroups within a sample, the effective sample size is reduced. (A sample-size sketch follows this list.)

    Table 7.14.1. Sample Sizes Needed for Common Analyses

    Tool or Statistic             Minimum Sample Size
    Average                       5–10
    Standard Deviation            25–30
    Proportion Defective (P)      100 and nP ≥ 5
    Histogram or Pareto           50
    Scatter Diagram               25
    Control Chart                 20

  • Sample Quality: A good sample is a miniature version of the population; it is just like it, only smaller. There are a number of ways to make a mistake, so plan to avoid the following:

    • Coverage error: People in the sample aren't really representative of target Customers.

    • Sampling error: Using a sample always means an estimation. This is a fact of life; however, this error can be quantified and minimized.

    • Measurement error: Errors or noise are introduced by the survey tool, the interviewer, or the respondent.

    • Non-response error: People in the sample who didn't respond differ in an important way from those who did.

    • Selection bias: "But I only wanted to talk to people who looked nice."
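
As noted in the Sample size bullet above, a short sketch can make the arithmetic concrete. The following Python example uses the standard sample-size formula for estimating a proportion (a general statistical formula, not one given in this text), grosses the result up by an assumed 30% internal mail-survey return rate from Step 3, and then draws a simple random sample from a hypothetical Customer list to guard against selection bias. All names and numbers here are illustrative assumptions.

    import math
    import random

    def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
        """Standard sample size for estimating a proportion.

        p:      assumed proportion (0.5 is the most conservative choice)
        margin: desired margin of error (0.05 means +/-5%)
        z:      z-value for the confidence level (1.96 is roughly 95%)
        """
        return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

    needed = sample_size_for_proportion()        # about 385 responses
    response_rate = 0.30                         # assumed internal return rate
    to_send = math.ceil(needed / response_rate)  # surveys to distribute

    # Simple random sample from a hypothetical Customer list.
    customer_ids = [f"CUST-{i:04d}" for i in range(1, 5001)]
    selected = random.sample(customer_ids, k=min(to_send, len(customer_ids)))

    print(f"Need {needed} responses; send {to_send} surveys; "
          f"selected {len(selected)} Customers")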

Step 5.

Conduct or send out the Survey. It usually takes about one week to get the data back. Stress the objective of the study and the importance of the respondents' involvement, and include the name and telephone number of a Team member who can be contacted for assistance.

Step 6.

Create an Analysis Plan, including reporting. Determine how the data will be analyzed, how it will be reported, and to what audience.
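
As one illustration of what an analysis plan might specify, the Python sketch below collates returned anchored-scale answers by a demographic field and reports the mean score and response count per subgroup. The field names, divisions, and scores are entirely hypothetical.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical returned responses: each record carries a demographic field
    # ("division") and the answer to one 1-5 anchored-scale question ("q1").
    returns = [
        {"division": "Receiving", "q1": 4},
        {"division": "Receiving", "q1": 2},
        {"division": "Shipping",  "q1": 5},
        {"division": "Shipping",  "q1": 4},
        {"division": "Shipping",  "q1": 3},
    ]

    # Group scores by division, then report the subgroup count and mean,
    # the kind of summary the reporting step would define up front.
    by_division = defaultdict(list)
    for record in returns:
        by_division[record["division"]].append(record["q1"])

    for division, scores in sorted(by_division.items()):
        print(f"{division}: n={len(scores)}, mean={mean(scores):.2f}")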

Step 7.

Follow up with a postcard, letter, or phone call to thank the respondents for their participation.

Interpreting the Output

Customer Surveys are one of a series of tools to capture the VOC. The results of the surveys, along with the output from Customer Interviews, are affinitized and then translated into a Customer Requirements Tree to identify the Big Ys or KPOVs for the process.
