Six Sigma Fundamentals: A Complete Introduction to the System, Methods, and Tools

The purpose of performing a measurement system analysis (MSA) is to ensure that the information collected is a true representation of what is occurring in the process. Perhaps the most important item to remember is that the total variation in a process is equal to the sum of the process variation and the measurement system variation. Therefore, minimizing measurement variation ensures that the variation reflected in the collected data represents primarily process variation. As a consequence, MSA is performed on a regular basis to ensure that data are valid and reliable; conducting an MSA answers typical questions about the accuracy, precision, and stability of the measurement system.
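The additivity described above applies to variances, not standard deviations. A short Python sketch illustrates the decomposition (the variance values are hypothetical, chosen only for the example):

```python
# Variances add: total variation = process variation + measurement variation.
# All values below are hypothetical, for illustration only.
var_process = 0.0016      # variance contributed by the process itself
var_measurement = 0.0004  # variance contributed by the measurement system
var_total = var_process + var_measurement

# The observed (total) standard deviation overstates the process spread,
# which is why measurement variation must be minimized.
sd_total = var_total ** 0.5
sd_process = var_process ** 0.5
print(sd_total, sd_process)
```

Note that the observed spread (sd_total) is larger than the true process spread (sd_process); the gap is the penalty paid for measurement error.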

Measurement

People measure product characteristics or process parameters so they can assess the performance of the system of interest. The measured values provide feedback on the process, so that people may adjust settings, replace tools, redesign fixtures, or allow the operation to continue on its current course. These measurements are indeed the data that allow people to make the decisions critical to improvement efforts.

As critical as these measurements are, no measurement process is perfect. Sometimes different numbers or readings result when the same part or sample is measured a second time. Different readings may be made by different people, gauges or even by the same person using the same gauge. The difference in successive measurements of the same item is called measurement error. This source of variation must be analyzed, because the validity of the data directly affects the validity of process improvement decisions.

The measurement system is a major component of the process. In fact, studying variation within the measurement system is of paramount importance for two reasons: the measurement system is itself a part of the overall process, and measurement activities form a process of their own, with their own sources of variation.

In addition to being a part of the process or system, measurement activities may also form a process. It is totally inappropriate to view measurement error as merely a function of measurement hardware or instruments. Other components of the measurement process are equally important to measurement error or validity. For example, people contribute to measurement error by having different levels of tactile, auditory or visual perception. These characteristics account for calibration and/or interpretation differences.

Another source of measurement error is a change in method. This kind of error is one of the largest sources of variation in the measurement process, and its significance is compounded when different people or instruments are used to evaluate the same item or process. Obviously, a standard procedure is needed for every measurement activity, and only this procedure should be used by all the people who operate the test equipment. Measurement errors that are sometimes attributed to the different people collecting the data are actually due to differences in methodology; people are usually able to produce similar readings when they use the same methods for operating the measurement equipment. Other sources of measurement error include changes in environment, changes in test equipment, changes in standards, and so on. In dealing with measurement error, one must be familiar with the concepts of true value, accuracy and precision. Each of these is discussed in the following sections.

True value. The true value is a theoretical number that is absolutely the "correct" description of the measured characteristic. This is the value that is estimated each time a measurement is made. The true value is never known in practice because of the measurement scale resolution, or the divisions that are used for the instrument's output. For example, someone may be satisfied with a dimension measured to a tenth of an inch (0.1). However, someone else may define that dimension with a different instrument to the ten-thousandth of an inch (0.1043), which, of course, is closer to the true value. The appropriate level of measurement resolution is determined by the characteristic specifications and economic considerations. A common practice and rule of thumb calls for tester resolution that is equal to or less than one-tenth of the characteristic tolerance (the upper specification limit minus the lower specification limit). The true value is considered as part of tester calibration and discussions of measurement accuracy.
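The 10-to-1 rule of thumb above can be expressed directly. In this sketch the specification limits are hypothetical:

```python
# Rule of thumb: instrument resolution should be <= one-tenth of the tolerance.
usl = 2.55   # hypothetical upper specification limit (inches)
lsl = 2.45   # hypothetical lower specification limit (inches)

tolerance = usl - lsl            # characteristic tolerance: 0.10 in
max_resolution = tolerance / 10  # coarsest acceptable resolution: 0.01 in

# A gauge reading to 0.01 in is the coarsest acceptable choice here;
# a gauge reading to 0.001 in comfortably satisfies the rule.
print(max_resolution)
```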

Accuracy. The accuracy of a measurement system is the relationship between the average of a number of successive measurements of a single part and the true value. When the measurement process yields a mean of successive measurements that differs from the true value, the instrument is not calibrated properly.
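In practice, accuracy is assessed by measuring a reference part whose accepted value stands in for the true value. A minimal sketch, with hypothetical readings:

```python
# Bias = mean of repeated readings minus the accepted reference (true) value.
reference = 10.000  # accepted value of a master (reference) part, hypothetical
readings = [10.02, 10.01, 10.03, 10.02, 10.02]  # hypothetical repeat readings

mean_reading = sum(readings) / len(readings)
bias = mean_reading - reference

# A nonzero bias indicates the instrument is not calibrated properly.
print(round(bias, 3))
```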

Precision. The precision of a measurement system is quantified by the spread of readings that result from successive measurements of the same part or sample. The standard deviation of measurement error is used to quantify the spread of the precision distribution. The common variation that creates the precision distribution comes from two different sources: repeatability, the variation observed when the same person measures the same item repeatedly with the same gauge, and reproducibility, the variation observed when different people (or different gauges) measure the same item.

Any study that quantifies repeatability and reproducibility contains much diagnostic information. This information should be used to focus measurement system improvement efforts. There are two basic ways of determining the repeatability and reproducibility. The first is the graphical way and the second is the calculation method via (a) standard deviation of measurement error, (b) the precision/tolerance (P/T) ratio, and (c) the long and short method. Although the methods for determining repeatability and reproducibility are beyond the scope of this book, standard deviation and P/T ratios are briefly discussed in the following sections. The reader is encouraged to see forms 13–15 in the CD for an example of the long and short repeatability and reproducibility. For additional reading on these items see Stamatis (2003), AIAG (2002), and Bothe (1997). However, the following statements should be remembered:

The standard deviation of measurement error. Because precision is separated into repeatability and reproducibility, the spread of the precision distribution is calculated as a composite. The standard deviation for measurement error is calculated as:

Se = (Srpt² + Srpd²)^(1/2)

where Srpt and Srpd are the standard deviations of repeatability and reproducibility, respectively.

Six standard deviations (6σ) of measurement error describe the spread of the precision distribution. The magnitude of this spread is evaluated with the precision/tolerance (P/T) ratio.
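The composite described above combines the two components in quadrature (their variances, not their standard deviations, add). A short sketch with hypothetical component values:

```python
# Standard deviation of measurement error: repeatability and reproducibility
# combine in quadrature (variances add, not standard deviations).
s_rpt = 0.003  # hypothetical repeatability standard deviation
s_rpd = 0.004  # hypothetical reproducibility standard deviation

s_e = (s_rpt**2 + s_rpd**2) ** 0.5  # composite measurement-error sigma
spread = 6 * s_e                    # 6-sigma spread of the precision distribution
print(s_e, spread)
```

Note that the composite (0.005) is smaller than the sum of the two components (0.007); adding the standard deviations directly would overstate measurement error.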

P/T ratio. A measurement system is declared adequate when the magnitude of measurement error is not too large. One way to evaluate the spread of the precision distribution is to compare it to the product tolerance. This is an absolute index of measurement error, because the product specifications should not change. A measurement system is acceptable if it is stable and does not consume a major portion of the product tolerance. The ratio between the precision distribution (six standard deviations of measurement error) and the product tolerance (upper specification limit minus lower specification limit) is called the P/T ratio and quantifies this relationship.
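The P/T ratio described above can be computed in a few lines; all numeric values here are hypothetical:

```python
# P/T ratio = (6 * standard deviation of measurement error) / tolerance.
s_e = 0.005            # hypothetical standard deviation of measurement error
usl, lsl = 2.55, 2.45  # hypothetical specification limits
tolerance = usl - lsl

pt_ratio = (6 * s_e) / tolerance
# A P/T ratio of 0.30 means measurement error consumes 30% of the tolerance.
print(f"P/T = {pt_ratio:.0%}")
```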

The following general criteria, consistent with common AIAG practice, are used to evaluate the size of the precision distribution:

P/T ratio — level of measurement error:
  10 percent or less — acceptable
  10 to 30 percent — marginal; may be acceptable depending on the importance of the application and the cost of the measurement system
  over 30 percent — unacceptable; the measurement system needs improvement

Stability and linearity are also measures of measurement system performance and an integral part of repeatability and reproducibility (R & R) studies. Repeatability and reproducibility are indices of measurement error based on relatively short periods of time. Stability describes the consistency of the measurement system over a long period. The additional time allows further opportunities for the sources of repeatability and reproducibility error to change and add errors to the measurement system. All measuring systems should be able to demonstrate stability over time. A control chart made from repeated measurements of the same items documents the level of a measurement system's stability. On the other hand, linearity is the difference in bias errors over the expected operating range of the measurement system.
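The stability check described above can be sketched as an individuals/moving-range control chart on repeated measurements of one master part. The readings and the part itself are hypothetical; the 2.66 multiplier is the standard individuals-chart constant:

```python
# Stability: control chart of repeated measurements of the same master part
# taken over time. Hypothetical daily readings of one reference part.
readings = [10.02, 10.01, 10.03, 10.02, 10.00, 10.02, 10.03, 10.01]

mean = sum(readings) / len(readings)
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Individuals-chart limits: mean +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# Points outside the limits (or nonrandom patterns) signal instability.
stable = all(lcl <= x <= ucl for x in readings)
print(stable)
```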
