15.2 The Evaluation Phase

You have figured out which data are important to look at, and now you are ready to analyze and evaluate them. This is the phase in which you can apply any applicable data analysis and statistical techniques to extract the messages within the data.

15.2.1 Quantitative Data

For quantitative analysis, tools such as control charts, trend charts, histograms, Pareto diagrams, and scatter diagrams, as well as statistical techniques ranging from simple tabulation analysis to sophisticated multivariate methods, are all fair game. It is our experience that simple techniques can be very powerful and that, most of the time, sophisticated statistical techniques are unnecessary. The key point is to garner useful information from the data. As discussed in previous chapters, we have found the effort/outcome paradigm particularly useful in assessing in-process metrics. Of course, the data gathered must include both effort indicators and outcome indicators in order to apply this approach, and this should be a consideration in the planning and preparation phase. At the least, to turn raw data into useful information, some meaningful comparisons with a relevant baseline, the plan, or a previous similar product need to take place.
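To make the point about simple techniques concrete, the following is a minimal sketch (not from the text; the weekly defect counts are hypothetical) of one of the tools named above, a c-chart for defect counts, using the standard three-sigma control limits, UCL/LCL = c-bar +/- 3*sqrt(c-bar):

```python
# Minimal c-chart sketch for weekly defect counts.
# The counts below are hypothetical, for illustration only.
import math

weekly_defects = [12, 9, 15, 11, 8, 14, 30, 10, 13, 12]

c_bar = sum(weekly_defects) / len(weekly_defects)   # centerline
ucl = c_bar + 3 * math.sqrt(c_bar)                  # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))        # lower limit, floored at 0

print(f"centerline={c_bar:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
for week, count in enumerate(weekly_defects, start=1):
    flag = "  <-- out of control, investigate" if count > ucl or count < lcl else ""
    print(f"week {week:2d}: {count}{flag}")
```

Even a listing this simple directs attention to the one week whose count falls outside the limits, which is exactly the "anything unusual" that the next paragraph recommends pursuing.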

When analyzing the data, it is always good practice to pay particular attention to anything unusual. Good questions to ask in such situations are, "What more can I learn about this?" and "How can I put this into perspective?" Figures 15.3, 15.4, and 15.5 include examples of data that bear further investigation. In Figure 15.3, Team A was significantly behind plan in its functional test and Component X had not even started its testing. In Figure 15.4, the defect arrival pattern of the current project differed from that of previous comparable projects. Was the higher defect volume in the early part of the defect curve due to more effective testing and better progress? Or were testing effectiveness and progress about the same as in previous projects at this point in the development cycle? In Figure 15.5, the test plan S-curve shows an unusual and potentially unachievable pattern.

Figure 15.3. Data on Functional Tests that Beg for Further Investigation

Figure 15.4. A Defect Arrival Pattern that Deviates from Historical Data

Figure 15.5. A Test Plan S-Curve Showing an Unusual Pattern
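The comparisons behind Figures 15.3 and 15.4 amount to lining up in-process data, week by week, against a baseline such as the plan or a comparable previous project. A minimal sketch of that comparison follows; the cumulative percentages and the 15-point alert threshold are hypothetical, not taken from the figures:

```python
# Week-by-week comparison of actual test progress against plan.
# All numbers and the alert threshold are hypothetical illustrations.
plan_cumulative   = [10, 25, 45, 70, 90, 100]  # planned % of test cases attempted
actual_cumulative = [8, 18, 30, 41, 55, 62]    # actual % attempted

for week, (plan, actual) in enumerate(zip(plan_cumulative, actual_cumulative), 1):
    gap = actual - plan
    note = "  <-- significantly behind plan" if gap <= -15 else ""
    print(f"week {week}: plan {plan:3d}%  actual {actual:3d}%  gap {gap:+d}%{note}")
```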

15.2.2 Qualitative Data

For the qualitative evaluation, information from interviews and open-ended probing can be classified, grouped, and correlated with existing knowledge and with findings from the quantitative analyses. The strongest proponents of quantitative methods argue that without metrics, an assessment is just another opinion. While quantitative data are important, our experience indicates that effective quality assessments are characteristically based on cross-validation of findings and observations from both quantitative data and qualitative evaluation. Expert opinions also carry special weight. In that regard, the assessor should be an acute observer, able to determine whether the input he or she is getting is true expert opinion or opinion clouded by other factors. For example, the development manager's opinion of the quality of the project may be optimistic while the testing manager's is pessimistic. It is not uncommon at project checkpoint review meetings for the reported status of the project to go from excellent to poor, or vice versa, in just a few moments, depending on the order of the presentations by the development, support, testing, and service groups.

15.2.3 Evaluation Criteria

Evaluation of qualitative data is based on expert judgment and cross-validation. For quantitative indicators, you may want to use predetermined criteria to ensure consistency. The following are sample criteria for the evaluation of quantitative indicators:

The following are sample criteria for a qualitative indicator (plan change):

The following are sample criteria for an indicator that may require both qualitative and quantitative evaluation (design status):
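The criteria lists themselves are not reproduced here. Purely as an illustration of how such predetermined criteria might be encoded so that every assessor applies them the same way, here is a minimal sketch for a quantitative indicator; the function name and the 95%/85% attainment thresholds are hypothetical, not from the text:

```python
# Hypothetical encoding of predetermined criteria for one quantitative
# indicator (test progress versus plan).  Thresholds are illustrative only.
def evaluate_test_progress(actual_pct: float, planned_pct: float) -> str:
    """Map test-progress attainment against plan to a consistent rating."""
    attainment = actual_pct / planned_pct if planned_pct else 0.0
    if attainment >= 0.95:
        return "on track"
    if attainment >= 0.85:
        return "caution"
    return "alert"

print(evaluate_test_progress(actual_pct=62, planned_pct=100))  # -> "alert"
```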
