Goal: To identify defects in artifacts created during the analysis and design phases of software construction.
Step 1: Define the scope and depth of the guided inspection.
Inputs:
The project's position in the life cycle.
The materials produced by the project (UML models, plans, use cases).
Outputs:
A specific set of diagrams and documents that will be the basis for the evaluation.
Method:
Define the scope of the guided inspection to be the set of deliverables from a phase of the development process. Use the development process information to identify the deliverables that will be produced by the phase of interest.
Example:
The project has just completed the domain analysis phase. The development process defines the deliverable from this phase as a UML model containing domain level use cases, static information such as class diagrams, and dynamic information such as sequence and state diagrams. The guided inspection will evaluate this model.
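As a minimal sketch of how this step could be supported mechanically, assuming the development process description has been captured as a simple Python mapping (the phase and deliverable names below are hypothetical placeholders):

    # Hypothetical capture of the development process description:
    # which deliverables each phase produces.
    PHASE_DELIVERABLES = {
        "domain analysis": ["domain-level use cases", "class diagrams",
                            "sequence diagrams", "state diagrams"],
        "application analysis": ["application use cases", "analysis class model"],
    }

    def inspection_scope(phase):
        """The scope of the guided inspection is the set of deliverables of the phase."""
        return PHASE_DELIVERABLES[phase]

    print(inspection_scope("domain analysis"))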
Step 2: Identify the basis model(s) from which the material being inspected was created.
Inputs:
The scope of the guided inspection.
The project's position in the life cycle.
Outputs:
The material from which the test cases will be constructed, that is, the basis model(s) from which the model under test (MUT) was created.
Method:
Review the development process description to determine the inputs to the current phase. The basis model(s) should be listed as inputs to the current phase.
Example:
The input to the domain analysis phase is the "knowledge of experts familiar with the domain." These mental models are the basis models for this guided inspection.
Step 3: Assemble the guided inspection team.
Inputs:
The scope of the guided inspection.
Available personnel.
Outputs:
A set of participants and their roles.
Method:
Assign persons to fill one of three categories of roles: Administrative, Participant in creating the model to be tested (the Creators), and Objective observer of the model to be tested. Choose the objective observers from among the customers of the model to be tested and from the participants in the creation of the basis model.
Example:
Since the model to be tested is a domain analysis model and the basis model is the mental models of the domain experts, the objective observers can be selected from other domain experts and/or from application analysts. The creation participants are members of the domain modeling team. The administrative personnel can perhaps come from other interested parties or an office that provides support for the conduct of guided inspections.
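A sketch of how the outcome of this step might be recorded, assuming a plain role-to-personnel mapping is sufficient (the assignments shown are placeholders drawn from the example above):

    from dataclasses import dataclass, field

    @dataclass
    class InspectionTeam:
        administrators: list = field(default_factory=list)  # run sessions, collect results
        creators: list = field(default_factory=list)        # built the model under test
        observers: list = field(default_factory=list)       # independent sources of test cases

    team = InspectionTeam(
        administrators=["guided inspection support office"],
        creators=["domain modeling team"],
        observers=["independent domain expert", "application analyst"],
    )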
Step 4: Define a sampling plan and coverage criteria.
Input:
The project's quality plan.
Outputs:
A plan for how test cases will be selected.
A description of what parts of the MUT will be covered.
Method:
Identify important elements of this MUT. Estimate the effort required to involve all of these in the guided inspection. If there are too many to cover, use information such as the RISK section of the use cases or the judgment of experts to prioritize the elements.
Example:
In a domain model there are static and dynamic models as well as use cases. At least one test case should be created for each use case. There should be sufficient test cases to take every "major" entity through all of its visible states.
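The prioritization described in the method can be sketched as a small selection routine, assuming each use case carries a numeric risk rating taken from its RISK section and that an effort budget limits how many can be covered (all names and numbers below are hypothetical):

    def plan_test_cases(use_cases, entities, effort_budget):
        """Select use cases highest-risk first, and list the state-coverage targets.

        use_cases: list of (name, risk) pairs; entities: list of (name, states) pairs.
        """
        prioritized = sorted(use_cases, key=lambda uc: uc[1], reverse=True)
        selected = prioritized[:effort_budget]          # at least one test case each
        state_targets = {name: list(states) for name, states in entities}
        return selected, state_targets

    selected, targets = plan_test_cases(
        use_cases=[("place order", 9), ("browse catalog", 3), ("cancel order", 7)],
        entities=[("Order", ["created", "paid", "shipped", "cancelled"])],
        effort_budget=2,
    )
    print(selected)   # [('place order', 9), ('cancel order', 7)]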
Step 5: Create test cases from the bases.
Inputs:
The sampling plan.
MUT.
Output:
A set of test cases.
Method:
Obtain a scenario from the basis model. Determine the preconditions and inputs that are required to place the system in the correct state and to begin the test. Present the scenario to the "oracle" to determine the results expected from the test scenario. Complete a test case description for each test case.
Example:
A domain expert different from the one who supported the model creation would be asked to supply scenarios that correspond to uses of the system. These experts would also provide what they consider an acceptable response.
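One possible shape for the test case description mentioned in the method, sketched as a Python data class (the field names mirror the items listed above; the structure itself is an assumption, not part of the process definition):

    from dataclasses import dataclass

    @dataclass
    class GuidedInspectionTestCase:
        identifier: str
        scenario: str            # scenario obtained from the basis model
        preconditions: list      # state the system must be in before the test begins
        inputs: list             # inputs required to begin the test
        expected_results: str    # acceptable response supplied by the oracle

    tc = GuidedInspectionTestCase(
        identifier="DA-001",
        scenario="A customer places an order for an in-stock item",
        preconditions=["catalog contains the item", "customer account exists"],
        inputs=["item id", "quantity"],
        expected_results="An Order is created and enters its initial state",
    )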
Step 6: Apply the tests to the material.
Inputs:
Set of test cases.
MUT.
Output:
Set of test results.
Method:
Apply the test cases to the MUT using the most specific technique available. For UML models in a static modeling environment, such as Rational Rose, the best approach is an interactive simulation session in which the Creators play the roles of the model elements. If the MUT is represented by an executable prototype, then the test cases are mapped onto that system and executed.
Example:
The domain analysis model is a static UML model. A simulation session is conducted with the Observers feeding test cases to the Creators. The Creators provide details of how the test scenario would be processed through the model. Sequence diagrams document the execution of each test case. Use agreed-upon symbols or colors to mark each element that is touched by a test case.
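If the marking is captured electronically rather than on paper, it can be as simple as the following sketch (the element and test case identifiers are placeholders):

    from collections import defaultdict

    # Maps each model element (class, message, attribute, state) to the
    # test cases that touched it during the simulation session.
    touched_by = defaultdict(set)

    def mark(element, test_case_id):
        """Record that a test case touched a model element."""
        touched_by[element].add(test_case_id)

    # As the Creators walk a scenario through the model, each touched
    # element is marked against the test case being executed.
    mark("Order", "DA-001")
    mark("Order.place()", "DA-001")
    mark("Customer", "DA-001")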
Step 7: Gather and analyze test results.
Inputs:
Test results in the form of sequence diagrams and pass/fail decisions. The marked-up model.
Outputs:
Statistics on percentage pass/fail.
Categorization of the results.
Defect catalogs and defect reports.
A judgment of the quality of the MUT and the tests.
Method:
Begin by counting the number of test cases that passed and the number that failed. Compare this ratio to the results of other guided inspections that have been conducted in the organization. Compute the percentage of each type of element that was used in executing the test cases, using the marked-up model as the source of this data. Update the defect inventory with information about the failures from this test session, and categorize the failed test cases. These tasks can often be combined by marking paper copies of the model: follow the sequence diagram for each failed test case and mark each message, class, and attribute it touches.
Example:
For the domain analysis model we should be able to report that every use case was the source of at least one test case, and that every class in the class diagram was used at least once. Typically, on the first pass, some significant states will be missed. This should be noted in the coverage analysis.
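The counting and coverage arithmetic in this step is straightforward; a sketch, assuming pass/fail verdicts are recorded per test case and the marked-up model is available as the touched_by mapping from the previous sketch:

    def summarize(verdicts, all_elements, touched_by):
        """Compute the pass percentage and the element coverage for a session.

        verdicts: dict mapping test case id -> True (pass) / False (fail).
        all_elements: every element of the MUT that should have been exercised.
        """
        passed = sum(1 for ok in verdicts.values() if ok)
        pass_pct = 100.0 * passed / len(verdicts) if verdicts else 0.0
        covered = {e for e in all_elements if touched_by.get(e)}
        coverage_pct = 100.0 * len(covered) / len(all_elements) if all_elements else 0.0
        return {"pass_pct": pass_pct, "coverage_pct": coverage_pct}

    print(summarize({"DA-001": True, "DA-002": False},
                    {"Order", "Customer", "Catalog"},
                    {"Order": {"DA-001"}, "Customer": {"DA-001", "DA-002"}}))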
Step 8: Report and feedback.
Inputs:
Test results.
Coverage information.
Outputs:
Information on what new tests should be created.
Test report.
Method:
Follow the standard format for a test report in your organization to document the test results and the analyses of those results. If the stated coverage goals are met then the process is complete. If not, use that report to return to Step 5 and proceed through the steps to improve the coverage level.
Example:
For the domain analysis tests, some elements were found to be missing from the model. The failing tests might be executed again after the model has been modified.
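The decision to return to Step 5 can be expressed as a simple check against the coverage goal from the sampling plan; a sketch, reusing the summary from the previous step (the goal value is a placeholder):

    COVERAGE_GOAL_PCT = 100.0   # e.g., every use case and every class exercised

    def needs_more_tests(session_summary):
        """True if the team should return to Step 5 and create additional test cases."""
        return session_summary["coverage_pct"] < COVERAGE_GOAL_PCT

    if needs_more_tests({"pass_pct": 50.0, "coverage_pct": 66.7}):
        print("Coverage goals not met: return to Step 5 and add test cases.")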
The three roles referred to throughout these steps are defined as follows.
Administrators: The administrative tasks include running the guided inspection sessions, collecting and disseminating the results, and aggregating metrics to measure the quality of the review. In our example, the administrative work could be done by personnel from a central office.
Creators: The persons who created the MUT. Depending on the form that the model takes, these people may "execute" the symbolic model on the test cases, or they may assist in translating the test cases into a form that can be executed with whatever representation of the model is available. In our example, the modelers who created the domain model would be the Creators.
Observers: Persons in this role create the test cases that are used in the guided inspection. In our example, they would be domain experts, preferably ones who were not the source of the information used to create the model initially.