Defect Taxonomies
'Failure' was simply not a word that would ever cross the lips of Miss Evelyn Duberry, mainly because Evelyn, a haughty socialite with fire-red hair and a coltish gait, could pronounce neither the letters 'f' nor 'r' as a result of an unfortunate kissing gesture made many years earlier toward her beloved childhood parrot, Snippy.
— David Kenyon
Introduction
What is a taxonomy? A taxonomy is a classification of things into ordered groups or categories that indicate natural, hierarchical relationships. The word taxonomy is derived from two Greek roots: "taxis" meaning arrangement and "nomos" meaning law or method. Taxonomies not only facilitate the orderly storage of information; they also facilitate its retrieval and the discovery of new ideas. Taxonomies help you:
- Guide your testing by generating ideas for test design
- Audit your test plans to determine the coverage your test cases are providing
- Understand your defects, their types and severities
- Understand the process you currently use to produce those defects (Always remember, your current process is finely tuned to create the defects you're creating)
- Improve your development process
- Improve your testing process
- Train new testers regarding important areas that deserve testing
- Explain to management the complexities of software testing
Key Point: A taxonomy is a classification of things into ordered groups or categories that indicate natural, hierarchical relationships.
In his book Testing Object-Oriented Systems, Robert Binder describes a "fault model" as a list of typical defects that occur in systems. Another phrase to describe such a list is a defect taxonomy. Binder then describes two approaches to testing. The first uses a "non-specific fault model." In other words, no defect taxonomy is used. Using this approach, the requirements and specifications guide the creation of all of our test cases. The second approach uses a "specific fault model." In this approach, a taxonomy of defects guides the creation of test cases. In other words, we create test cases to discover faults like the ones we have experienced before. We will consider two levels of taxonomies—project level and software defect level. Of most importance in test design are the software defect taxonomies. But it would be foolish to begin test design before evaluating the risks associated with both the product and its development process.
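Binder's "specific fault model" lends itself to mechanical support. As a minimal sketch, assuming a plain dictionary representation (the categories, defect phrases, and function name below are illustrative, not drawn from any published taxonomy), a taxonomy can be crossed with each feature under test to generate test ideas:

```python
# A minimal sketch of a "specific fault model": a small defect taxonomy
# encoded as a data structure and used to seed test design. All names
# and categories here are illustrative.
TAXONOMY = {
    "Boundary errors": ["numeric boundaries", "boundaries in loops"],
    "Error handling": ["error detection", "error recovery"],
    "Data handling": ["wrong value from a table", "data type errors"],
}

def test_ideas(feature: str) -> list[str]:
    """Cross every taxonomy entry with a feature to produce test-design prompts."""
    return [
        f"Test {feature} for {defect} ({category})"
        for category, defects in TAXONOMY.items()
        for defect in defects
    ]

for idea in test_ideas("checkout discount calculation"):
    print(idea)
```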
Note that none of the taxonomies presented below is complete. Each could be expanded, and each is subjective, reflecting the experience of those who created it.
Project Level Taxonomies
SEI Risk Identification Taxonomy
The Software Engineering Institute has published a "Taxonomy-Based Risk Identification" that can be used to identify, classify, and evaluate different risk factors found in the development of software systems.
| Class | Element | Attributes |
|---|---|---|
| Product Engineering | Requirements | Stability, Completeness, Clarity, Validity, Feasibility, Precedent, Scale |
| | Design | Functionality, Difficulty, Interfaces, Performance, Testability |
| | Code and Unit Test | Feasibility, Testing, Coding/Implementation |
| | Integration and Test | Environment, Product, System |
| | Engineering Specialties | Maintainability, Reliability, Safety, Security, Human Factors, Specifications |
| Development Environment | Development Process | Formality, Suitability, Process Control, Familiarity, Product Control |
| | Development System | Capacity, Suitability, Usability, Familiarity, Reliability, System Support, Deliverability |
| | Management Process | Planning, Project Organization, Management Experience, Program Interfaces |
| | Management Methods | Monitoring, Personnel Management, Quality Assurance, Configuration Management |
| | Work Environment | Quality Attitude, Cooperation, Communication, Morale |
| Program Constraints | Resources | Schedule, Staff, Budget, Facilities |
| | Contract | Type of Contract, Restrictions, Dependencies |
| | Program Interfaces | Customer, Associate Contractors, Subcontractors, Prime Contractor, Corporate Management, Vendors, Politics |
If, as a tester, you had concerns about some of these elements and attributes, you would want to stress certain types of testing. For example:
| If you are concerned about... | You might want to emphasize... |
|---|---|
| The stability of the requirements | Formal traceability |
| Incomplete requirements | Exploratory testing |
| Imprecisely written requirements | Decision tables and/or state-transition diagrams |
| Difficulty in realizing the design | Control flow testing |
| System performance | Performance testing |
| Lack of unit testing | Additional testing resources |
| Usability problems | Usability testing |
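Such a mapping is mechanical enough to encode directly. A minimal sketch; the concern labels are invented shorthand for SEI attributes, and the dictionary simply restates the table above:

```python
# A sketch of a risk-to-emphasis lookup built from the table above.
# The concern labels are invented shorthand for the SEI attributes.
EMPHASIS = {
    "requirements stability": "formal traceability",
    "requirements completeness": "exploratory testing",
    "requirements clarity": "decision tables and state-transition diagrams",
    "design difficulty": "control flow testing",
    "system performance": "performance testing",
    "unit testing": "additional testing resources",
    "usability": "usability testing",
}

def plan_emphases(concerns: list[str]) -> list[str]:
    """Return the testing emphases implied by the identified risk concerns."""
    return [EMPHASIS[c] for c in concerns if c in EMPHASIS]

print(plan_emphases(["requirements stability", "system performance"]))
# ['formal traceability', 'performance testing']
```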
ISO 9126 Quality Characteristics Taxonomy
The ISO 9126 Standard "Software Product Evaluation—Quality Characteristics and Guidelines" focuses on measuring the quality of software systems. This international standard defines software product quality in terms of six major characteristics and twenty-one subcharacteristics and defines a process to evaluate each of these. This taxonomy of quality attributes is:
| Quality Characteristic | Subcharacteristics |
|---|---|
| Functionality (Are the required functions available in the software?) | Suitability, Accuracy, Interoperability, Security |
| Reliability (How reliable is the software?) | Maturity, Fault tolerance, Recoverability |
| Usability (Is the software easy to use?) | Understandability, Learnability, Operability, Attractiveness |
| Efficiency (How efficient is the software?) | Time behavior, Resource behavior |
| Maintainability (How easy is it to modify the software?) | Analyzability, Changeability, Stability, Testability |
| Portability (How easy is it to transfer the software to another operating environment?) | Adaptability, Installability, Coexistence, Replaceability |
Each of these characteristics and subcharacteristics suggests areas of risk and thus areas for which tests might be created. An evaluation of the importance of these characteristics should be undertaken first so that the appropriate level of testing is performed. A similar "if you are concerned about / you might want to emphasize" process could be used based on the ISO 9126 taxonomy.
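One way to make that evaluation concrete is to weight each characteristic and let the weights drive the distribution of test effort. A hedged sketch: the importance ratings and the budget below are invented for illustration, and a real evaluation would come from stakeholder ratings.

```python
# Allocate a fixed testing budget across ISO 9126 characteristics in
# proportion to their assessed importance. All numbers are assumptions.
IMPORTANCE = {  # 1 (low) .. 5 (critical)
    "Functionality": 5,
    "Reliability": 4,
    "Usability": 3,
    "Efficiency": 2,
    "Maintainability": 3,
    "Portability": 1,
}

BUDGET_HOURS = 200
total_weight = sum(IMPORTANCE.values())
allocation = {c: round(BUDGET_HOURS * w / total_weight)
              for c, w in IMPORTANCE.items()}
print(allocation)  # {'Functionality': 56, 'Reliability': 44, ...}
```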
These project level taxonomies can be used to guide our testing at a strategic level. For help in software test design we use software defect taxonomies.
Software Defect Taxonomies
In software test design we are primarily concerned with taxonomies of defects, ordered lists of common defects we expect to encounter in our testing.
Beizer's Taxonomy
One of the first defect taxonomies was defined by Boris Beizer in Software Testing Techniques. It defines a four-level classification of software defects. The top two levels are shown here.
| Code | Category |
|---|---|
| 1xxx | Requirements |
| 11xx | Requirements incorrect |
| 12xx | Requirements logic |
| 13xx | Requirements completeness |
| 14xx | Verifiability |
| 15xx | Presentation, documentation |
| 16xx | Requirements changes |
| 2xxx | Features and Functionality |
| 21xx | Feature/function correctness |
| 22xx | Feature completeness |
| 23xx | Functional case completeness |
| 24xx | Domain bugs |
| 25xx | User messages and diagnostics |
| 26xx | Exception conditions mishandled |
| 3xxx | Structural Bugs |
| 31xx | Control flow and sequencing |
| 32xx | Processing |
| 4xxx | Data |
| 41xx | Data definition and structure |
| 42xx | Data access and handling |
| 5xxx | Implementation and Coding |
| 51xx | Coding and typographical |
| 52xx | Style and standards violations |
| 53xx | Documentation |
| 6xxx | Integration |
| 61xx | Internal interfaces |
| 62xx | External interfaces, timing, throughput |
| 7xxx | System and Software Architecture |
| 71xx | O/S call and use |
| 72xx | Software architecture |
| 73xx | Recovery and accountability |
| 74xx | Performance |
| 75xx | Incorrect diagnostics, exceptions |
| 76xx | Partitions, overlays |
| 77xx | Sysgen, environment |
| 8xxx | Test Definition and Execution |
| 81xx | Test design bugs |
| 82xx | Test execution bugs |
| 83xx | Test documentation |
| 84xx | Test case completeness |
Even considering only the top two levels, it is quite extensive. All four levels of the taxonomy constitute a fine-grained framework with which to categorize defects.
At the outset, a defect taxonomy acts as a checklist, reminding testers of defect types that might otherwise be forgotten. Later, the taxonomy can be used as a framework for recording defect data. Subsequent analysis of this data can help an organization understand the types of defects it creates, how many occur (in raw numbers and as percentages), and how and why they occur. Then, when faced with too many things to test and not enough time, you will have data that enables you to make risk-based, rather than random, test design decisions. In addition to using taxonomies to anticipate the types of defects that may occur, always evaluate the impact on the customer, and ultimately on your organization, should they occur. Defects with low impact may not be worth tracking down and repairing.
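The analysis step is straightforward once defects are filed against taxonomy categories. A minimal sketch, using an invented defect log tagged with Beizer-style category codes:

```python
# Tally defects by taxonomy category and report raw counts and
# percentages, so test-design effort can follow the data.
from collections import Counter

defect_log = [  # invented sample data: one category code per defect report
    "2xxx Features and Functionality", "4xxx Data",
    "2xxx Features and Functionality", "3xxx Structural Bugs",
    "4xxx Data", "2xxx Features and Functionality",
]

counts = Counter(defect_log)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.0%})")
# 2xxx Features and Functionality: 3 (50%), 4xxx Data: 2 (33%), ...
```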
Kaner, Falk, and Nguyen's Taxonomy
The book Testing Computer Software contains a detailed taxonomy consisting of over 400 types of defects. Only a few excerpts from this taxonomy are listed here.
| Category | Defect Type |
|---|---|
| User Interface Errors | Functionality |
| | Communication |
| | Command structure |
| | Missing commands |
| | Performance |
| | Output |
| Error Handling | Error prevention |
| | Error detection |
| | Error recovery |
| Boundary-Related Errors | Numeric boundaries |
| | Boundaries in space, time |
| | Boundaries in loops |
| Calculation Errors | Outdated constants |
| | Calculation errors |
| | Wrong operation order |
| | Overflow and underflow |
| Initial and Later States | Failure to set a data item to 0 |
| | Failure to initialize a loop control variable |
| | Failure to clear a string |
| | Failure to reinitialize |
| Control Flow Errors | Program runs amok |
| | Program stops |
| | Loops |
| | IF, THEN, ELSE or maybe not |
| Errors in Handling or Interpreting Data | Data type errors |
| | Parameter list variables out of order or missing |
| | Outdated copies of data |
| | Wrong value from a table |
| | Wrong mask in bit field |
| Race Conditions | Assuming one event always finishes before another |
| | Assuming that input will not occur in a specific interval |
| | Task starts before its prerequisites are met |
| Load Conditions | Required resource not available |
| | Doesn't return unused memory |
| Hardware | Device unavailable |
| | Unexpected end of file |
| Source and Version Control | Old bugs mysteriously reappear |
| | Source doesn't match binary |
| Documentation | None |
| Testing Errors | Failure to notice a problem |
| | Failure to execute a planned test |
| | Failure to use the most promising test cases |
| | Failure to file a defect report |
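To see how such a taxonomy generates test ideas, consider the "Boundary-Related Errors" and "Error Handling" entries applied to a hypothetical discount() function (invented here). A minimal pytest sketch:

```python
# Test ideas derived from Kaner-style taxonomy entries. The discount()
# function and its rules are invented for illustration.
import pytest

def discount(qty: int) -> float:
    """10% off for 10 or more items; qty must be positive."""
    if qty <= 0:
        raise ValueError("qty must be positive")
    return 0.10 if qty >= 10 else 0.0

@pytest.mark.parametrize("qty,expected", [
    (1, 0.0),     # lower boundary of the valid domain
    (9, 0.0),     # just below the discount boundary
    (10, 0.10),   # on the boundary
    (11, 0.10),   # just above it
])
def test_numeric_boundaries(qty, expected):
    assert discount(qty) == expected

def test_invalid_domain_is_rejected():  # "Error detection" entry
    with pytest.raises(ValueError):
        discount(0)
```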
Binder's Object-Oriented Taxonomy
Robert Binder notes that many defects in the object-oriented (OO) paradigm arise from the misuse of encapsulation, inheritance, polymorphism, message sequencing, and state-transition behavior. This is to be expected for two reasons. First, these are cornerstone concepts of OO; they form the basis of the paradigm and thus are used extensively. Second, these concepts differ sharply from the procedural paradigm, so designers and programmers new to OO can be expected to find them foreign. A small portion of Binder's OO taxonomy is given here to give you a sense of its contents:
Method Scope

| Phase | Category | Fault |
|---|---|---|
| Requirements | | Requirement omission |
| Design | Abstraction | Low cohesion |
| | Refinement | Feature override missing |
| | | Feature delete missing |
| | Encapsulation | Naked access |
| | | Overuse of friend |
| | Responsibilities | Incorrect algorithm |
| | | Invariant violation |
| | Exceptions | Exception not caught |

Class Scope

| Phase | Category | Fault |
|---|---|---|
| Design | Abstraction | Association missing or incorrect |
| | | Inheritance loops |
| | Refinement | Wrong feature inherited |
| | | Incorrect multiple inheritance |
| | Encapsulation | Public interface not via class methods |
| | | Implicit class-to-class communication |
| | Modularity | Object not used |
| | | Excessively large number of methods |
| Implementation | | Incorrect constructor |
Note how this taxonomy could be used to guide both inspections and test case design. Binder also references specific defect taxonomies for C++, Java, and Smalltalk.
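As an illustration of how one Binder entry translates into a concrete check, here is a hedged sketch aimed at the "feature override missing" fault; the classes and the test are invented:

```python
# Verify that a subclass actually overrides the behavior it is supposed
# to specialize, guarding against "feature override missing."
class Shape:
    def area(self) -> float:
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

def test_override_present():
    # Fails if Square forgot to override area() and inherited the stub.
    assert Square.area is not Shape.area
    assert Square(3.0).area() == 9.0

test_override_present()
```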
Whittaker's How to Break Software Taxonomy
James Whittaker's book How to Break Software is a tester's delight. Proponents of exploratory testing exhort us to "explore." Whittaker tells us specifically "where to explore." Not only does he identify areas in which faults tend to occur, but he also defines specific testing attacks to locate these faults. Only a small portion of his taxonomy is presented:
| Fault Type | Attack |
|---|---|
| Inputs and outputs | Force all error messages to occur |
| | Force the establishing of default values |
| | Overflow input buffers |
| Data and computation | Force the data structure to store too few or too many values |
| | Force computation results to be too large or too small |
| File system interface | Fill the file system to its capacity |
| | Damage the media |
| Software interfaces | Cause all error handling code to execute |
| | Cause all exceptions to fire |
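Here is a minimal sketch of one attack from the table, "overflow input buffers," applied to a hypothetical parse_username() function (invented here). The attack feeds a pathologically long input and requires a controlled rejection rather than a crash:

```python
# One Whittaker-style attack: overflow an input buffer and demand a
# controlled failure. parse_username() is a hypothetical function.
def parse_username(raw: str) -> str:
    MAX_LEN = 64
    if len(raw) > MAX_LEN:
        raise ValueError("username too long")
    return raw.strip()

def test_overflow_input_buffer():
    huge = "x" * 1_000_000          # far beyond any reasonable input
    try:
        parse_username(huge)
    except ValueError:
        pass                         # controlled failure: attack survived
    else:
        raise AssertionError("oversized input was accepted")

test_overflow_input_buffer()
```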
Vijayaraghavan's eCommerce Taxonomy
Beizer's, Kaner's, and Whittaker's taxonomies catalog defects that can occur in any system, while Binder's focuses on common defects in object-oriented systems. Giri Vijayaraghavan has chosen a much narrower focus: the eCommerce shopping cart, the familiar metaphor through which an eCommerce Web site keeps track of the state of a user while shopping. Vijayaraghavan has investigated the many ways shopping carts can fail. He writes, "We developed the list of shopping cart failures to study the use of the outline as a test idea generator." This is one of the prime uses of any defect taxonomy. His taxonomy lists over sixty high-level defect categories, some of which are listed here:
- Performance
- Reliability
- Software upgrades
- User interface usability
- Maintainability
- Conformance
- Stability
- Operability
- Fault tolerance
- Accuracy
- Internationalization
- Recoverability
- Capacity planning
- Third-party software failure
- Memory leaks
- Browser problems
- System security
- Client privacy
After generating the list he concludes, "We think the list is a sufficiently broad and well-researched collection that it can be used as a starting point for testing other applications." His assertion is certainly correct.
A Final Observation
Note that each of these taxonomies is a list of possible defects without any guidance regarding the probability that they will occur in your systems, and without any suggestion of the loss your organization would incur if they did occur. Taxonomies are useful starting points for our testing, but they are certainly not a complete answer to the question of what to test.
Your Taxonomy
Now that we have examined a number of different defect taxonomies, the question arises—which is the correct one for you? The taxonomy that is most useful is your taxonomy, the one you create from your experience within your organization. Often the place to start is with an existing taxonomy. Then modify it to more accurately reflect your particular situation in terms of defects, their frequency of occurrence, and the loss you would incur if these defects were not detected and repaired.
Key Point: The taxonomy that is most useful is your taxonomy, the one you create.
Just as there is no one single right way to categorize in other disciplines such as biology, psychology, and medicine, there is no one right software defect taxonomy. Categories may be fuzzy and overlap. Defects may not correspond to just one category. Our list may not be complete, correct, or consistent. That matters very little. What matters is that we are collecting, analyzing, and categorizing our past experience and feeding it forward to improve our ability to detect defects. Taxonomies are merely models and, as the famous statistician George Box reminds us, "All models are wrong; some models are useful."
To create your own taxonomy, start with a list of key concepts. Don't worry if your list becomes long; that may be just fine. Make sure the items in your taxonomy are short, descriptive phrases. Keep your users (that's you and other testers in your organization) in mind, and use terms that are common for them. Later, look for natural hierarchical relationships between items and combine them into major categories with subcategories underneath. Try not to duplicate or overlap categories and subcategories. Continue to add new categories as they are discovered, and revise the categories and subcategories when new items don't seem to fit well. Share your taxonomy with others and solicit their feedback. You are on your way to a taxonomy that will contribute to your testing success.
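A hedged sketch of keeping such a taxonomy as a living, revisable artifact; the categories and entries below are illustrative:

```python
# Maintain a personal defect taxonomy: short descriptive phrases filed
# under categories that are easy to add to and revise.
from collections import defaultdict

taxonomy: dict[str, set[str]] = defaultdict(set)

def add(category: str, item: str) -> None:
    """File a short, descriptive defect phrase under a category."""
    taxonomy[category].add(item)

add("Interface", "misleading error message")
add("Interface", "missing keyboard shortcut")
add("Data", "stale cache returned after update")

for category, items in sorted(taxonomy.items()):
    print(category)
    for item in sorted(items):
        print(f"  - {item}")
```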
Summary
- Taxonomies help you:
  - Guide your testing by generating ideas for test case design
  - Audit your test plans to determine the coverage your test cases are providing
  - Understand your defects, their types and severities
  - Understand the process you currently use to produce those defects (Always remember, your current process is finely tuned to create the defects you're creating)
  - Improve your development process
  - Improve your testing process
  - Train new testers regarding important areas that deserve testing
  - Explain to management the complexities of software testing
- Testing can be done without the use of taxonomies (non-specific fault model) or with a taxonomy (specific fault model) to guide the design of test cases.
- Taxonomies can be created at a number of levels: generic software system, development paradigm, type of application, and user interface metaphor.
References
Beizer, Boris (1990). Software Testing Techniques (Second Edition). Van Nostrand Reinhold.
Binder, Robert V. (2000). Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley.
Carr, Marvin J., et al. (1993). "Taxonomy-Based Risk Identification." Technical Report CMU/SEI-93-TR-6, ESC-TR-93-183, June 1993. http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr06.93.pdf
ISO (2001). ISO/IEC Standard 9126-1: Software Engineering - Product Quality - Part 1: Quality Model. ISO Copyright Office, Geneva, June 2001.
Kaner, Cem, Jack Falk, and Hung Quoc Nguyen (1999). Testing Computer Software (Second Edition). John Wiley & Sons.
Vijayaraghavan, Giri and Cem Kaner. "Bugs in your shopping cart: A Taxonomy." http://www.testingeducation.org/articles/BISC_Final.pdf
Whittaker, James A. (2003). How to Break Software: A Practical Guide to Testing. Addison-Wesley.