Succeeding with Use Cases: Working Smart to Deliver Quality
Your product has been in final system test for days; or has it been weeks? Surely it must be time to stop testing and release it? It's the moment of decision, and you realize: damned if you do; damned if you don't. Release it too early, and you incur the wrath of customers afflicted with a buggy product and the high cost of fixing and testing defects released to production. Hold the product in testing, and you incur the wrath of marketing as they remind you of the revenue being lost on top of the cost of too much testing. There is a sweet spot in testing, a point that strikes the perfect balance between releasing the product too early and releasing it too late.[1] But how do you know you are close to that sweet spot?

[1] Although the examples in this chapter are couched in terms of final system test, in the Unified Software Development Process the question of whether to stop or continue testing for and fixing defects is pertinent throughout the construction and transition phases: in testing increments of the system at the end of each iteration, to determine whether moving to the next iteration is warranted; in final system test at the end of the construction phase, to determine whether the product is reliable enough for beta test; and in beta test during the transition phase, to determine whether the system is ready for full commercial release.

The second idea Software Reliability Engineering (SRE) brings to use case development is a quantitative way to talk about reliability, providing a sound basis for determining when a product's reliability goal has been reached, testing can terminate, and the product can be released. In this chapter, we'll do the following:
Let's begin by defining what we mean by reliability.