Design for Trustworthy Software: Tools, Techniques, and Methodology of Developing Robust Software
As we stated, complexity is the mother of all nonconformities. It must be addressed before poka yoke deployment. Chapters 1 and 3 discussed complexity as a challenge to developing trustworthy software from the software developer's perspective. From the user's perspective, the question "Why does software have bugs?" will not go away. The response to that question is that software is intrinsically more complex than hardware because it has more states, or modes of behavior. An integrated enterprise business application system, for example, is likely to have 2,500 or more input forms. No machine has such a large number of operating modes. Computers, controlled by software, have more states (that is, larger performance envelopes) than do other, essentially mechanical, systems. Thus, they are intrinsically more complex.

Two vital questions from the developer's perspective are "Do we understand the nature of complexities in software?" and "Can we measure them?" The fact is that we do not, and we are not likely to fully understand or measure software complexity anytime soon. A number of approaches have been taken to calculate, or at least estimate, the degree of complexity. The simplest is lines of code (LOC), a count of the executable statements in a computer program. Although this metric began in the days of assembly-language programming, it is still used today for programs written in high-level programming languages. Most procedural third-generation memory-to-memory languages, such as FORTRAN, COBOL, and ALGOL, typically produce about six executable machine-language (ML) statements per source statement, whereas register-to-register languages such as C, C++, and Java produce about three. Recent studies show a curvilinear relationship between defect rate and LOC: defect density appears to decrease with program size and then increase again as program modules become very large (see Chapter 3). Curiously, this result suggests that there may be an optimum program size leading to a lowest defect rate, depending, of course, on programming language, project, product, and environment. McCabe proposed a topological (graph-theory) measure, cyclomatic complexity, which counts the linearly independent paths through a computer program (see Chapter 3). Kan has reported that, other than module length, the most important predictors of defect rates are the number of design changes and the complexity level.[14]

We may never fully comprehend complexity and measure it as such, but we now have a number of identifiable surrogates:

- Lines of code (LOC)
- McCabe's cyclomatic complexity
- Module length
- Number of design changes
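The first two surrogates are the easiest to compute directly from source code. The following minimal Python sketch (illustrative only, not from the text) shows one way to do so; the function names are our own, and the set of constructs counted as decision points is a simplifying assumption, since production metric tools count more cases.

import ast

def loc(source: str) -> int:
    # Rough LOC metric: count non-blank lines that are not pure comments.
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def cyclomatic_complexity(source: str) -> int:
    # Approximate McCabe's metric as 1 + the number of decision points.
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.IfExp, ast.For,
                             ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # Each extra and/or operand adds one more independent path.
            decisions += len(node.values) - 1
    return decisions + 1

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    for ch in str(x):
        if ch == "3" or ch == "7":
            return "lucky digit"
    return "plain"
'''

print(loc(sample), cyclomatic_complexity(sample))  # prints: 7 5

Note that two such measures can disagree about which of two modules is "more complex," which is precisely why the text treats them as surrogates rather than true measures of complexity.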
All these are related to complexity, so where do we start? Hinckley reports a link between assembly time and defect rate, and he proposes assembly time as a measure of complexity:[15] Every change that reduces product or process complexity also reduces the total time required to procure materials, fabricate parts, or assemble products. Thus, time is a remarkably useful standard of complexity because it is fungible, or, in other words, an interchangeable standard useful in measuring and comparing the complexity of different product attributes. As a result, we can use time to compare the difficulty of installing a bolt with that of an alternative snap-fit assembly. In addition, time has a common international value that is universally recognized and easily understood.

Designing a product for reduced complexity therefore means designing to reduce time, and doing so results in fewer mistakes. Hinckley cautions, however, that care must be taken when using time as a measure of complexity, because worker skill, training, and the workplace strongly influence how long it takes to perform similar tasks.[16] Task complexity can be reduced by product designs that take less time to complete. This involves addressing two fundamental issues:
- Reducing product complexity
- Reducing process complexity
Reducing process complexity is as important as reducing product complexity. But unlike manufacturing, software product design is intricately linked to its development process. Reducing process complexity involves asking three basic questions: Is a robust software process in place to attain the complexity-reduction objective? Is the process adequately supported by value analysis, standardization of best practices, and the necessary documentation? Are the process and its supporting elements actually used and observed in practice? This broad framework consists of the following:

- A robust software process directed at the complexity-reduction objective
- Supporting elements for that process: value analysis, standardization of best practices, and the necessary documentation
- Adherence, so that the process and its supporting elements are used and observed in practice
Managing complexity is one of the most critical software quality assurance tasks and one of the major challenges in software development. Complexity is a root cause of both variation-based and mistake-based nonconformities, and managing it must precede any poka yoke deployment, because doing so correctly reduces mistake-based nonconformities substantially. Recognizing the design flaws that produce complexity, and detecting mistakes, are best done early in the design phases and should be planned accordingly. Complexity, in particular, can be corrected only upstream, in the concept development and design stages. For mistake detection, the payoff from 100% inspection upstream at the source is substantially higher than that from inspections downstream. This sets the stage for mistake reduction using poka yoke.
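To make "100% inspection at the source" concrete in software terms, here is a minimal Python sketch (an illustration under assumed names, not an example from the text): a value is validated once, at the point where it enters the system, so downstream code never needs to re-inspect it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Percentage:
    # A source-inspection guard: an invalid value cannot enter the system.
    value: float

    def __post_init__(self):
        # 100% inspection at the source: every instance is checked on creation.
        if not 0.0 <= self.value <= 100.0:
            raise ValueError(f"percentage out of range: {self.value}")

def apply_discount(price: float, discount: Percentage) -> float:
    # Downstream code relies on the invariant instead of re-checking it.
    return price * (1 - discount.value / 100.0)

print(apply_discount(80.0, Percentage(25.0)))  # 60.0
# Percentage(140.0) raises ValueError here, at the source,
# rather than surfacing as a wrong answer far downstream.

Catching the mistake where it is made, rather than where its consequences appear, is the kind of leverage that mistake reduction using poka yoke is meant to provide.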