Software Security: Building Security In

One practice foundational to any science is measurement. As Lord Kelvin put it:

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.

Measurement is critical to the future of software security. Only by quantifying our approach and its impact can we answer questions such as: How secure is my software? Am I better off now than I was before? Am I making an impact on the problem? How can I estimate and transfer risk?

We can begin to approach the measurement problem by recycling numbers from the software literature. For example, we know that fixing software problems at the design stage is much cheaper than fixing them later in the lifecycle.[7] An IBM study reports relative cost weightings as: design, 1; implementation, 6.5; testing, 15; maintenance, 100. We also know relative cost expenditures for lifecycle stages: design, 15%; implementation, 60%; testing, 25%. These and similar numbers can provide a foundation for measuring the impact of software security.

[7] See Chapter 3, Figure 3-2.
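To make the published weightings concrete, consider a back-of-the-envelope calculation. The sketch below uses the relative cost weightings quoted above; the defect counts are illustrative assumptions, not data from the IBM study.

# Rough sketch: relative remediation cost using the published cost
# weightings (design=1, implementation=6.5, testing=15, maintenance=100).
# The defect counts below are illustrative assumptions, not measured data.

COST_WEIGHT = {
    "design": 1.0,
    "implementation": 6.5,
    "testing": 15.0,
    "maintenance": 100.0,
}

def relative_cost(defects_by_stage):
    """Sum the relative cost units for defects fixed at each stage."""
    return sum(COST_WEIGHT[stage] * count
               for stage, count in defects_by_stage.items())

# Scenario A: most flaws caught during design review.
early = {"design": 8, "implementation": 2, "testing": 0, "maintenance": 0}
# Scenario B: the same flaws slip through and surface in the field.
late = {"design": 0, "implementation": 0, "testing": 2, "maintenance": 8}

print("early:", relative_cost(early))   # 8*1 + 2*6.5  = 21 cost units
print("late: ", relative_cost(late))    # 2*15 + 8*100 = 830 cost units

Even in this small hypothetical, shifting discovery of most defects from maintenance back to design reduces the relative remediation cost by more than an order of magnitude.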

Measuring Return

A preliminary study reported by @stake (now part of Symantec) demonstrates the importance of concentrating security analysis efforts at the design stage relative to the implementation and testing phases (see Figure 2-2). Microsoft reports that more than 50% of the software security problems it finds are design flaws.

Figure 2-2. Return on investment (ROI) as measured by @stake over 23 security engagements.[8]

[8] See the trade magazine article by Kevin Soo Hoo, Andrew Sudbury, and Andrew Jaquith, "Tangible ROI through Secure Software Engineering," Secure Business Quarterly, Q4 2001 <http://www.sbq.com/sbq/rosi/sbq_rosi_software_engineering.pdf>.
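The @stake methodology itself is described in the cited article. As a generic illustration of the kind of arithmetic involved (not @stake's calculation), return on a security activity is often expressed as avoided loss relative to the cost of the activity; the numbers below are hypothetical.

# Generic ROI sketch (not the @stake methodology): return expressed as
# avoided remediation cost relative to the cost of the security activity.
def security_roi(avoided_loss, activity_cost):
    """ROI = (benefit - cost) / cost, expressed as a percentage."""
    return (avoided_loss - activity_cost) / activity_cost * 100

# Hypothetical numbers: a design review costing 20 units that prevents
# flaws which would otherwise cost 120 units to fix after release.
print(f"{security_roi(avoided_loss=120, activity_cost=20):.0f}% ROI")  # 500% ROI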

Risk management calls for quantitative decision support. Work remains to be done on measuring software security and software security risk, but some metrics are obvious. The most effective metrics involve tracking risk over time.

Measurement and Metrics in the RMF

The most natural and easiest form of measurement in the RMF involves measuring and tracking information about risks and risk status at various times throughout application of the RMF. The Cigital Workbench (explained in the next section) helps to automate this activity. The fact that software development unfolds over time is a boon for measurement because a relative quantity (such as number of risks) measured at two different times can be used to indicate progress.
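A minimal sketch of such time-indexed measurement, assuming a simple snapshot structure (the dates and counts are invented for illustration, not Workbench output), might look like this:

from datetime import date

# Minimal sketch: comparing risk counts taken at two points in the lifecycle.
# The snapshots below are illustrative, not output from any particular tool.
snapshots = {
    date(2006, 1, 15): {"identified": 24, "outstanding": 24},
    date(2006, 3, 15): {"identified": 31, "outstanding": 12},
}

(earlier, later) = sorted(snapshots)
mitigated = snapshots[later]["identified"] - snapshots[later]["outstanding"]
print(f"Outstanding risks: {snapshots[earlier]['outstanding']} -> "
      f"{snapshots[later]['outstanding']}")
print(f"Mitigation status: {mitigated / snapshots[later]['identified']:.0%}")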

Risk measurements include but are not limited to:

  • Outstanding risks by priority

  • Identified risks by priority

  • Outstanding risks by type

  • Identified risks by type

  • Outstanding risks by subtype

  • Identified risks by subtype

  • Overall risk mitigation status percentage

  • Risk mitigation by priority: percentage resolved and percentage outstanding

  • Risk mitigation by priority: number resolved and number outstanding

  • Number of outstanding risks by financial impact

  • Number of identified risks by financial impact

  • Number of risks identified without defined mitigation by priority

  • Number of risks identified without defined mitigation by type

  • Risk discovery rate by priority

  • Risk discovery rate by type

  • Risk mitigation rate by priority

  • Risk mitigation rate by type

  • Number of outstanding risks by schedule impact

These kinds of measurements should be made as early as possible and as continuously as possible during the SDLC.
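As a minimal sketch of how a few of the measurements listed above might be derived from raw risk records (the record fields and sample data are assumptions for illustration, not a prescribed schema), consider:

from collections import Counter
from dataclasses import dataclass

# Minimal sketch of deriving a few of the listed measurements from raw risk
# records. The record fields and sample data are assumptions for illustration.
@dataclass
class Risk:
    priority: str      # e.g. "high", "medium", "low"
    rtype: str         # e.g. "design flaw", "implementation bug"
    mitigated: bool

risks = [
    Risk("high", "design flaw", False),
    Risk("high", "implementation bug", True),
    Risk("medium", "design flaw", True),
    Risk("low", "implementation bug", False),
]

identified_by_priority = Counter(r.priority for r in risks)
outstanding_by_priority = Counter(r.priority for r in risks if not r.mitigated)
mitigation_pct = sum(r.mitigated for r in risks) / len(risks) * 100

print("Identified by priority: ", dict(identified_by_priority))
print("Outstanding by priority:", dict(outstanding_by_priority))
print(f"Overall mitigation status: {mitigation_pct:.0f}%")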
