A Practical Approach to Configuration Verification and Audit

OVERVIEW

EIA-649, the standard for configuration management, requires the organization to verify that a product's stated requirements have been met. Verification can be accomplished by a systematic comparison of requirements with the results of tests, analyses, and inspections.

There are three components to establishing a rigorous configuration verification and audit methodology:

  1. Establishing and implementing a standard design and document verification methodology
  2. Establishing and implementing a standard configuration audit methodology
  3. Establishing and implementing a standard testing methodology

This chapter correlates the CM process of verification and audit with traditional software engineering testing methodology.

COMPONENTS OF A DESIGN AND DOCUMENT VERIFICATION METHODOLOGY

The basis of configuration management is the rigorous control and verification of all system artifacts. These artifacts include:

EIA-649 states that the documentation must be accurate and sufficiently complete to permit reproduction of the product without further design effort. In other words, if the set of documents described above were given to a different team of programmers, exactly the same system would be produced.

Activities that can accomplish this end include:

COMPONENTS OF A CONFIGURATION AUDIT METHODOLOGY

Configuration audit requires the following resources:

Audits can be performed upon implementation of a new system or during maintenance of an existing one. Prior to conducting the audit, the audit plan, which details the scope of the effort, is created and approved by all appropriate personnel.

The audit process itself is not unlike the testing process described later in this chapter. An audit plan, therefore, is very similar to a test plan. During the audit, auditors:

Audit minutes provide a detailed record of the findings, recommendations, and conclusions of the audit committee. The committee follows up until all required action items are complete.

COMPONENTS OF A TESTING METHODOLOGY

The goal of testing is to uncover and correct errors. Because software is so complex, it is reasonable to assume that software testing is a labor- and resource-intensive process. Automated software testing helps to improve testers' productivity and reduce the resources required. By its very nature, automated software testing can increase test coverage levels, speed up test turnaround time, and cut the costs of testing.

The classic software development life-cycle model suggests a systematic, sequential approach to software development that progresses through software requirements analysis, design, code generation, and testing. That is, once the source code has been generated, program testing begins with the goal of finding differences between the expected behavior specified by system models and the observed behavior of the system.

The process of creating software applications that are error-free requires technical sophistication in the analysis, design, and implementation of that software, proper test planning, and robust automated testing tools. When planning and executing tests, software testers must consider the software and the function it performs, the inputs and how they can be combined, and the environment in which the software will eventually operate.

There are a variety of software testing activities that can be implemented, as discussed below.

Inspections

Software development is, for the most part, a team effort, although the programs themselves are coded individually. Periodically throughout the coding of a program, the project leader or project manager will schedule inspections (see Appendices F and G). An inspection is the process of manually examining the code in search of common errors.

Each programming language has its own set of common errors. It is therefore worthwhile to spend a bit of time in search of these errors.

An example of what an inspection team would look for follows:

  int i;
  for (i = 0; i < 10; i++)
  {
      cout << "Enter velocity for " << i
           << "numbered data point: "
      cin >> data_point[i].velocity;
      cout << "Enter direction for that data point"
           << " (N, S, E or W): ";
      cin >> data_point[i].direction;
  }

The C++ code displayed above looks correct. However, a semicolon is missing from the end of the fifth line of code:

  int i;
  for (i = 0; i < 10; i++)
  {
      cout << "Enter velocity for " << i
           << "numbered data point: ";   //the ; was missing
      cin >> data_point[i].velocity;
      cout << "Enter direction for that data point"
           << " (N, S, E or W): ";
      cin >> data_point[i].direction;
  }

Walk-Throughs

A walk-through is a manual testing procedure that examines the logic of the code. This is somewhat different from an inspection, where the goal is to find syntax errors. The walk-through procedure attempts to answer the following questions:

The best way to handle a walk-through is:
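
As an illustration of the kind of defect a walk-through is intended to find (as opposed to the syntax error shown in the inspection example above), consider the following minimal sketch. The averaging function and its data are invented for illustration and are not part of any system discussed in this chapter; the point is that the code compiles cleanly, so an inspection would pass it, yet tracing its logic reveals an off-by-one error.

  #include <iostream>
  using namespace std;

  // Computes the average of the first n readings.
  // The code compiles cleanly, so an inspection would pass it, but a
  // walk-through of the logic reveals that the loop condition "i <= n"
  // reads one element past the intended range (an off-by-one error).
  double average(const double readings[], int n)
  {
      double sum = 0.0;
      for (int i = 0; i <= n; i++)    // logic error: should be i < n
          sum += readings[i];
      return sum / n;
  }

  int main()
  {
      double readings[5] = {2.0, 4.0, 6.0, 8.0, 10.0};
      cout << "Average: " << average(readings, 5) << endl;
      return 0;
  }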

Unit Testing

During the early stages of the testing process, the programmer usually performs all tests. This stage of testing is referred to as unit testing. Here, the programmer usually works with the debugger that accompanies the compiler. For example, Visual Basic, as shown in Figure 9.1, enables the programmer to "step through" a program's (or object's) logic one line of code at a time, viewing the value of any and all variables as the program proceeds.

Figure 9.1: Visual Basic, along with Other Programming Toolsets, Provides Unit Testing Capabilities to Programmers
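
Stepping through code in a debugger is one way to unit test; another is for the programmer to write a small, self-contained test program that exercises a single function with known inputs and compares the observed results with the expected results. The following is a minimal sketch; the velocity-conversion function and its expected values are assumptions made for illustration and are not drawn from any system described in this chapter.

  #include <cassert>
  #include <iostream>
  using namespace std;

  // Unit under test: converts a velocity in km/h to m/s.
  double kmh_to_ms(double kmh)
  {
      return kmh * 1000.0 / 3600.0;
  }

  // A simple unit test: exercise the function with known inputs and
  // compare the observed behavior with the expected behavior.
  int main()
  {
      assert(kmh_to_ms(0.0) == 0.0);
      assert(kmh_to_ms(36.0) > 9.99 && kmh_to_ms(36.0) < 10.01);   // 36 km/h is 10 m/s
      assert(kmh_to_ms(90.0) > 24.99 && kmh_to_ms(90.0) < 25.01);  // 90 km/h is 25 m/s
      cout << "All unit tests passed." << endl;
      return 0;
  }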

Daily Build and Smoke Test

McConnell [1996] describes a testing methodology commonly used at companies such as Microsoft that sell shrink-wrapped software. The "daily build and smoke" test is a process whereby every file is compiled, linked, and combined into an executable program on a daily basis. The executable is then put through a "smoke" test, a relatively simple check, to see if the product "smokes" when it runs.
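
The smoke check itself need not be elaborate. The following minimal sketch assumes a hypothetical order-entry product whose major functions are wrapped by openDatabase(), placeOrder(), and closeDatabase(); these names, and the placeholder bodies that make the sketch self-contained, are assumptions made for illustration. A nonzero exit status tells the daily build that the product "smoked."

  #include <iostream>
  using namespace std;

  // In a real daily build these would be the product's own entry points;
  // placeholder bodies are used here so that the sketch is self-contained.
  bool openDatabase()  { return true; }
  bool placeOrder()    { return true; }
  bool closeDatabase() { return true; }

  // The smoke test exercises the major paths end to end, without detail.
  // A nonzero exit status breaks the daily build and signals that the
  // product "smoked" when it ran.
  int main()
  {
      if (!openDatabase())  { cout << "SMOKE: database did not open" << endl;     return 1; }
      if (!placeOrder())    { cout << "SMOKE: order could not be placed" << endl; return 1; }
      if (!closeDatabase()) { cout << "SMOKE: database did not close" << endl;    return 1; }
      cout << "Smoke test passed." << endl;
      return 0;
  }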

According to McConnell [1996], this simple process has some significant benefits, including:

For this to be a successful effort, the build part of this task should:

The "smoke" part of this task should:

Integration Testing

A particular program is usually made up of many modules. An OO (object-oriented) system is composed of many objects. Programmers usually architect their programs in a top-down, modular fashion. Integration testing proves that the module interfaces are working properly. For example, in Figure 9.2, a programmer doing integration testing would ensure that Module2 (the Process module) correctly interfaces with its subordinate, Module2.1 (the Calculate process).

Figure 9.2: Integration Testing Proves that Module Interfaces Are Working Properly

If Module2.1 had not yet been written, it would have been referred to as a stub. It would still be possible to perform integration testing if the programmer inserted two or three lines of code into the stub, just enough to prove that it integrates correctly with Module2.

On occasion, a programmer will code all the subordinate modules first and leave the higher-order modules for last. This is known as bottom-up programming. In this case, Module2 would be empty, save for a few lines of code to prove that it is integrating correctly with Module2.1, etc. Module2 would then be referred to as a driver.
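
A minimal sketch of the stub and driver idea follows, using the Module2 (Process) and Module2.1 (Calculate) roles from Figure 9.2; the function names and the fixed return value are assumptions made for illustration, not code from an actual system.

  #include <iostream>
  using namespace std;

  // Stub for Module2.1 (Calculate): the real calculation has not been written,
  // so two or three lines of code stand in for it, just enough to prove that
  // Module2 reaches it and receives a result across the interface.
  double calculate(double input)
  {
      cout << "calculate() stub reached with input " << input << endl;
      return 42.0;    // fixed, obviously artificial value
  }

  // Module2 (Process) calls its subordinate, Module2.1 (Calculate).
  double process(double input)
  {
      return calculate(input) * 2.0;
  }

  // Driver: in bottom-up development the roles reverse; a few lines of
  // higher-level code exist only to invoke the lower-level module and
  // confirm that the interface behaves as expected.
  int main()
  {
      double result = process(10.0);
      cout << "process() returned " << result << endl;    // expect 84
      return 0;
  }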

System Testing

Where integration testing is performed on the discrete programs or objects within a master program, system testing refers to testing the interfaces between programs within a system. Because a system can be composed of hundreds of programs, this is a vast undertaking.

Parallel Testing

It is quite possible that the system being developed is a replacement for an existing system. In this case, parallel testing is performed. The goal here is to compare the outputs generated by the two systems (old versus new) and determine the reasons for any differences.

Parallel testing requires that the end user(s) be part of the testing team. If the end users determine that the new system is working correctly, the customer has, in effect, accepted the system. This, then, is a form of customer acceptance testing.
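
A minimal sketch of the comparison step in parallel testing follows, assuming that the old and new systems each write their results to a plain text file (old_output.txt and new_output.txt are invented file names). The program reports every line on which the two systems disagree so that the differences can be investigated with the end users.

  #include <fstream>
  #include <iostream>
  #include <string>
  using namespace std;

  // Compare the outputs of the old and new systems line by line and
  // report every difference for the testing team to investigate.
  int main()
  {
      ifstream oldOut("old_output.txt");
      ifstream newOut("new_output.txt");
      if (!oldOut || !newOut)
      {
          cerr << "Could not open one of the output files." << endl;
          return 1;
      }

      string oldLine, newLine;
      int lineNo = 0, differences = 0;
      while (getline(oldOut, oldLine) && getline(newOut, newLine))
      {
          ++lineNo;
          if (oldLine != newLine)
          {
              ++differences;
              cout << "Line " << lineNo << " differs:" << endl
                   << "  old: " << oldLine << endl
                   << "  new: " << newLine << endl;
          }
      }
      cout << differences << " difference(s) found in " << lineNo << " line(s) compared." << endl;
      return differences == 0 ? 0 : 2;
  }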

THE QA PROCESS

As the testing progresses, testing specialists may become involved (see Appendix H for a sample QA Handover Document). Within the vernacular of IT, staff members who are dedicated to performing testing are referred to as quality assurance (QA) engineers and reside within the quality assurance department. QA testers must have a good understanding of the program being tested, as well as the programming language in which the program was coded. In addition, the QA engineer must be methodical and able to grasp complex logic. Generally speaking, technical people with these attributes are hard to come by and even harder to keep, as most of them aspire to become programmers themselves.

Even simple software can present testers with obstacles. Couple this complexity with the difficulty of attracting and keeping QA staff and one has the main reason why many organizations now automate parts of the testing process.

THE TEST PLAN

Software testing is one critical element of software quality assurance (SQA) that aims to determine the quality of the system and its related models. In such a process, a software system will be executed to determine whether it matches its specification and executes in its intended environment. To be more precise, the testing process focuses on both the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, by conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.

To ensure that the testing process is complete and thorough, it is necessary to create a test plan (Appendix E).

A thorough test plan consists of the items listed in Table 9.1. A sample test plan, done by this author's students for an OO dog grooming system, can be found in Appendix E. While all components of this test plan are important, one notes that the test plan really focuses on three things:

  1. The test cases
  2. Metrics that will determine whether there has been testing success or failure
  3. The schedule

Table 9.1: Thorough Test Plan
  1. Revision History
  2. System Introduction

    • 2.1 Goals and Objectives
    • 2.2 Statement of Scope
    • 2.3 Major Constraints
  3. Test Plan

    • 3.1 System Description
    • 3.2 Testing Strategy
    • 3.3 Testing Resources
    • 3.4 Testing Metrics
    • 3.5 Testing Artifacts
    • 3.6 Testing Schedule
  4. Test Procedures

    • 4.1 Class Testing
    • 4.2 Integration Testing
  5. Appendix 1: Class Testing Test Cases

    • 5.1 Application Controller Sub-system
    • 5.2 User Management Sub-system
    • 5.3 Resource Management Sub-system
    • 5.4 Order Sub-system
    • 5.5 Accounting Sub-system
    • 5.6 Customer Relationship Management Sub-system
    • 5.7 Persistence Sub-system
  6. Appendix 2: Integration Testing Test Cases

    • 6.1 Customer Registration
    • 6.2 Reallocate Resources
    • 6.3 Search for Service Provider and Initiate Order
    • 6.4 Place Order
    • 6.5 Pay for Service
  7. Appendix: Project Schedule

The test cases are the heart of the test plan. A test case specifies the exact steps the tester must take to test a particular function of the system. A sample test case appears in Table 9.2.

Table 9.2: Sample Test Case

C1. Log-in Page (based on case A1)

Description/Purpose:

Test the proper display of the log-in page, verification of the user, and redirection of the user to the correct screen.

Stubs Required:

  • Data Interface: The data interface stub/driver is required to verify log-in data as the user enters it through the Web page.
  • Database Interface: The database interface stub/driver is required to check the database for log-in data.

Steps:

  1. Go to the main log-in screen.
  2. Enter the username and password, both including special characters. This should be the login of a customer.
  3. Execute Data Interface stub within the test framework to confirm that ID and password are a valid pair.
  4. Select Veterinary Services link.
  5. Confirm that the dogs listed belong only to that customer.
  6. Go to the main log-in screen.
  7. Enter the username and password, both including special characters. This should be the login of an employee/administrator.
  8. Execute Data Interface stub within the test framework to confirm that ID and password are a valid pair.
  9. Select Veterinary Services link.
  10. Confirm that the list of dogs is a list of all dogs in the database.
  11. Go to the main log-in screen.
  12. Enter a bad user name and password.
  13. Execute Data Interface stub within the test framework to confirm that ID and password are not a valid pair.
  14. Confirm that the log-in screen presented an error message and stayed at the log-in screen.

Expected Results:

  • Successful login with username and matching password.
  • Able to log in with passwords with special characters.
  • Veterinary display screen presents the correct list of dogs in the database, from zero to many. If the user is a customer, the list contains only that customer's dogs; otherwise, the user is a staff member and has access to all dogs in the database.
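
To illustrate how a manual test case such as C1 might be translated into an automated check, the following hedged sketch models the Data Interface stub and the veterinary listing as simple C++ functions. All of the names (DataInterfaceStub, dogsVisibleTo, and the sample accounts and dogs) are assumptions made for illustration and are not taken from the actual dog grooming system in Appendix E.

  #include <cassert>
  #include <iostream>
  #include <string>
  #include <vector>
  using namespace std;

  // Hypothetical stand-in for the Data Interface stub: verifies an ID/password
  // pair and reports whether the account belongs to a customer or to staff.
  struct DataInterfaceStub
  {
      bool validate(const string& id, const string& pw)
      {
          return (id == "cust#1" && pw == "p@ss!") ||
                 (id == "admin$" && pw == "r00t!");
      }
      bool isCustomer(const string& id) { return id == "cust#1"; }
  };

  // Hypothetical stand-in for the Veterinary Services screen: customers see
  // only their own dogs, while staff see every dog in the database.
  vector<string> dogsVisibleTo(const string& id, DataInterfaceStub& dataIf)
  {
      vector<string> allDogs = {"Rex", "Fido", "Lassie"};
      if (dataIf.isCustomer(id))
          return vector<string>{"Rex"};    // only this customer's dog
      return allDogs;
  }

  int main()
  {
      DataInterfaceStub dataIf;

      // Steps 1-5: customer log-in (with special characters) sees only own dogs.
      assert(dataIf.validate("cust#1", "p@ss!"));
      assert(dogsVisibleTo("cust#1", dataIf).size() == 1);

      // Steps 6-10: employee/administrator log-in sees all dogs.
      assert(dataIf.validate("admin$", "r00t!"));
      assert(dogsVisibleTo("admin$", dataIf).size() == 3);

      // Steps 11-14: a bad ID/password pair is rejected.
      assert(!dataIf.validate("nobody", "wrong"));

      cout << "Test case C1 passed." << endl;
      return 0;
  }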

Test plans must be carefully created. A good test plan contains a test case for every facet of the system to be tested.

Some organizations purchase automated testing tools that assist the developer in testing the system. These are particularly useful for:

Success must be measured. The test plan should contain metrics that assess the success or failure of the system's components. Examples of viable metrics include:

TEST AUTOMATION

The usual practice in software development is that the software is written as quickly as possible and, once the application is done, it is tested and debugged. However, this is a costly and ineffective approach because the software testing process is difficult, time consuming, and resource intensive. With manual test strategies, this can be even more complicated and cumbersome. A better alternative is to perform unit testing that is independent of the rest of the code. During unit testing, developers compare the object design model with each object and sub-system. Errors detected at the unit level are much easier to fix; one only has to debug the code in that small unit. Unit testing is widely recognized as one of the most effective ways to ensure application quality. It is a laborious and tedious task, however. The workload for unit testing is tremendous, so performing it entirely by hand is practically impossible; hence the need for automated unit testing. Another good reason to automate unit testing is that manual unit testing runs the risk of human error [Aivazis 2000].

In addition to saving time and preventing human errors, automatic unit testing helps facilitate integration testing. After unit testing has removed errors in each sub-system, combinations of sub-systems are integrated into larger sub-systems and tested. When tests do not reveal new errors, additional sub-systems are added to the group, and another iteration of integration testing is performed. The re-execution of some subset of tests that have already been conducted is regression testing; it ensures that no errors are introduced as a result of adding new modules or modifying the software [Kolawa 2001].

As integration testing proceeds, the number of regression tests can grow very large. Therefore, it is inefficient and impractical to re-execute every test manually once a change has occurred. The use of automated capture/playback tools may prove useful in this case. Such tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
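
The heart of such automation can be illustrated with a minimal sketch: each unit test registers itself with a small harness, and the entire suite can be replayed after every change at essentially no additional manual cost. The harness and the two sample tests below are assumptions made for illustration, not a description of any commercial capture/playback tool.

  #include <iostream>
  #include <string>
  #include <vector>
  using namespace std;

  // A registered test: a name plus a function that returns true on success.
  struct Test
  {
      string name;
      bool (*run)();
  };

  vector<Test>& suite()
  {
      static vector<Test> s;
      return s;
  }

  void addTest(const string& name, bool (*run)())
  {
      suite().push_back(Test{name, run});
  }

  // Two sample tests; real ones would exercise the system's own functions.
  bool testAddition() { return 2 + 2 == 4; }
  bool testOrdering() { return string("abc") < string("abd"); }

  // Re-running the whole suite after every change is, in effect, regression
  // testing: it shows that new modules or modifications introduced no new errors.
  int main()
  {
      addTest("addition", testAddition);
      addTest("ordering", testOrdering);

      int failures = 0;
      for (const Test& t : suite())
      {
          bool ok = t.run();
          cout << (ok ? "PASS " : "FAIL ") << t.name << endl;
          if (!ok) ++failures;
      }
      cout << failures << " failure(s)." << endl;
      return failures == 0 ? 0 : 1;
  }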

Test automation can improve testers' productivity. Testers can apply one of several types of testing tools and techniques at various points of code integration. Some examples of automatic testing tools in the marketplace include:

WinRunner is probably one of the more popular tools in use today because it automates much of the painful process of testing. Used in conjunction with a series of test cases (see Appendix E, Section 5), it can automate a big chunk of the manual processes that constitute the bulk of testing. The WinRunner product records a particular business process by capturing the keystrokes a user makes (e.g., emulating the user actions of placing an order). The QA person can then directly edit the test script that WinRunner generates and add checkpoints and other validation criteria.

When done correctly, and with appropriate testing tools and strategies, automated software testing provides worthwhile benefits such as repeatability and significant time savings. This is true especially when the system moves into system test. Higher quality is also a result because less time is spent in tracking down test environmental variables and rewriting poorly written test cases [Raynor 1999].

Pettichord [2001] describes several principles that testers should adhere to in order to succeed with test automation. These principles include:

Testers must realize that test automation itself is a software development activity and thus it needs to adhere to standard software development practices. That is, test automation systems themselves must be tested and subjected to frequent review and improvement to make sure that they are indeed addressing the testing needs of the organization.

Because automating test scripts is part of the testing effort, good judgment is required in selecting appropriate tests to automate. Not everything can or should be automated. For example, overly complex tests are not worth automating; manual testing is still necessary in such cases. Zambelich [2002] provides guidelines for making automated testing cost-effective. He notes that automated testing is expensive and does not replace the need for manual testing or enable one to "downsize" a testing department; automated testing is an addition to the testing process. Some pundits claim that it can take between three and ten times as long (or longer) to develop, verify, and document an automated test case than to create and execute a manual test case. Zambelich indicates that this is especially true if one elects to use the "record/playback" feature (contained in most test tools) as the primary automated testing methodology. In fact, Zambelich says that record/playback is the least cost-effective method of automating test cases.

Automated testing can be made cost-effective, according to Zambelich, if some common sense is applied to the process:

Isenberg [1994] explains the requirements for success in automated software testing. To succeed, the following four interrelated components must work together and support one another:

  1. Automated testing system. It must be flexible and easy to update.
  2. Testing infrastructure. This includes a good bug tracking system, standard test case format, baseline test data, and comprehensive test plans.
  3. Software testing life cycle. This defines a set of phases outlining what test activities to do and when to do them. These phases are planning, analysis, design, construction, testing (initial test cycles, bug fixes, and retesting), final testing and implementation, and post implementation.
  4. Corporate support. Automation cannot succeed without the corporation's commitment to adopting and supporting repeatable processes.

Automated testing systems should have the ability to adjust and respond to unexpected changes to the software under test, which means that the testing systems will stay useful over time. Some of the practical features of automated software testing systems suggested by Isenberg [1994] include:

When automated testing tools are introduced, test engineers may face some difficulties. Project management discipline should be applied to plan the implementation of testing tools; without proper management and selection of the right tool for the job, automated test implementation will fail [Hendrickson 1998]. Dustin [1999] has accumulated a list of "Automated Testing Lessons Learned" from experience with real projects and test engineer feedback. Some include:

SUMMARY

Test engineers can enjoy productivity increases as the testing task becomes automated and a thorough test plan is implemented. Creating a good and comprehensive automated test system requires an additional investment of time and consideration, but it is cost-effective in the long run. More tests can be executed while the amount of tedious work on construction and validation of test cases is reduced.

Automated software testing is by no means a complete substitute for manual testing. That is, manual testing cannot be totally eliminated; it should always precede automated testing. In this way, the time and effort that will be saved from the use of automated testing can now be focused on more important testing areas.

Configuration management insists that we create and follow procedures for verification of a product's adherence to the specification from which it was derived. A combination of rigorously defined testing, documentation, and audit methodologies fulfills this requirement.

REFERENCES

Aivazis, M., "Automatic Unit Testing," Computer, 33(5), back cover, May 2000.

Bruegge, B. and A.H. Dutoit, Object-Oriented Software Engineering: Conquering Complex and Changing Systems, Prentice Hall, Upper Saddle River, NJ, 2000.

Dustin, E., "Lessons in Test Automation," STQE Magazine, September/October 1999, retrieved from http://www.stickyminds.com/pop_print.asp?ObjectId=1802&ObjectType=ARTCO

Hendrickson, E., The Difference between Test Automation Failure and Success, Quality Tree Software, August 1998, retrieved from http://www.qualitytree.com/feature/dbtasaf.pdf

Isenberg, H.M., "The Practical Organization of Automated Software Testing," Multi Level Verification Conference 95, December 1994, retrieved from http://www.automated-testing.com/PATfinal.htm

Kolawa, A., "Regression Testing at the Unit Level?," Computer, 34(2), back cover, February 2001.

McConnell, S., "Daily Build and Smoke Test," IEEE Software, 13(4), July 1996.

Pettichord, B., "Success with Test Automation," June 2001, retrieved from http://www.io.com/~wazmo/succpap.htm

Pressman, R.S., Software Engineering: A Practitioner's Approach, 5th ed., McGraw-Hill, Boston, MA, 2001.

Raynor, D.A., "Automated Software Testing," retrieved from http://www.trainers-direct.com/resources/articles/ProjectManagement/AutomatedSoftwareTestingRaynor.html

Whittaker, J.A., "What Is Software Testing? And Why Is It So Hard?," IEEE Software, January/February 2000, 70-79.

Zallar, K., "Automated Software Testing - A Perspective," retrieved from http://www.testingstuff.com/autotest.html

Zambelich, K., "Totally Data-Driven Automated Testing," 2002, retrieved from http://www.sqa-test.com/w_paper1.html
