The Rational Unified Process: An Introduction (3rd Edition)

THE SEQUENTIAL PROCESS

It has become fashionable to blame many problems and failures in software development on the sequential, or waterfall, process depicted in Figure 4-1. This is rather surprising because at first this method seems like a reasonable approach to system development.

Figure 4-1. The sequential process

A Reasonable Approach

Many engineering problems are solved using a sequential process, which typically goes through the following five steps:

  1. Completely understand the problem to be solved, its requirements, and its constraints. Capture them in writing and get all interested parties to agree that this is what they need to achieve.

  2. Design a solution that satisfies all requirements and constraints. Examine this design carefully and make sure that all interested parties agree that it is the right solution.

  3. Implement the solution using your best engineering techniques.

  4. Verify that the implementation satisfies the stated requirements.

  5. Deliver. Problem solved!

That is how skyscrapers and bridges are built. It's a rational way to proceed but only because the problem domain is relatively well known; engineers can draw on hundreds of years of experimentation in design and construction.

By contrast, software engineers have had only a few decades to explore their field. Software developers worked very hard, particularly in the seventies and eighties, to accumulate experimental results in software design and construction. In 1980, I would have sworn that the sequential process was the one and only reasonable approach.

If the sequential process is ideal, however, why aren't the projects that use it more successful? There are many reasons.

Let us review two fundamentally wrong assumptions that often hinder the success of software projects.

Wrong Assumption 1: Requirements Will Be Frozen

Notice that in the description of the sequential process we assume in step 1 that we can capture the entire problem at the beginning. We assume we can nail down all the requirements in writing in an unambiguous fashion and begin the project with a stable foundation. Despite all our efforts, though, this almost always proves to be impossible. Requirements will change, and we must accept this fact. Unless we are solving a trivial problem, new or different requirements will appear, and they will appear for many reasons.

Wrong Assumption 2: We Can Get the Design Right on Paper before Proceeding

The second step of the sequential process assumes that we can confirm that our design is the right solution to the problem. By "right" we imply all the obvious qualities: correctness, efficiency, feasibility, and so on. With complete requirements tracing, formal derivation methods, automated proof, generator techniques, and design simulation, some of these qualities can be achieved. However, few of these techniques are readily available to practitioners, and many of them require that you begin with a formal definition of the problem. You can accumulate pages and pages of design documentation and hundreds of blueprints and spend weeks in reviews, only to discover, late in the process, that the design has major flaws that cause serious breakdowns.

Software engineering has not reached the level of other engineering disciplines (and perhaps it never will) because the underlying "theories" are weak and poorly understood, and the heuristics are crude. Software engineering may be misnamed. At various times it more closely resembles a branch of psychology, philosophy, or art than engineering. Relatively straightforward laws of physics underlie the design of a bridge, but there is no strict equivalent in software design. Software is "soft" in this respect.

Bringing Risks into the Picture

The sequential, or waterfall, process does work. It has worked fine for me on small projects ranging from a few weeks to a few months, on projects in which we could clearly anticipate what would happen, and on projects in which all hard aspects were well understood. For projects having little or no novelty, you can develop a plan and execute it with little or no surprise. If the current project is somewhat like the one you completed last year, and the one the year before, and if you use the same people, the same tools, and the same design, the sequential approach will work well.

The sequential process breaks down when you tackle projects having a significant level of novelty, unknowns, and risks. You cannot anticipate the difficulties you may encounter, let alone how you will counter them. The only thing you can do is to build some slack into the schedule and cross your fingers.

The absence of fundamental "laws of software" and the pace at which software evolves make it a risky domain. Techniques for reinforcing concrete have not changed dramatically since my grandfather used them in the early twenties in an engineering bureau. Software tools, techniques, and products, on the other hand, have a lifetime of a few years at best. So every time we try to build a system that is a bit more complicated, somewhat larger, or a little more challenging, we are in dangerous and risky territory, and we must take this into account.

That's why we bring risk analysis into the picture.

Stretching the Time Scale

If you stretch what works for a three-month project to fit a three-year project, you expose the project not only to the changing contexts we have discussed but also to other subtle effects related to the people involved. Software developers who know that they will see tangible results within the next two to three months can remain well focused on the real outcome. Very quickly, they will get feedback on the quality of their work. If small mistakes are discovered along the way, the developers won't have to go very far back in time to correct them.

But picture developers in the middle of the design phase of a three-year project. The target is to finish the design within four months. In a sequential process, the developers may not even be around to see the final product up and running. Progress is measured in pages or diagrams and not in operational features. There is nothing tangible, nothing to get the adrenaline flowing.

There is little feedback on the quality of the current activity, because defects will be found later, during integration or test, perhaps 18 months from now. The developers have few opportunities to improve the way they work. Moreover, strange things discovered in the requirements text mean that developers must revisit discussions and decisions made months ago. Is it any wonder that they have a hard time staying motivated? The original protagonists are no longer in the project, and the contract with the customer is as hard and inflexible as a rock.

The developers have only one shot at each kind of activity, with little opportunity to learn from their mistakes. You have one shot at design, and it had better be good. You say you've never designed a system like this? Too bad! You have one shot at coding, and it had better be good. You say this is a new programming language? Well, you can work longer hours to learn its new features. There's only one shot at testing, and it had better be a no-fault run. You say this is a new system and no one really knows how it's supposed to work? Well, you'll figure it out. If the project introduces new techniques or tools or new people, the sequential process gives you no latitude for learning and improvement.

Pushing Paperwork on the Shelves

In the sequential process, the goal of each step except the last one is to produce and complete an intermediate artifact (usually a paper document) that is reviewed, approved, frozen, and then used as the starting point for the next step. In practice, sequential processes place an excessive emphasis on the production and freezing of documents. Some limited amount of feedback to the preceding step is tolerated, but feedback on the results of earlier steps is seen as disruptive. This is related to the reluctance to change requirements and to the loss of focus on the final product that is often seen during long projects.

Volume-Based versus Time-Based Scheduling

Often, timeliness is the most important factor in the success of a software project. In many industries, delivery of a product on time and with a short turnaround for new features is far more important than delivery of a complete, full-featured, perfect system. To achieve timeliness, you must be able to adjust the contents dynamically by dropping or postponing some features to deliver incremental value on time. With a linear approach, you do not gain much on the overall schedule if you decide in the middle of the implementation to drop feature X. You have already expended the time and effort to specify, design, and code the feature. That's why this model isn't suitable when a company wants to work with schedules that are time-based (for example, in three months we can do the first three items on your list, and three months later we'll have the next two, and so on) and not volume-based (it will take us nine months to do everything that you want).
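
To make the contrast concrete, here is a minimal sketch of the two scheduling styles. The feature names, effort estimates, and 12-week time box are hypothetical, invented for this illustration rather than taken from the RUP; the point is only that a time-based plan fixes the date and trims the content, while a volume-based plan fixes the content and lets the date float.

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str            # hypothetical feature, for illustration only
        effort_weeks: float  # rough effort estimate
        priority: int        # 1 = most important to the customer

    backlog = [
        Feature("secure login", 3, 1),
        Feature("order entry", 5, 2),
        Feature("reporting", 4, 3),
        Feature("audit trail", 2, 4),
        Feature("data export", 3, 5),
    ]

    # Volume-based plan: commit to the whole list, then derive the end date.
    total = sum(f.effort_weeks for f in backlog)
    print(f"Volume-based: all {len(backlog)} features in about {total:.0f} weeks")

    # Time-based plan: fix the time box, take the top-priority features that fit,
    # and postpone the rest to a later increment.
    time_box = 12.0
    planned, postponed, used = [], [], 0.0
    for f in sorted(backlog, key=lambda f: f.priority):
        if used + f.effort_weeks <= time_box:
            planned.append(f.name)
            used += f.effort_weeks
        else:
            postponed.append(f.name)
    print(f"Time-based ({time_box:.0f}-week box): deliver {planned}, postpone {postponed}")

In the time-based plan, the two lowest-priority features simply slip to a later increment; in a linear, volume-based plan, dropping them mid-implementation saves little, because much of their specification, design, and coding effort has already been spent.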

For these reasons and a few others that we will cover later, software organizations have tried another approach.
