Design for Trustworthy Software: Tools, Techniques, and Methodology of Developing Robust Software

In the year 2000, as they had at the beginning of each decade for the previous 30 years, advocates of object-oriented analysis and design declared, "This is the decade of objects." Although we hope this proves true in the current decade, we can at least say that this is certainly the century of objects. One of the authors witnessed the birth of object-oriented computing. He must confess that even though he was already a programming veteran at the time, he was not completely sure what was being delivered at this blessed event. The inventors were not sure either, but as the baby grew, it became ever stronger and more recognizable as the revolution in computer programming technology it has become today.

Sidebar 14.1: The Birth of Object-Oriented Programming

In 1961, as scientific consultant for Sperry Univac International's Europe, Middle East, and Africa Division, one of the authors helped arrange a $50,000 discount on a Univac 1107 computer to be delivered in late 1962 to the Norwegian Defense Research Establishment (NDRE). In return for the discount, Kristen Nygaard and Ole-Johan Dahl of the NDRE were to license to Univac the Simula event-oriented simulation package they had been developing in the ALGOL 60 high-level programming language. They had done the basic development and some limited small-scale testing on the small Danish GIER computer, which supported the best ALGOL 60 compiler available at the time. The initial application for the simulation package was to be management of Norway's forest reserves. Univac had gone to Europe with the large-scale Univac Scientific 1107 marketing program in 1961, together with compilers for the COBOL, FORTRAN, and ALGOL programming languages. Shortly thereafter, the vendor developing the Univac ALGOL 60 compiler announced that it would not be able to deliver this product, which was critical to marketing an American mainframe in Europe at that time.

Fortunately, another similar contract with a software license buy-back had been negotiated with Case-Western Reserve University, but the software to be delivered had not yet been specified. Upon hearing the bad news about the cancellation of the initial ALGOL package, we immediately flew to Sperry Rand Univac headquarters in New York to get permission to ask Case-Western Reserve to build an ALGOL 60 compiler to fulfill their contract requirement. On arrival in Cleveland the next day, we were pleased to meet an academic development team headed by the world's best programmer, Donald Knuth, supported by undergraduate research assistants Nick Hubacker and Joe Speroni. They said they could deliver the desired compiler in the amazingly short time of six months. The author was stunned by this seemingly impossible time estimate but equally amazed by the team's abilities. When the compiler was ready six months later, he flew from Sperry Rand International headquarters in Lausanne, Switzerland, to Cleveland, Ohio, to bring the compiler back to Paris. Univac had time available every night on the University of Paris's Univac 1107 at the Faculty of Sciences center in Orsay, a Paris suburb. Nygaard and Dahl planned to be there to make the final checkout runs for Simula when the compiler arrived.

Unfortunately, the "chicken war" between the United States and the European Economic Community (EEC) was on. The EEC, mainly France, had objected to U.S. chicken producers exporting "factory-grown" frozen chickens to Europe and had limited imports to "processed chickens" only. With typical Yankee ingenuity, the American producers had started putting a dollop of paprika on each chicken before shrink-wrapping and freezing it and then labeling it as "processed chicken." The French were furious, but they had not been specific about what they meant by processed chicken, so the chickens had to be accepted as processed and were therefore legal imports. French customs officers responded by making entry into France difficult for Americans, dumping the contents of their suitcases on the counter or floor when they said they had nothing to declare upon entry. The author knew he would never get a computer magnetic tape reel with the new Univac 1107 ALGOL 60 compiler on it past this gauntlet. So he asked the Case-Western Reserve team to punch out the compiler program code in binary on common 80-column punched cards. These cards would likely be perceived as having no commercial value, because they were no longer reusable. This turned out to be five boxes of 2,000 cards each, which filled a large suitcase.

Upon arriving at the Paris Orly airport at 10 p.m. with a small bag of clothing and a large suitcase full of cards, the author, who fortunately spoke fluent French, was confronted by a surly French customs officer. The card file was declared as a compiler for the University of Paris Univac computer. The agent said that the contents of the card file were information, and information was taxed when entering or leaving France. The author offered to pay the tax. The inspector went on to suggest that perhaps the contents were a filthy novel that would not be allowed entry in any case. The author observed that filthy novels were exported from France to the United States but never the reverse, to which the agent agreed, with some amusement. The inspector then took about a half-inch of the binary cards from one box and offered to send them to the Ministry of Information for tax evaluation. The author held one of the lacy binary cards up to the light and told the agent that the information was in the holes, and since a hole is nothing at all, how could the agent possibly tax him for something that did not exist? This peculiar Gallic logic prevailed, and the officer told the author to "Get out of here" ("Allez, Monsieur") without even dumping his clothes on the floor.

The author rushed to Orsay by taxi, arriving in the computing center at midnight, where Nygaard and Dahl were nervously pacing. The new ALGOL 60 compiler was readily installed, and Simula, which had been written completely in ALGOL as its meta-language, was implemented and worked the first time. What a pleasure to be a facilitator for five of the world's best programmers: three from Cleveland and two from Oslo. Simula became the world's first object-oriented programming language. Years later, Nygaard and Dahl were knighted by the King of Norway and were acknowledged by the international computing community as the inventors of OOP. The author had the privilege of sponsoring Dr. Nygaard on a speaking tour of the United States in 2000, a year before he died at the age of 73. Dr. Dahl died six months later.

The genius of Simula was turning each simulated event into what we know today as an object. The events had coded methods and hidden data, they were invoked by messages from other objects, they were polymorphic, and they had inheritance properties. The idea of generalizing them beyond a simulation package was readily accepted. But as with every other new software technology, there is a 20-to-30-year gestation period from conception in the research laboratory to emergence in the marketplace. Naturally, much has been written about object-oriented analysis and development, which we will not attempt to review here. Our interest is the importance of the object revolution in the ability to deploy objects as components, and its influence on software quality, especially reliability. We will focus rather narrowly on the object-oriented analysis, design, and programming (OOADP) process and how it contributes to our two stated goals in tension. We'll note pitfalls and problems that any new technology and its use may engender. We'll also describe the promise of object frameworks with reference to the most ambitious framework attempted so far: IBM's SanFrancisco™ project. One of the authors was the architect of an effort to recast a business system written in COBOL and C into Java using this framework. He can testify to the ability of an object framework to deliver quality software in much less time, at far less cost, and with a high degree of object componentization and code reusability (see Sidebar 14.2).

Objects are computational modules that have signatures, attributes or properties, and behaviors or methods that carry out computing tasks on data. They communicate with each other by sending and receiving messages. Each object, which is an operational unit of computation, is an instance of a blueprint from which it was constructed, called a class. The classes for an entire application are arranged in a hierarchy or tree-shaped blueprint "sheaf" for the whole application. The analyst specifying the functions for a new business (or any other) computer application builds use cases (as described in Chapter 1) for each function that the ultimate user of the application expects it to perform. From these use cases, what will become objects are defined as classes, or abstract program blueprints. The classes are then organized into a hierarchy so that subclasses can inherit properties and methods from superclasses. This minimizes the number of different program pieces that must be written and checked out to complete the application. This feature of OOP maximizes reuse, which tends to resolve the tension between the conflicting software development goals of low time and cost on the one hand and high quality on the other. Reuse is further enhanced by an object characteristic called polymorphism: an object's ability to modify its behavior based on the data type in the message that invokes or calls it to perform one of its behaviors. For example, in Java, if the programmer calls the println method with a character as data, it will print the character; with an integer, it will print the integer; with a floating-point number, it will print a decimal; and with a string, it will print the string of characters. Any object constructed from its class can inherit a method from a superclass and then override that method to make it specific to its own needs. Thus, reusability is not like making identical cookies with a cookie cutter. Each object can customize the attributes and methods it chooses to inherit from the hierarchy above it.
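To make these two reuse mechanisms concrete, here is a minimal Java sketch. The Invoice and DiscountedInvoice classes are our own illustrative names, not from any particular library; the println calls show the overloaded behavior just described, and the subclass shows a method inherited from a superclass and overridden.

    public class ReuseDemo {

        // A simple superclass. Subclasses inherit describe() unchanged.
        static class Invoice {
            double total() { return 100.0; }
            String describe() { return "Invoice totaling " + total(); }
        }

        // The subclass reuses describe() but overrides total() to
        // specialize the inherited behavior to its own needs.
        static class DiscountedInvoice extends Invoice {
            @Override
            double total() { return super.total() * 0.9; }
        }

        public static void main(String[] args) {
            // println is overloaded: the argument's type selects the behavior.
            System.out.println('A');        // prints the character
            System.out.println(42);         // prints the integer
            System.out.println(3.14159);    // prints the decimal
            System.out.println("a string"); // prints the string of characters

            Invoice plain = new Invoice();
            Invoice discounted = new DiscountedInvoice();
            System.out.println(plain.describe());      // Invoice totaling 100.0
            System.out.println(discounted.describe()); // Invoice totaling 90.0
        }
    }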

Having organized the objects defined from the analyst's use cases into a hierarchy, the software architect must then describe how they interact, again based on the use-case analysis. Having defined the interactions, the developer may now describe operations on objects, which are defined by the flow of the intended application. Each of these steps is dependent on the previous step, but as the process goes on, a high degree of iteration is required, because new discoveries at each level require revisiting previous levels. It has been noted that because water does not flow uphill, the spiral model soon replaced the waterfall model for OOADP.[1] If you're unfamiliar with this process, you might want to refer to Ivar Jacobson's book Object-Oriented Software Engineering, a thorough but very accessible reference.[2]

Lest you think that OOADP is the elixir that cures all software development ills, we want to recommend another book, Pitfalls of Object-Oriented Development, by Bruce Webster.[3] It describes the hazards that accompany the new technology for those who have not yet ventured far beyond their procedural program development heartland. This book is not at all discouraging, and it's quite slender compared to all the other OOP books on our shelves.

In our opinion, the best research project exploring the limits of OOADP technology was the IBM SanFrancisco™ (SF) project, which fortunately was well documented for posterity. Sadly, it was not intended to be a research project, but a new-technology middleware approach to enterprise business application development. It was to be designed in the marketplace by the software vendors known as IBM Partners in Development, with IBM funding and facilitation. A few successful deployments of the system were made before IBM abandoned the project, or, more accurately, folded it into its WebSphere™ business components product set.

SanFrancisco™ was a type of OOP middleware known as a framework, defined as "a set of cooperating object classes that make up a reusable design for a specific type of software application. Such a framework is typically customized to a particular application by creating application-specific subclasses of the abstract classes in the framework."[4] Figure 14.1 shows the amazing scale of SanFrancisco™. The SF layers lie between the applications to be developed and the hardware they will eventually run on. SF was initially designed to operate on the Windows NT Server, AIX, AS/400, and HP-UX platforms.[5] All of these hardware platforms and their operating systems supported the Java Virtual Machine (JVM) and the Java libraries; SF was a Java-oriented middleware system. But Java wrapper technology and the Native API allowed developers to wrap COBOL programs and to compile and link C programs, encapsulating working code modules and components as quasi-Java components in a Java-based application or system as it was being developed. SF was divided into three layers: Foundation, Common Business Objects, and Core Business Processes. Of the "towers," or Core Business Processes, only General Ledger, Warehouse Management, and Order Management were delivered in the product's first release. In Sidebar 14.2 we report our own experience with the GL tower. Note from the figure that an application can be built on the Foundation layer, on the Common Business Objects layer, or on a tower or towers of the Core Business Processes layer. Whichever layer they are based on, all SF applications are interoperable via the Foundation layer, which provides the fundamental infrastructure for any SF-based application suite or system.
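The programming model this definition implies can be sketched in a few lines of Java. The class names below are purely illustrative, not the actual SanFrancisco™ API: the framework supplies an abstract class that fixes the behavior common to every application, and the application developer customizes it by writing a concrete subclass.

    // Hypothetical framework-style abstract class; the names are
    // illustrative, not real SanFrancisco classes.
    abstract class LedgerAccount {
        private double balance;

        // Behavior common to all applications is fixed by the framework...
        public final void post(double amount) {
            balance += amount;
            onPosted(amount);
        }

        public double balance() { return balance; }

        // ...while application-specific behavior is deferred to subclasses.
        protected abstract void onPosted(double amount);
    }

    // The application customizes the framework by subclassing.
    class AuditedAccount extends LedgerAccount {
        @Override
        protected void onPosted(double amount) {
            System.out.println("Posted " + amount + "; new balance " + balance());
        }

        public static void main(String[] args) {
            LedgerAccount account = new AuditedAccount();
            account.post(250.00);   // prints "Posted 250.0; new balance 250.0"
        }
    }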

Figure 14.1. The SanFrancisco™ Architecture[6]

P. Monday, J. Carey, and M. Dangler, SanFrancisco Component Framework: An Introduction, p. 23, © 1999 by Paul Monday, James Carey, Mary Dangler. Reprinted by permission of Pearson Education, Inc. All rights reserved.

Initially, the Foundation was based on CORBA and CORBA Services. However, it did not include a CORBA object request broker (ORB), simply because it was 100% Java-based, and the ORB for Java is the remote method invocation (RMI) API.[7] The Foundation's function was to support distributed objects, concurrent access to objects, and persistent or database storage of objects. It also contained the basic services and defined the SanFrancisco Programming Model (SFPM). The services provided were naming, notification, query, and base classes.
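As a minimal sketch of what it means for RMI to play the ORB role (the GeneralLedger interface and the other names below are hypothetical, not part of the SanFrancisco™ Foundation), a distributed object is declared through a remote interface, exported, and then bound in a naming registry where clients look it up and invoke it as if it were local:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface; every remote method declares
    // RemoteException because the call may fail across the network.
    interface GeneralLedger extends Remote {
        double balanceOf(String account) throws RemoteException;
    }

    // Server-side implementation, exported below as a distributed object.
    class GeneralLedgerImpl implements GeneralLedger {
        public double balanceOf(String account) { return 0.0; }
    }

    class LedgerServer {
        public static void main(String[] args) throws Exception {
            GeneralLedger stub = (GeneralLedger)
                UnicastRemoteObject.exportObject(new GeneralLedgerImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("ledger", stub);  // naming service entry
            // A client elsewhere would then call:
            //   GeneralLedger gl = (GeneralLedger)
            //       LocateRegistry.getRegistry("somehost", 1099).lookup("ledger");
            //   double cash = gl.balanceOf("1000-Cash");
        }
    }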

The Common Business Objects (CBO) layer consists of general business objects, financial business objects, and generalized mechanisms that are shared by all business applications. General business objects include company, currency, and customer objects. Financial objects include bank, bank account, invoicing, and financial calendar objects, among others. The generalized mechanisms comprise the 16 basic patterns defined in SF, among them commands, key, cached balances, policies, and keyed attribute retrieval.[8]
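Of these mechanisms, the command is the easiest to picture. The sketch below is our own illustration of the general Command idea in Java, not the SanFrancisco™ interfaces themselves: an operation on business objects is packaged as an object, so it can be queued, logged, or shipped to a server for execution.

    // Illustrative Command pattern; not the actual SF classes.
    interface Command {
        void execute();
    }

    class PostInvoiceCommand implements Command {
        private final String customer;
        private final double amount;

        PostInvoiceCommand(String customer, double amount) {
            this.customer = customer;
            this.amount = amount;
        }

        // The whole operation travels inside this one object.
        public void execute() {
            System.out.println("Posting " + amount + " against " + customer);
        }
    }

    class CommandDemo {
        public static void main(String[] args) {
            Command cmd = new PostInvoiceCommand("ACME", 250.00);
            cmd.execute();   // "Posting 250.0 against ACME"
        }
    }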

The initial Core Business Processes (CBP) were GL, Order Management, and Warehouse Management. Accounts Receivable (AR) and Accounts Payable (AP) were under consideration; in fact, at least one Partner in Development firm had a proposal on IBM's table to develop these middleware towers. Figure 14.2 illustrates three types of applications built on SF middleware. A business's financial applications would be built on the financial towers of the CBP. A nonfinancial application, such as insurance or transportation, would be built on the CBO. A business function not central to the enterprise, such as patent portfolio management, would be built directly on the Foundation. In our opinion, SF's Foundation layer was its true technical genius, but when the project was folded into IBM's Software Division, marketing force trumped technical finesse, and the Foundation was replaced by Sun Microsystems' Enterprise JavaBeans (EJB). As soon as SF's uniqueness and performance capability were compromised, it was "parted out" like an exotic automobile missing its engine, and the parts were shelved at IBM's WebSphere™ operation.

Figure 14.2. Building on IBM SanFrancisco™[9]

P. Monday, J. Carey, and M. Dangler, SanFrancisco Component Framework: An Introduction, p. 24, © 1999 by Paul Monday, James Carey, Mary Dangler. Reprinted by permission of Pearson Education, Inc. All rights reserved.

Sidebar 14.2: The Power of Java Middleware

When one of the authors was Chief Technology Officer of a tier-one business applications software vendor, he was invited to join the IBM SanFrancisco Technical Advisory Group as a representative of his employer. The firm was considering recasting its proprietary 4GL-based applications into object technology using C++ or Java. Naturally, the possibility of using SanFrancisco™ was attractive, because IBM's object framework was nearing completion after four years of development. The firm became the first licensee of the product. As a proof of concept, or evaluation, it opted to recast its Version 5.0 business software into Java objects using the IBM framework technology. At that time, the 4GL-based Version 7.2 was being released and Version 8.0 was in planning. Version 5.0 had just gone off support as obsolete because it did not support multiple currencies, electronic data interchange, or Internet clients. Because these capabilities were in the Foundation of the SF framework, they would be "free" and comparable to the same features that had been carefully implemented in Version 6.0. The first module or application to be recast was the General Ledger. Following a use-case analysis and design using SF, we came up with a time estimate of six months for five programmers, or 2.5 person-years. When the plan was presented to senior management for go-ahead approval, it was noted that the last time GL had been so reprogrammed, from COBOL to RPG II for the AS/400, it took 20 people one year. Naturally, the question arose as to how we could do this at one-eighth the cost and in half the time using Java. The answer was simple:

  • Programming in Java was twice as efficient and productive as programming in RPG II.

  • An object class library, particularly a framework, was twice as effective as a COBOL or RPG library.

  • The value proposition of IBM's SF was that half the code shipped on any business application developed using it would come from the six-CD-ROM SF middleware distribution, as opposed to being manually coded by the project team.

Because 2 × 2 × 2 = 8, and 20 person-years divided by 8 is 2.5 person-years, we should have been close to our estimate. This blithe arithmetic was greeted with considerable incredulity, but we met our goals over the ensuing six months. What was most remarkable, and is generally a beneficial side effect of OOADP, is that the application was developed in vertical "slices" rather than horizontal "layers." When we were half done, we could show management half of the final functionality (such as forms) working, because the functions ran all the way from GUI data entry to the underlying database and back through the computation to the output screen or report. This would be no big deal for recasting a working application into a different language, even from procedural languages (COBOL and C) to objects (Java). But for the development of a new application under the careful oversight of the buyers and/or future end users, it is a tremendous advantage. Not only can you meet users' expectations for functionality and usability, but you also reduce development time and cost, because specification misunderstandings can be corrected by the very effective WYSIWYG process. The users will get what they see, and if that isn't what they want, now is the time to fix it. This is one of the most important features of object-oriented programming for making sure that form really does follow function.
