The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities
There's almost an air of magic when you first see a modern remote software exploit deployed. It's amazing to think that a complex program, written by a team of experts and deployed around the world for more than a decade, can suddenly be co-opted by attackers for their own means. At first glance, it's easy to consider the process as some form of digital voodoo because it simply shouldn't be possible. Like any magic trick, however, this sense of wonder fades when you peek behind the curtain and see how it works. After all, software vulnerabilities are simply weaknesses in a system that attackers can leverage to their advantage.

In the context of software security, vulnerabilities are specific flaws or oversights in a piece of software that allow attackers to do something malicious: expose or alter sensitive information, disrupt or destroy a system, or take control of a computer system or program.

You're no doubt familiar with software bugs; they are errors, mistakes, or oversights in programs that result in unexpected and typically undesirable behavior. Almost every computer user has lost an important piece of work because of a software bug. In general, software vulnerabilities can be thought of as a subset of the larger phenomenon of software bugs. Security vulnerabilities are bugs that pack an extra hidden surprise: A malicious user can leverage them to launch attacks against the software and supporting systems. Almost all security vulnerabilities are software bugs, but only some software bugs turn out to be security vulnerabilities. A bug must have some security-relevant impact or properties to be considered a security issue; in other words, it has to allow attackers to do something they normally wouldn't be able to do. (This topic is revisited in later chapters, as it's a common mistake to mischaracterize a major security flaw as an innocuous bug.)

There's a common saying that security is a subset of reliability. This saying might not pass muster as a universal truth, but it does draw a useful comparison. A reliable program is one that's relatively free of software bugs: It rarely fails on users, and it handles exceptional conditions gracefully. It's written "defensively" so that it can handle uncertain execution environments and malformed inputs. A secure program is similar to a robust program: It can repel a focused attack by intruders who are attempting to manipulate its environment and input so that they can leverage it to achieve some nefarious end. Software security and reliability also share similar goals, in that they both necessitate development strategies that focus on exterminating software bugs.

Note
Although the comparison of security flaws to software bugs is useful, some vulnerabilities don't map so cleanly. For example, a program that allows you to edit a critical system file you shouldn't have access to might be operating completely correctly according to its specifications and design. So it probably wouldn't fall under most people's definition of a software bug, but it's definitely a security vulnerability.

The process of attacking a vulnerability in a program is called exploiting. Attackers might exploit a vulnerability by running the program in a clever way, altering or monitoring the program's environment while it runs, or, if the program is inherently insecure, simply using the program for its intended purpose. When attackers use an external program or script to perform an attack, this attacking program is often called an exploit or exploit script.
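The bug-versus-vulnerability distinction is easiest to see in code. The following C sketch is hypothetical (the function and names are not from the text): the unbounded strcpy() is a garden-variety coding bug, but because an attacker who controls the input can overwrite adjacent stack memory, it is also a security vulnerability.

#include <stdio.h>
#include <string.h>

/* The bug: no bounds check on the copy. A username longer than 31 bytes
   overflows buf and corrupts adjacent stack memory, exactly the kind of
   defect an attacker can leverage to take control of the program. */
void log_login(const char *username)
{
    char buf[32];
    strcpy(buf, username);
    printf("login attempt: %s\n", buf);
}

/* A defensively written version truncates instead of overflowing. */
void log_login_safe(const char *username)
{
    char buf[32];
    snprintf(buf, sizeof(buf), "%s", username);
    printf("login attempt: %s\n", buf);
}

int main(void)
{
    log_login_safe("alice");
    return 0;
}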
Security Policies

As mentioned, attackers can exploit a vulnerability to violate the security of a system. One useful way to conceptualize the "security of a system" is to think of a system's security as being defined by a security policy. From this perspective, a violation of a software system's security occurs when the system's security policy is violated.

Note
Matt Bishop, a computer science professor at the University of California, Davis, is an accomplished security researcher who has been researching and studying computer vulnerabilities for many years. Needless to say, he's put a lot of thought into computer security from a formal academic perspective as well as a technical perspective. If these topics interest you, check out his book, Computer Security: Art and Science (Addison-Wesley, 2003), and the resources at his home page: http://nob.cs.ucdavis.edu/~bishop/.
For a system composed of software, users, and resources, you have a security policy, which is simply a list of what's allowed and what's forbidden. This policy might state, for example, "Unauthenticated users are forbidden from using the calendar service on the staging machine." A problem that allows unauthenticated users to access the staging machine's calendar service would clearly violate the security policy.

Every software system can be considered to have a security policy. It might be a formal policy consisting of written documents, or it might be an informal loose collection of expectations that the software's users have about what constitutes reasonable behavior for that system. For most software systems, people usually understand what behavior constitutes a violation of security, even if it hasn't been stated explicitly. Therefore, the term "security policy" often means the user community's consensus on what system behavior is allowed and what system behavior is forbidden. This policy could take a few different forms, ranging from formally documented (and even technically enforced) rules to the informal expectations just described.
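As a concrete illustration of a policy statement enforced in code, here is a minimal C sketch (the service, structures, and names are hypothetical) of the calendar rule above: the check fails closed for any request that is not authenticated.

#include <stdbool.h>
#include <stdio.h>

struct session {
    bool authenticated;    /* set by a login routine elsewhere */
    const char *user;
};

/* Returns true only when the policy allows the request to proceed. */
bool calendar_request_allowed(const struct session *s)
{
    if (s == NULL || !s->authenticated)
        return false;      /* policy: no anonymous access to the calendar */
    return true;
}

int main(void)
{
    struct session anon  = { false, NULL };
    struct session alice = { true, "alice" };

    printf("anonymous allowed: %d\n", calendar_request_allowed(&anon));
    printf("alice allowed:     %d\n", calendar_request_allowed(&alice));
    return 0;
}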
Note
The Java Virtual Machine (JVM) and .NET Common Language Runtime (CLR) have varying degrees of code access security (CAS). CAS provides a means of extensively validating a package at both load time and runtime. These validations include the integrity of the bytecode, the software's originator, and the application of code access restrictions. The most obvious applications of these technologies include the sandbox environments for Java applets and .NET-managed browser controls.

Although CAS can be used as a platform for a rigidly formalized security model, some important caveats are associated with it. The first concern is that most developers don't thoroughly understand its application and function, so it's rarely leveraged in commercial software. The second concern is that the security provided by CAS depends entirely on the security of the underlying components. Both the Java VM and the .NET CLR have been victims of vulnerabilities that could allow an application to escape the virtual machine sandbox and run arbitrary code.
In practice, a software system's security policy is likely to be mostly informal and made up of people's expectations. However, it often borrows from formal documentation from the development process and references site and resource security policies. This definition of a system security policy helps clarify the concept of "system security." The bottom line is that security is in the eye of the beholder, and it boils down to end users' requirements and expectations.

Security Expectations
Considering the possible expectations people have about software security helps determine which issues they consider to be security violations. Security is often described as resting on three components: confidentiality, integrity, and availability. The following sections consider possible expectations for software security from the perspective of these cornerstones.

Confidentiality
Confidentiality requires that information be kept private. This includes any situation where software is expected to hide information or hide the existence of information.

Software systems often deal with data that contains secrets, ranging from nation- or state-level intelligence secrets to company trade secrets or even sensitive personal information. Businesses and other organizations have plenty of secrets residing in their software. Financial information is generally expected to be kept confidential. Information about plans and performance could have strategic importance and is potentially useful for an unlawful competitive advantage or for criminal activities, such as insider trading. So businesses expect that data to be kept confidential as well. Data involving business relationships, contracts, lawsuits, or any other sensitive content carries an expectation of confidentiality.

If a software system maintains information about people, expectations about the confidentiality of that data are often high. Because of privacy concerns, organizations and users expect a software system to carefully control who can view details related to people. If the information contains financial details or medical records, improper disclosure of the data might involve liability issues. Software is often expected to keep personal user information secret, such as personal files, e-mail, activity histories, and accounts and passwords.

In many types of software, the actual program code constitutes a secret. It could be a trade secret, such as code for evaluating a potential transaction in a commodities market or a new 3D graphics engine. Even if it's not a trade secret, it could still be sensitive, such as code for evaluating credit risks of potential loan applicants or the algorithm behind an online videogame's combat system.

Software is often expected to compartmentalize information and ensure that only authenticated parties are allowed to see information for which they're authorized. These requirements mean that software is often expected to use access control technology to authenticate users and to check their authorization when accessing data. Encryption is also used to maintain the confidentiality of data when it's transferred or stored.
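A minimal C sketch of that compartmentalization follows (all structures and names are hypothetical): the record is disclosed only to an authenticated user whom the authorization check accepts.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct record {
    const char *owner;       /* user the record belongs to */
    const char *contents;    /* the sensitive data itself */
};

struct user {
    const char *name;
    bool authenticated;
    bool is_auditor;         /* example of a role granted wider access */
};

/* Authorization check: only the owner or an auditor may view a record. */
static bool may_view(const struct user *u, const struct record *r)
{
    if (!u->authenticated)
        return false;
    return u->is_auditor || strcmp(u->name, r->owner) == 0;
}

static void show_record(const struct user *u, const struct record *r)
{
    if (!may_view(u, r)) {
        puts("access denied");
        return;
    }
    puts(r->contents);
}

int main(void)
{
    struct record medical = { "alice", "alice: blood type O-" };
    struct user alice   = { "alice", true, false };
    struct user mallory = { "mallory", true, false };

    show_record(&alice, &medical);     /* permitted: she owns the record */
    show_record(&mallory, &medical);   /* denied: not owner, not auditor */
    return 0;
}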
Integrity

Integrity is the trustworthiness and correctness of data. It refers to expectations that people have about software's capability to prevent data from being altered. Integrity refers not only to the contents of a piece of data, but also to the source of that data. Software can maintain integrity by preventing unauthorized changes to data sources. Other software might detect changes to data integrity by making note of a change in a piece of data or an alteration of the data's origins.

Software integrity often involves compartmentalization of information, in which the software uses access control technology to authenticate users and check their authorization before they're allowed to modify data. Authentication is also an important component of software that's expected to preserve the integrity of the data's source because it tells the software definitively who the user is.

Typically, users hold similar expectations for integrity as they do for confidentiality. Any issue that allows attackers to modify information they wouldn't otherwise be permitted to modify is considered a security flaw. Any issue that allows users to masquerade as other users and manipulate data is also considered a breach of data integrity. Software vulnerabilities can be particularly devastating in breaches of integrity, as the modification of data can often be leveraged to further an attacker's access into a software system and the computing resources that host the software.
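One common way software detects alteration of data or its source is a keyed message authentication code. The following C sketch uses OpenSSL's HMAC() and CRYPTO_memcmp() (the library calls are real; the message format and key handling are simplifying assumptions): the receiver recomputes the tag over the data and compares it to the tag that accompanied it, so any tampering or use of the wrong key makes verification fail.

#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdio.h>

/* Returns 1 if 'tag' matches HMAC-SHA256(key, data), 0 otherwise. */
int verify_integrity(const unsigned char *key, int key_len,
                     const unsigned char *data, size_t data_len,
                     const unsigned char *tag, unsigned int tag_len)
{
    unsigned char computed[EVP_MAX_MD_SIZE];
    unsigned int computed_len = 0;

    if (HMAC(EVP_sha256(), key, key_len, data, data_len,
             computed, &computed_len) == NULL)
        return 0;
    if (tag_len != computed_len)
        return 0;
    /* Constant-time comparison avoids leaking how many bytes matched. */
    return CRYPTO_memcmp(tag, computed, computed_len) == 0;
}

int main(void)
{
    const unsigned char key[] = "shared-secret-key";
    const unsigned char msg[] = "transfer $10 to account 42";
    unsigned char tag[EVP_MAX_MD_SIZE];
    unsigned int tag_len = 0;

    /* Sender side: compute the tag that travels with the message. */
    HMAC(EVP_sha256(), key, sizeof(key) - 1, msg, sizeof(msg) - 1,
         tag, &tag_len);

    /* Receiver side: any change to msg or tag makes verification fail. */
    printf("intact: %d\n",
           verify_integrity(key, sizeof(key) - 1, msg, sizeof(msg) - 1,
                            tag, tag_len));
    return 0;
}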
Availability

Availability is the capability to use information and resources. Generally, it refers to expectations users have about a system's availability and its resilience to denial-of-service (DoS) attacks. An issue that allows users to easily crash or disrupt a piece of software would likely be considered a vulnerability that violates users' expectations of availability. This issue generally includes attacks that use specific inputs or environmental disruptions to disable a program, as well as attacks centered on exhausting software system resources, such as CPU, disk, or network bandwidth.
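A classic availability failure is letting attacker-controlled input dictate resource consumption. The C sketch below (hypothetical names, illustrative limit) shows an allocation sized by an untrusted length field and a bounded alternative that rejects implausible claims before committing memory.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_MESSAGE_LEN (1u << 20)   /* 1 MB cap, chosen only for illustration */

/* Vulnerable pattern: trusts a 32-bit length taken from the network. A
   stream of maximum-length claims can exhaust memory and deny service. */
void *read_message_unbounded(uint32_t claimed_len)
{
    return malloc(claimed_len);
}

/* Defensive version: reject implausible claims before allocating. */
void *read_message_bounded(uint32_t claimed_len)
{
    if (claimed_len == 0 || claimed_len > MAX_MESSAGE_LEN)
        return NULL;
    return malloc(claimed_len);
}

int main(void)
{
    void *p = read_message_bounded(0xFFFFFFFFu);   /* rejected: returns NULL */
    printf("oversized request %s\n", p ? "accepted" : "rejected");
    free(p);
    return 0;
}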