Operations Security
Overview
The Operations Security domain of Information Systems Security contains many elements that are important for a CISSP candidate to remember. In this domain we will describe the controls that a computing environment needs to ensure the three pillars of information security: Confidentiality, Integrity, and Availability (C.I.A.). Examples of these elements are controlling the separation of job functions, controlling the hardware and media that are used, and controlling the exploitation of common I/O errors.
Operations Security can be described as the controls over the hardware in a computing facility, over the data media used in a facility, and over the operators using these resources in a facility.
We will approach this material from the three following directions:
- Controls and Protections. We will describe the categories of operational controls needed to ensure C.I.A.
- Monitoring and Auditing. We will describe the need for monitoring and auditing these controls.
- Threats and Vulnerabilities. We will discuss threats and violations that are applicable to the Operations domain.
Operations Security Concepts
The term operations security refers to the act of understanding the threats to and vulnerabilities of computer operations in order to routinely support operational activities that enable computer systems to function correctly. The term also refers to the implementation of security controls for normal transaction processing, system administration tasks, and critical external support operations. These controls can include resolving software or hardware problems along with the proper maintenance of auditing and monitoring processes.
Triples
Like the other domains, the Operations Security domain is concerned with triples: threats, vulnerabilities, and assets. We will now look at what constitutes a triple in the Operations Security domain:
- Threat. A threat in the Operations Security domain can be defined as the presence of any potential event that could cause harm by violating security. An example of an operations threat is an operator’s abuse of privileges that violates confidentiality.
- Vulnerability. A vulnerability is defined as a weakness in a system that enables security to be violated. An example of an operations vulnerability is a weak implementation of the separation of duties.
- Asset. An asset is considered anything that is a computing resource or ability, such as hardware, software, data, and personnel.
C.I.A.
The following are the effects of operations controls on C.I.A.:
- Confidentiality. Operations controls affect the sensitivity and secrecy of the information.
- Integrity. How well the operations controls are implemented directly affects the data’s accuracy and authenticity.
- Availability. As in the Physical Security domain (see Chapter 10), these controls affect the organization’s level of fault tolerance and its capability to recover from failure.
Controls and Protections
The Operations Security domain is concerned with the controls that are used to protect hardware, software, and media resources from the following:
- Threats in an operating environment
- Internal or external intruders
- Operators who are inappropriately accessing resources
A CISSP candidate should know what resources to protect, how privileges should be restricted, and what controls to implement.
In addition, we will also discuss the following two critical aspects of operations controls:
- Resource protection, which includes hardware control
- Privileged-entity control
Categories of Controls
The following are the major categories of operations security controls:
- Preventative Controls. In the Operations Security domain, preventative controls are designed to achieve two things: to lower the amount and impact of unintentional errors that are entering the system and to prevent unauthorized intruders from internally or externally accessing the system. An example of these controls may be prenumbered forms or a data validation and review procedure to prevent duplications.
- Detective Controls. Detective controls are used to detect an error once it has occurred. Unlike preventative controls, these controls operate after the fact and can be used to track an unauthorized transaction for prosecution or to lessen an error’s impact on the system by identifying it quickly. An example of this type of control is an audit trail.
- Corrective (or Recovery) Controls. Corrective controls are implemented to help mitigate the impact of a loss event through data recovery procedures. They can be used to recover after damage, such as restoring data that was inadvertently erased from floppy diskettes.
The following are additional control categories:
- Deterrent Controls. Deterrent controls are used to encourage compliance with security policies and external requirements, such as regulations. These controls are meant to complement other controls, such as preventative and detective controls. Deterrent controls are also known as directive controls.
- Application Controls. Application controls are the controls that are designed into a software application to minimize and detect the software’s operational irregularities.
- Transaction Controls. Transaction controls are used to provide control over the various stages of a transaction - from initiation through output, including testing and change control. There are several types of transaction controls:
- Input Controls - Used to ensure that transactions are properly input into the system only once. Elements of input controls may include counting the data and timestamping it with the date it was entered or edited.
- Processing Controls - Used to guarantee that transactions are valid and accurate and that wrong entries are reprocessed correctly and promptly.
- Output Controls - Used for two things: for protecting the confidentiality of an output and for verifying the integrity of an output by comparing the input transaction with the output data. Elements of proper output controls involve ensuring that the output reaches the proper users, restricting access to the printed output storage areas, printing heading and trailing banners, requiring signed receipts before releasing sensitive output, and printing “no output” banners when a report is empty.
- Change Controls - Implemented to preserve data integrity in a system while changes are being made to its configuration. Procedures and standards have been created to manage these changes and modifications to the system and its configuration. Change control and configuration management are thoroughly described later in this chapter.
- Test Controls - Put into place during the testing of a system to prevent violations of confidentiality and to ensure a transaction’s integrity. An example of this type of control is the proper use of sanitized test data. Test controls are often part of the change control process.
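The once-only-entry idea behind input controls can be sketched in code. The following Python fragment is illustrative only (the `InputControl` class and its names are hypothetical, not from any standard): it accepts each transaction exactly once and timestamps it on entry, rejecting duplicates.

```python
from datetime import datetime, timezone

class InputControl:
    """Illustrative input control: each transaction enters the system
    only once and is timestamped when it is entered or edited."""

    def __init__(self):
        self._entered = {}  # transaction id -> entry timestamp

    def accept(self, txn_id, payload):
        if txn_id in self._entered:
            # Once-only entry: a repeated id is treated as a duplicate
            raise ValueError(f"duplicate transaction {txn_id} rejected")
        self._entered[txn_id] = datetime.now(timezone.utc)
        return {"id": txn_id, "payload": payload,
                "entered": self._entered[txn_id]}

ctrl = InputControl()
ctrl.accept("T-1001", {"amount": 250})
try:
    ctrl.accept("T-1001", {"amount": 250})  # duplicate entry is refused
except ValueError as exc:
    print(exc)
```

A real implementation would also count and reconcile input batches, but the duplicate check and entry timestamp capture the two elements the text names.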
Orange Book Controls
The Orange Book is one of the books of the Rainbow Series, which is a six-foot-tall stack of books from the NSA's National Computer Security Center (NCSC), each having a different cover color, on evaluating Trusted Computer Systems. The main book, to which all others refer, is the Orange Book, which defines the Trusted Computer System Evaluation Criteria (TCSEC), as mentioned in Chapter 5. Much of the Rainbow Series has been superseded by the Common Criteria Evaluation and Validation Scheme (CCEVS). This information can be found at http://niap.nist.gov/cc-scheme/index.html. Other books in the Rainbow Series can be found at www.fas.org/irp/nsa/rainbow.htm.
The TCSEC define major hierarchical classes of security by the letters D (least secure) through A (most secure):
- D - Minimal protection
- C - Discretionary protection (C1 and C2)
- B - Mandatory protection (B1, B2, B3)
- A - Verified protection; formal methods (A1)
Table 6-1 shows these TCSEC Security Evaluation Categories.
| CLASS | DESCRIPTION |
| --- | --- |
| D | Minimal Protection |
| C | Discretionary Protection |
| C1 | Discretionary Security Protection |
| C2 | Controlled Access Protection |
| B | Mandatory Protection |
| B1 | Labeled Security Protection |
| B2 | Structured Protection |
| B3 | Security Domains |
| A1 | Verified Protection |
The Orange Book defines assurance requirements for secure computer operations. Assurance is a level of confidence that ensures that a trusted computing base’s (TCB) security policy has been correctly implemented and that the system’s security features have accurately implemented that policy.
The Orange Book defines two types of assurance: operational assurance and life cycle assurance. Operational assurance focuses on the basic features and architecture of a system, whereas life cycle assurance focuses on the controls and standards that are necessary for building and maintaining a system. An example of operational assurance is a feature that separates security-sensitive code from user code in a system's memory.
TRUSTED COMPUTING BASE (TCB)
The trusted computing base (TCB) refers to the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. A TCB consists of one or more components that together enforce a unified security policy over a product or system. The ability of a trusted computing base to correctly enforce a security policy depends solely on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters (e.g., a user’s clearance) related to the security policy.
The operational assurance requirements specified in the Orange Book are as follows:
- System architecture
- System integrity
- Covert channel analysis
- Trusted facility management
- Trusted recovery
Life cycle assurance ensures that a TCB is designed, developed, and maintained with formally controlled standards that enforce protection at each stage in the system’s life cycle. Configuration management, which carefully monitors and protects all changes to a system’s resources, is a type of life cycle assurance.
The life cycle assurance requirements specified in the Orange Book are as follows:
- Security testing
- Design specification and testing
- Configuration management
- Trusted distribution
The Operations Security domain covers the operational assurance areas of covert channel analysis, trusted facility management, and trusted recovery, along with the life cycle assurance area of configuration management.
Covert Channel Analysis
An information transfer path within a system is a generic definition of a channel. A channel may also refer to the mechanism by which the path is effected. A covert channel is a communication channel that allows a process to transfer information in a manner that violates the system’s security policy. A covert channel is an information path that is not normally used for communication within a system; therefore, it is not protected by the system’s normal security mechanisms. Covert channels are a secret way to convey information to another person or program.[*] There are two common types of covert channels: covert storage channels and covert timing channels.
Covert Storage Channel
Covert storage channels convey information by changing a system’s stored data. For example, a program can convey information to a less secure program by changing the amount or the patterns of free space on a hard disk. Changing the characteristics of a file is another example of creating a covert channel. A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels.
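The finite-resource idea can be made concrete with a toy example. The sketch below is purely illustrative (the file and helper names are hypothetical): a "sender" leaks one byte per step by encoding it in the size of a shared scratch file, a stored attribute that a "receiver" can observe without ever reading the file's contents.

```python
import os
import tempfile

# Toy covert storage channel: information flows through a stored
# attribute (file size) rather than through the file's data.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
path = tmp.name

def send_byte(value):
    # Content is irrelevant; only the resulting file size matters.
    with open(path, "wb") as f:
        f.write(b"\0" * value)

def receive_byte():
    return os.path.getsize(path)  # observes metadata, not data

received = bytearray()
for b in b"hi":
    send_byte(b)
    received.append(receive_byte())
os.unlink(path)
print(bytes(received))  # the secret arrives via file size alone
```

This is why covert channel analysis looks at every shared attribute of every shared resource, not just read/write access to data.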
Covert Timing Channel
A covert timing channel is a covert channel in which one process signals information to another by modulating its own use of system resources (e.g., CPU time) in such a way that this manipulation affects the real response time observed by the second process. A covert timing channel employs a process that manipulates observable system resources in a way that affects response time.
Covert timing channels convey information by altering the performance of or modifying the timing of a system resource in some measurable way. Timing channels often work by taking advantage of some kind of system clock or timing device in a system. Information is conveyed by using elements such as the elapsed time required to perform an operation, the amount of CPU time expended, or the time occurring between two events.
Covert timing channels operate in real time - that is, the information transmitted from the sender must be sensed by the receiver immediately or it will be lost - whereas covert storage channels do not. For example, a full-disk error code may be exploited to create a storage channel that could remain for an indefinite amount of time.
Noise and traffic generation are often ways to combat the use of covert channels. Table 6-2 describes the primary covert channel classes.
| CLASS | DESCRIPTION |
| --- | --- |
| B2 | The system must protect against covert storage channels. It must perform a covert channel analysis for all covert storage channels. |
| B3 and A1 | The system must protect against both covert storage and covert timing channels. It must perform a covert channel analysis for both types. |
Trusted Facility Management
Trusted facility management is defined as the assignment of a specific individual to administer the security-related functions of a system. Trusted facility management has two different requirements, one for B2 systems and another for B3 systems. For B2 systems, the TCB must support separate operator and administrator functions.
For B3 systems, the functions performed in the role of a security administrator must be identified. System administrative personnel may perform security administrator functions only after taking a distinct, auditable action to assume the security administrator role on the system. Nonsecurity functions that can be performed in the security administration role must be limited strictly to those essential to performing the security role effectively.
Although trusted facility management is an assurance requirement only for highly secure systems, many systems evaluated at lower security levels are structured to try to meet this requirement (see Table 6-3).
| CLASS | REQUIREMENTS |
| --- | --- |
| B2 | Systems must support separate operator and system administrator roles. |
| B3 and A1 | Systems must clearly identify the functions of the security administrator to perform the security-related functions. |
Trusted facility management uses the concept of least privilege (discussed later in this chapter), and it is also related to the administrative concepts of separation of duties and need to know.
Separation of Duties
Separation of duties (also called segregation of duties) assigns parts of tasks to different personnel. Thus, if no single person has total control of the system’s security mechanisms, the theory is that no single person can completely compromise the system.
In many systems, a system administrator has total control of the system’s administration and security functions. This consolidation of privilege should not be allowed in a secure system; therefore, security tasks and functions should not automatically be assigned to the role of the system administrator. In highly secure systems, three distinct administrative roles might be required: a system administrator; a security administrator, who is usually an information system security officer (ISSO); and an enhanced operator function.
The security administrator, system administrator, and operator might not necessarily be different individuals. However, whenever a system administrator assumes the role of the security administrator, this role change must be controlled and audited. Because the security administrator’s job is to perform security functions, the performance of nonsecurity tasks must be strictly limited. This separation of duties reduces the likelihood of loss that results from users abusing their authority by taking actions outside of their assigned functional responsibilities. While it might be cumbersome for the person to switch from one role to another, the roles are functionally different and must be executed as such.
In the concept of two-man control, two operators review and approve the work of each other. The purpose of two-man control is to provide accountability and to minimize fraud in highly sensitive or high-risk transactions. The concept of dual control means that both operators are needed to complete a sensitive task.
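The dual-control idea lends itself to a short sketch. The following Python fragment is hypothetical (the `DualControl` class and operator names are illustrative): a sensitive action executes only after two distinct operators have approved it.

```python
# Illustrative dual-control gate: a sensitive task runs only when two
# DIFFERENT operators have both approved it.
class DualControl:
    def __init__(self, action):
        self.action = action
        self.approvers = set()  # a set, so repeat approvals don't count twice

    def approve(self, operator):
        self.approvers.add(operator)

    def execute(self):
        if len(self.approvers) < 2:
            raise PermissionError("dual control: two distinct approvers required")
        return self.action()

task = DualControl(lambda: "wire transfer released")
task.approve("alice")
task.approve("alice")  # the same operator approving twice does not count
try:
    task.execute()
except PermissionError as exc:
    print(exc)
task.approve("bob")
print(task.execute())
```

Using a set of approver identities is the key design point: it is what distinguishes dual control (two people must act) from a mere two-step approval that one person could perform twice.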
Typical system administrator or enhanced operator functions can include the following:
- Installing system software
- Starting up (booting) and shutting down a system
- Adding and removing system users
- Performing backups and recovery
- Handling printers and managing print queues
Typical security administrator functions may include the following:
- Setting user clearances, initial passwords, and other security characteristics for new users
- Changing security profiles for existing users
- Setting or changing file sensitivity labels
- Setting the security characteristics of devices and communications channels
- Reviewing audit data
An operator may perform some system administrator roles, such as backups. This may happen in facilities in which personnel resources are constrained.
For proper separation of duties, the function of user account establishment and maintenance should be separated from the function of initiating and authorizing the creation of the account. User account management focuses on identification, authentication, and access authorizations. This is augmented by the process of auditing and otherwise periodically verifying the legitimacy of current accounts and access authorizations. It also involves the timely modification or removal of access and associated issues for employees who are reassigned, promoted, or terminated or who retire.
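A hedged sketch of the account-management separation described above (the `AccountRegistry` class and field names are hypothetical): the person who authorizes an account may not be the one who creates it, terminated accounts lose access promptly, and an audit function lists the accounts currently active for periodic verification.

```python
# Illustrative account registry enforcing separation between account
# authorization and account establishment.
class AccountRegistry:
    def __init__(self):
        self.accounts = {}

    def create(self, username, authorized_by, created_by):
        if authorized_by == created_by:
            # Separation of duties: the authorizer may not also create
            raise PermissionError("authorizer may not create the account")
        self.accounts[username] = {"authorized_by": authorized_by,
                                   "created_by": created_by,
                                   "active": True}

    def terminate(self, username):
        # Timely removal of access for reassigned or departing employees
        self.accounts[username]["active"] = False

    def audit(self):
        # Periodic verification of the legitimacy of current accounts
        return [u for u, a in self.accounts.items() if a["active"]]

reg = AccountRegistry()
reg.create("jdoe", authorized_by="hr_manager", created_by="it_admin")
print(reg.audit())   # the new account appears in the audit listing
reg.terminate("jdoe")
print(reg.audit())   # and disappears once access is removed
```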
Rotation of Duties
Another variation on the separation of duties is called rotation of duties, which is defined as the process of limiting the amount of time that an operator is assigned to perform a security-related task before being moved to a different task with a different security classification. This control lessens the opportunity for collusion between operators for fraudulent purposes. Like a separation of duties, a rotation of duties may be difficult to implement in small organizations but can be an effective security control procedure.
Trusted Recovery
Trusted recovery ensures that security is not breached when a system crash or other system failure (sometimes called a discontinuity) occurs. It must ensure that the system is restarted without compromising its required protection scheme and that it can recover and roll back without being compromised after the failure. Trusted recovery is required only for B3- and A1-level systems. A system failure represents a serious security risk because the security controls might be bypassed when the system is not functioning normally.
For example, if a system crashes while sensitive data is being written to a disk (where it would normally be protected by controls), the data might be left unprotected in memory and might be accessible by unauthorized personnel. Trusted recovery has two primary activities: preparing for a system failure and recovering the system.
Failure Preparation
Under trusted recovery, preparing for a system failure consists of backing up all critical files on a regular basis. This preparation must enable the data recovery in a protected and orderly manner while ensuring the continued security of the system. These procedures may also be required if a system problem, such as a missing resource, an inconsistent database, or any kind of compromise, is detected or if the system needs to be halted and rebooted.
THE SYSTEM ADMINISTRATOR’S MANY HATS
It is not just small organizations any more that require a system administrator to function as a security administrator. The LAN/Internet Network administrator role creates security risks because of the inherent lack of the separation of duties. With the current pullback in the Internet economy, a network administrator has to wear many hats - and performing security-related tasks is almost always one of them (along with various operator functions). The sometimes cumbersome yet very important concept of separation of duties is vital to preserve operations controls.
System Recovery
While specific trusted recovery procedures depend upon a system's requirements, general secure system recovery procedures include the following:
- Rebooting the system into a single-user mode - an operating system loaded without the security front end activated - so that no other user access is enabled at this time
- Recovering all file systems that were active at the time of the system failure
- Restoring any missing or damaged files and databases from the most recent backups
- Recovering the required security characteristics, such as file security labels
- Checking security-critical files, such as the system password file
After all these steps have been performed and the system’s data cannot be compromised, operators can then access the system.
In addition, the Common Criteria also describe three hierarchical recovery types:
- Manual Recovery. System administrator intervention is required to return the system to a secure state after a crash.
- Automated Recovery. Recovery to a secure state is automatic (without system administrator intervention) when resolving a single failure; however, manual intervention is required to resolve any additional failures.
- Automated Recovery without Undue Loss. Similar to automated recovery, but considered a higher level of recovery because it also prevents the undue loss of protected objects.
Modes of Operation
The mode of operation is a description of the conditions under which an automated information system (AIS) functions, based on the sensitivity of the data processed and the clearance levels and authorizations of the users. Four modes of operation are defined:
- Dedicated Mode. An AIS is operating in the dedicated mode when each user with direct or indirect individual access to the AIS, its peripherals, remote terminals, or remote hosts has all the following:
- A valid personnel clearance for all information on the system
- Formal access approval (for which the user has signed nondisclosure agreements) for all the information stored or processed (including all compartments, subcompartments, and special access programs)
- A valid need to know for all information contained within the system
- System-High Mode. An AIS is operating in the system-high mode when each user with direct or indirect access to the AIS, its peripherals, remote terminals, or remote hosts has all the following:
- A valid personnel clearance for all information on the AIS
- Formal access approval (for which the user has signed nondisclosure agreements) for all the information stored or processed (including all compartments, subcompartments, and special access programs)
- A valid need to know for some of the information contained within the AIS
- Compartmented Mode. An AIS is operating in the compartmented mode when each user with direct or indirect access to the AIS, its peripherals, remote terminals, or remote hosts has all the following:
- A valid personnel clearance for the most restricted information processed in the AIS
- Formal access approval (for which the user has signed nondisclosure agreements) for that information to which he/she is to have access
- A valid need to know for that information to which the user is to have access
- Multilevel Mode. An AIS is operating in the multilevel mode when all the following statements are satisfied concerning the users with direct or indirect access to the AIS, its peripherals, remote terminals, or remote hosts:
- Some do not have a valid personnel clearance for all the information processed in the AIS.
- All have the proper clearance and have the appropriate formal access approval for that information to which they are to have access.
- All have a valid need to know for that information to which they are to have access.
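The four definitions above can be compressed into a small decision function. The sketch below is a simplification and entirely hypothetical in its names: each user record states whether that user holds clearance, formal access approval, and need to know for all information on the system (the compartmented-mode clearance rule, which only requires clearance for the most restricted information, is simplified here).

```python
# Illustrative classifier for the four AIS modes of operation, under the
# simplifying assumption that each flag covers ALL information on the AIS.
def mode_of_operation(users):
    if not all(u["clearance_all"] for u in users):
        return "multilevel"        # some users lack clearance for everything
    if all(u["approval_all"] for u in users):
        if all(u["need_to_know_all"] for u in users):
            return "dedicated"     # clearance, approval, need to know for all
        return "system-high"       # need to know only for some information
    return "compartmented"         # approval only for accessed compartments

users = [
    {"clearance_all": True, "approval_all": True, "need_to_know_all": True},
    {"clearance_all": True, "approval_all": True, "need_to_know_all": False},
]
print(mode_of_operation(users))  # system-high
```

The ordering of the checks mirrors the definitions: the modes are distinguished first by clearance, then by formal access approval, and last by need to know.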
MULTILEVEL DEVICE
A multilevel device is a device that is used in a manner that permits it to process the data of two or more security levels simultaneously without risk of compromise. To accomplish this, sensitivity labels are normally stored on the same physical medium and in the same form (i.e., machine readable or human readable) as the data being processed.
Configuration Management and Change Control
Change control is the management of security features and a level of assurance provided through the control of the changes made to the system’s hardware, software, and firmware configurations throughout the development and operational life cycle.
Change control manages the process of tracking and approving changes to a system. It involves identifying, controlling, and auditing all changes made to the system. It can address hardware and software changes, networking changes, or any other change affecting security. Change control can also be used to protect a trusted system while it is being designed and developed.
The primary security goal of change control is to ensure that changes to the system do not unintentionally diminish security. For example, change control may prevent an older version of a system from being activated as the production system. Proper change control may also make it possible to accurately roll back to a previous version of a system in case a new system is found to be faulty. Another goal of change control is to ensure that system changes are reflected in current documentation to help mitigate the impact that a change may have on the security of other systems, while in the production or planning stages.
The following are the primary functions of change control:
- To ensure that the change is implemented in an orderly manner through formalized testing
- To ensure that the user base is informed of the impending change
- To analyze the effect of the change on the system after implementation
- To reduce the negative impact that the change may have on the computing services and resources
Six generally accepted procedures exist to implement and support the change control process:
- Applying to introduce a change - Requests presented to an individual or group responsible for approving and administering changes
- Approval of the change - Demonstrating trade-off analysis of the change and justifying it
- Cataloging the intended change - Documenting and updating the change in a change control log
- Testing the change - Formal testing of the change
- Scheduling and implementing the change - Scheduling the change and implementing the change
- Reporting the change to the appropriate parties - Submitting a full report to management summarizing the change
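The ordered nature of these procedures can be sketched as a small state machine. The code below is illustrative (the step names and `ChangeRecord` class are hypothetical labels for the six procedures): each change record must pass through the steps in sequence, so a change cannot be reported, or implemented, before it has been approved and tested.

```python
# Illustrative change-control record enforcing the procedure order.
STEPS = ["requested", "approved", "cataloged", "tested",
         "scheduled_and_implemented", "reported"]

class ChangeRecord:
    def __init__(self, description):
        self.description = description
        self.completed = []

    def advance(self, step):
        expected = STEPS[len(self.completed)]
        if step != expected:
            # Skipping or reordering steps is refused and could be audited
            raise ValueError(f"out of order: expected {expected!r}, got {step!r}")
        self.completed.append(step)

change = ChangeRecord("upgrade audit daemon")
change.advance("requested")
change.advance("approved")
try:
    change.advance("reported")  # attempting to skip testing is refused
except ValueError as exc:
    print(exc)
```

The `completed` list doubles as the change control log entry for the record, which is the cataloging function the text describes.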
Configuration management is the more formalized, higher-level process of managing changes to a complicated system, and it is required for formal, trusted systems. Change control is contained in configuration management. The purpose of configuration management is to ensure that changes made to verification systems take place in an identifiable and controlled environment. Configuration managers are responsible for ensuring that additions, deletions, or changes made to the verification system do not jeopardize its ability to satisfy trusted requirements. Therefore, configuration management is vital to maintaining the endorsement of a verification system.
Although configuration management is a requirement only for B2, B3, and A1 systems, it is recommended for systems that are evaluated at lower levels. Most developers use some type of configuration management because it is common sense.
Configuration management is a discipline applying technical and administrative direction to do the following:
- Identify and document the functional and physical characteristics of each configuration item for the system
- Manage all changes to these characteristics
- Record and report the status of change processing and implementation
Configuration management involves process monitoring, version control, information capture, quality control, bookkeeping, and an organizational framework to support these activities. The configuration being managed is the verification system plus all tools and documentation related to the configuration process.
The four major aspects of configuration management are[*]:
- Configuration identification
- Configuration control
- Configuration status accounting
- Configuration audit
Configuration Identification
Configuration management entails decomposing the verification system into identifiable, understandable, manageable, and trackable units known as configuration items (CIs). A CI is a uniquely identifiable subset of the system that represents the smallest portion to be subject to independent configuration control procedures. The process of decomposing a verification system into CIs is called configuration identification.
CIs can vary widely in size, type, and complexity. Although there are no hard-and-fast rules for decomposition, the granularity of CIs can have great practical importance. A favorable strategy is to designate relatively large CIs for elements that are not expected to change over the life of the system and small CIs for elements likely to change more frequently.
Configuration Control
Configuration control is a means of ensuring that system changes are approved before being implemented; that only the proposed and approved changes are implemented; and that the implementation is complete and accurate. This involves strict procedures for proposing, monitoring, and approving system changes and their implementation. Configuration control entails central direction of the change process by personnel who coordinate analytical tasks, approve system changes, review the implementation of changes, and supervise other tasks such as documentation.
Configuration Status Accounting
Configuration accounting documents the status of configuration control activities and, in general, provides the information needed to manage a configuration effectively. It allows managers to trace system changes and establish the history of any developmental problems and associated fixes.
Configuration accounting also tracks the status of current changes as they move through the configuration control process. Configuration accounting establishes the granularity of recorded information and thus shapes the accuracy and usefulness of the audit function.
The accounting function must be able to locate all possible versions of a CI and all the incremental changes involved, thereby deriving the status of that CI at any specific time. The associated records must include commentary about the reason for each change and its major implications for the verification system.
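The requirement to derive a CI's status at any specific time from its incremental changes can be sketched as follows. The `StatusAccounting` class and its record layout are hypothetical: each change is stored with its sequence number and a commentary on the reason, and the status at any point is derived by replaying changes up to that point.

```python
# Illustrative configuration status accounting store: a CI's state at any
# point is derived by replaying its recorded incremental changes.
class StatusAccounting:
    def __init__(self):
        self.history = {}  # CI name -> list of (seq, change dict, reason)

    def record(self, ci, seq, change, reason):
        self.history.setdefault(ci, []).append((seq, change, reason))

    def status_at(self, ci, seq):
        # Replay every change up to and including sequence number `seq`
        state = {}
        for s, change, _ in sorted(self.history.get(ci, []),
                                   key=lambda entry: entry[0]):
            if s > seq:
                break
            state.update(change)
        return state

acct = StatusAccounting()
acct.record("kernel", 1, {"version": "1.0"}, "initial baseline")
acct.record("kernel", 2, {"version": "1.1"}, "fix audit bug")
print(acct.status_at("kernel", 1))  # {'version': '1.0'}
```

Storing the reason alongside each change is what supports the commentary requirement; the replay function is what lets the audit trace any version of a CI.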
Configuration Audit
Configuration audit is the quality assurance component of configuration management. It involves periodic checks to determine the consistency and completeness of accounting information and to verify that all configuration management policies are being followed. A vendor’s configuration management program must be able to sustain a complete configuration audit by an NCSC review team.
Configuration Management Plan
Strict adherence to a comprehensive configuration management plan is one of the most important requirements for successful configuration management. The configuration management plan is the vendor’s document tailored to the company’s practices and personnel. The plan accurately describes what the vendor is doing to the system at each moment and what evidence is being recorded.
Configuration Control Board (CCB)
All analytical and design tasks are conducted under the direction of the vendor’s corporate entity called the Configuration Control Board (CCB). The CCB is headed by a chairperson, who is responsible for ensuring that changes made do not jeopardize the soundness of the verification system and ensuring that the changes made are approved, tested, documented, and implemented correctly.
The members of the CCB should interact periodically, either through formal meetings or other available means, to discuss configuration management topics such as proposed changes, configuration status accounting reports, and other topics that may be of interest to the different areas of the system development. These interactions should be held to keep the entire system team updated on all advancements or alterations in the verification system.
Table 6-4 shows the two primary configuration management classes.
CLASS | REQUIREMENT
--- | ---
B2 and B3 | Configuration management procedures must be enforced during development and maintenance of a system.
A1 | Configuration management procedures must be enforced during the entire system's life cycle.
Administrative Controls
Administrative controls can be defined as the controls that are installed and maintained by administrative management to help reduce the threat or impact of violations on computer security. We separate them from the operations controls because these controls have more to do with human resources personnel administration and policy than they do with hardware or software controls.
The following are some examples of administrative controls:
- Personnel Security. These controls are administrative human resources controls that are used to support the guarantees of the quality levels of the personnel performing the computer operations. These are also explained in the Physical Security domain. Elements of these include the following:
- Employment screening or background checks. Pre-employment screening for sensitive positions should be implemented. For less sensitive positions, post-employment background checks may be suitable.
- Mandatory taking of vacation in one-week increments. This practice is common in financial institutions or other organizations in which an operator has access to sensitive financial transactions. Some institutions require a two-week vacation. During the mandatory vacation period the operator’s accounts, processes, and procedures are audited carefully to uncover any evidence of fraud.
- Job action warnings or termination. These are the actions taken when employees violate the published computer behavior standards.
- Separation of Duties and Responsibilities. Separation (or segregation) of duties and responsibilities is the concept of assigning parts of security-sensitive tasks to several individuals. We described this concept earlier in this chapter.
- Least Privilege. Least privilege requires that each subject be granted the most restricted set of privileges needed for the performance of their task. We describe this concept later in more detail.
- Need to Know. The principle of need to know requires that a subject be given only the amount of information required to perform an assigned task. We also describe this concept later in more detail. In addition to whatever specific object or role rights a user may have on the system, the user is also given only the minimum amount of information necessary to perform his or her job function.
- Change Control. The function of change control is to protect a system from problems and errors that may result from improperly executed or tested changes to a system. We described this concept earlier in this chapter.
- Record Retention and Documentation Control. The administration of security controls on documentation and the procedures implemented for record retention have an impact on operational security. We describe these concepts later in more detail.
Least Privilege
The least privilege principle requires that each subject in a system be granted the most restricted set of privileges (or lowest clearance) needed for the performance of authorized tasks. The application of this principle limits the damage that can result from accident, error, or unauthorized use of system resources.
It may be necessary to separate levels of access based on the operator's job function, and least privilege is a very effective approach to doing so. For example, computer operators are not allowed access to computer resources at a level beyond what is absolutely needed for their specific job tasks. Operators are organized into privilege-level groups, and each group is then assigned the most restricted level that is applicable.
The three basic levels of privilege are defined as follows:
- Read Only. This level is the lowest level of privilege and the one to which most operators should be assigned. Operators are allowed to view data but are not allowed to add, delete, or make changes to the original or make copies of the data.
- Read/Write. The next higher privilege level is read/write access. This level enables operators to read, add to, or write over any data for which they have authority. Operators usually have read/write access only to data copied from an original location; they cannot access the original data.
- Access/Change. The third and highest level is access/change. This level gives operators the right to modify data directly in its original location, in addition to data copied from the original location. Operators may also have the right to change file and operator access permissions in the system (a supervisor right).
These privilege levels are commonly much more finely grained than we have stated here, and privilege schemes in a large organization can, in fact, be very complicated.
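The three-level scheme above can be expressed as an ordered enumeration with a simple authorization check. This is an illustrative sketch only; the group names and assignments are hypothetical examples of mapping operators to the most restricted applicable level.

```python
from enum import IntEnum

class Privilege(IntEnum):
    READ_ONLY = 1      # view data only
    READ_WRITE = 2     # read, add to, or overwrite working copies of data
    ACCESS_CHANGE = 3  # modify original data and access permissions

# Hypothetical group assignments following least privilege: each group
# gets the most restricted level applicable to its job function.
GROUP_PRIVILEGE = {
    "operators": Privilege.READ_ONLY,
    "production_control": Privilege.READ_WRITE,
    "supervisors": Privilege.ACCESS_CHANGE,
}

def is_permitted(group: str, required: Privilege) -> bool:
    """Grant an operation only if the group's assigned level meets the
    minimum the operation requires; unknown groups are denied."""
    level = GROUP_PRIVILEGE.get(group)
    return level is not None and level >= required
```

Note the default-deny behavior for unassigned groups, which is itself an application of least privilege.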
Operations Job Function Overview
In a large shop, job functions and duties may be divided among a very large base of IT personnel. In many IT departments, the following roles are combined into fewer positions. The following listing, however, gives a nice overview of the various task components of the operational functions.
- Computer Operator. Responsible for backups, running the system console, mounting and dismounting reel tapes and cartridges, recording and reporting operational problems with hardware devices and software products, and maintaining environmental controls
- Operations Analyst. Responsible for working with application software developers, maintenance programmers, and computer operators
- Job Control Analyst. Responsible for the overall quality of the production job control language and conformance to standards
- Production Scheduler. Responsible for planning, creating, and coordinating computer processing schedules for all production and job streams in conjunction with the established processing periods and calendars
- Production Control Analyst. Responsible for the printing and distribution of computer reports and microfiche/microfilm records
- Tape Librarian. Responsible for collecting input tapes and scratch tapes, sending tapes to and receiving returns from offsite storage and third parties, and for maintaining tapes
Record Retention
The term record retention refers to how long transactions and other types of records (legal, audit trails, e-mail, and so forth) should be retained according to management, legal, audit, or tax compliance requirements. In the Operations Security domain, record retention deals with retaining computer files, directories, and libraries. The retention of data media (tapes, diskettes, and backup media) can be based on one or more criteria, such as the number of days elapsed, number of days since creation, hold time, or other factors. An example of record retention issues could be the mandated retention periods for trial documentation or financial records.
Data Remanence
Data remanence is the data that remains on media after the media has been erased. Physical traces may be left after erasure that could enable sensitive data to be reconstructed. Object reuse mechanisms ensure that system resources are allocated and reassigned among authorized users in a way that prevents the leakage of sensitive information, and they ensure that an authorized user of the system does not obtain residual information from system resources.
Object reuse is defined as “The reassignment to some subject of a storage medium (e.g., page frame, disk sector, magnetic tape) that contained one or more objects. To be securely reassigned, no residual data can be available to the new subject through standard system mechanisms.”[*] The object reuse requirement of the TCSEC is intended to ensure that system resources, in particular storage media, are allocated and reassigned among system users in a manner that prevents the disclosure of sensitive information.
Systems administrators and security administrators should be informed of the risks involving the issues of object reuse, declassification, destruction, and disposition of storage media. Data remanence, object reuse, and the proper disposal of data media are also discussed in Chapter 10.
Due Care and Due Diligence
The concepts of due care and due diligence require that an organization engage in good business practices relative to the organization’s industry. An example of due care could be training employees in security awareness, rather than simply creating a policy with no implementation plan or follow-up. Mandating statements from the employees that they have read and understood appropriate computer behavior is also an example of due care.
Due diligence might be mandated by various legal requirements in the organization’s industry or through compliance with governmental regulatory standards. Due care and due diligence are described in more detail in Chapter 9.
Due care and due diligence are becoming serious issues in computer operations today. In fact, the legal system has begun to hold major partners liable for the lack of due care in the event of a major security breach. Violations of security and privacy are hot-button issues that are confronting the Internet community, and standards covering the best practices of due care are necessary for an organization’s protection.
Documentation Control
A security system needs documentation controls. Documentation can include several things: security plans, contingency plans, risk analyses, and security policies and procedures. Most of this documentation must be protected from unauthorized disclosure; for example, printer output must be in a secure location. Disaster recovery documentation must also be readily available in the event of a disaster.
Operations Controls
Operations controls embody the day-to-day procedures used to protect computer operations. A CISSP candidate must understand the concepts of resource protection, hardware/software control, and privileged entity.
The following are the most important aspects of operations controls:
- Resource protection
- Hardware controls
- Software controls
- Privileged-entity controls
- Media controls
- Physical access controls
Resource Protection
Resource protection is just what it sounds like - the concept of protecting an organization’s computing resources and assets from loss or compromise. Computing resources are defined as any hardware, software, or data that is owned and used by the organization. Resource protection is designed to help reduce the possibility of damage that can result from the unauthorized disclosure or alteration of data by limiting the opportunities for its misuse.
Various examples of resources that require protection are:
Hardware Resources
- Communications, including routers, firewalls, gateways, switches, modems, and access servers
- Storage media, including floppies, removable drives, external hard drives, tapes, and cartridges
- Processing systems, including file servers, mail servers, Internet servers, backup servers, and tape drives
- Standalone computers, including workstations, modems, disks, and tapes
- Printers and fax machines
Software Resources
- Program libraries and source code
- Vendor software or proprietary packages
- Operating system software and systems utilities
Data Resources
- Backup data
- User data files
- Password files
- Operating data directories
- System logs and audit trails
Hardware Controls
Hardware Maintenance
System maintenance requires physical or logical access to a system by support and operations staff, vendors, or service providers. Maintenance may be performed on-site, or the unit needing replacement may be transported to a repair site. Maintenance might also be performed remotely. Furthermore, background investigations of the service personnel may be necessary. Supervising and escorting the maintenance personnel when they are on-site is also necessary.
Maintenance Accounts
Many computer systems provide maintenance accounts. These supervisor-level accounts are created at the factory with preset and widely known passwords. It is critical to change these passwords, or at least disable the accounts until they are actually needed for maintenance. If an account is used remotely, authentication of the maintenance provider can be performed by using callback or encryption.
Diagnostic Port Control
Many systems have diagnostic ports through which troubleshooters can directly access the hardware. These ports should be used only by authorized personnel and should not enable either internal or external unauthorized access. Diagnostic port attack is the term that describes this type of abuse.
Hardware Physical Control
Many data processing areas that contain hardware may require locks and alarms. The following are some examples:
- Sensitive operator terminals and keyboards
- Media storage cabinets or rooms
- Server or communications equipment data centers
- Modem pools or telecommunication circuit rooms
Locks and alarms are described in more detail in Chapter 10.
Software Controls
An important element of operations controls is software support - controlling what software is used in a system. The following are some elements of controls on software:
- Antivirus Management. If personnel can load or execute any software on a system, the system is more vulnerable to viruses, unexpected software interactions, and to the subversion of security controls.
TRANSPARENCY OF CONTROLS
One important aspect of controls is the need for their transparency. Operators need to feel that security protections are reasonably flexible and that the security protections do not get in the way of doing their jobs. Ideally, the controls should not require users to perform extra steps, although realistically this result is hard to achieve. Transparency also aids in preventing users from learning too much about the security controls.
- Software Testing. A rigid and formal software-testing process is required to determine compatibility with custom applications or to identify other unforeseen interactions. This procedure should also apply to software upgrades.
- Software Utilities. Powerful systems utilities can compromise the integrity of operations systems and logical access controls. Their use must be controlled by security policy.
- Safe Software Storage. A combination of logical and physical access controls should be implemented to ensure that the software and copies of backups have not been modified without proper authorization.
- Backup Controls. Not only do support and operations personnel back up software and data, but, in a distributed environment, users may also back up their own data. It is very important to routinely test the restore accuracy of a backup system. A backup should also be stored securely to protect from theft, damage, or environmental problems. A description of the types of backups appears later in this chapter.
Privileged-Entity Controls
Privileged-entity access, which is also known as privileged operations functions, is defined as an extended or special access to computing resources given to operators and system administrators. Many job duties and functions require privileged-entity access.
Privileged-entity access is most often divided into classes. Operators should be assigned to a class based on their job title.
The following are some examples of privileged-entity operator functions:
- Special access to system commands
- Access to special parameters
- Access to the system control program
RESTRICTING HARDWARE INSTRUCTIONS
A system control program, or the design of the hardware itself, restricts the execution of certain computing functions and permits them only when a processor is in a particular functional state, known as privileged or supervisor state. Applications can run in different states, during which different commands are permitted. To be authorized to execute privileged instructions, a program should be running in a restrictive state that enables these commands.
Media Resource Protection
Media resource protection can be classified into two areas: media security controls and media viability controls. Media security controls are implemented to prevent any threat to C.I.A. by the intentional or unintentional exposure of sensitive data. Media viability controls are implemented to preserve the proper working state of the media, particularly to facilitate the timely and accurate restoration of the system after a failure.
Media Security Controls
Media security controls should be designed to prevent the loss of sensitive information when the media is stored outside the system.
A CISSP candidate needs to know several of the following elements of media security controls:
- Logging. Logging the use of data media provides accountability. Logging also assists in physical inventory control by preventing tapes from “walking away” and by facilitating their recovery process.
- Access Control. Physical access control to the media is used to prevent unauthorized personnel from accessing the media. This procedure is also a part of physical inventory control.
- Proper Disposal. Proper disposal of the media after use is required to prevent data remanence. The process of removing information from used data media is called sanitization. Three techniques are commonly used for sanitization: overwriting, degaussing, and destruction, described in the following paragraphs. These are also described in Chapter 10.
Overwriting
Simply copying new data over the old is not recommended, because the application may not overwrite all of the old data properly; strict configuration controls must therefore be in place on both the operating system and the overwriting software itself. Bad sectors on the media may also prevent the software from overwriting old data properly.
To purge the media, the DoD requires overwriting with a pattern, then its complement, and finally with another pattern; for example, overwriting first with 0011 0101, followed by 1100 1010, then 1001 0111. To satisfy the DoD clearing requirement, it is required to write a character to all data locations in the disk. The number of times an overwrite must be accomplished depends on the storage media, sometimes on its sensitivity, and sometimes on differing DoD component requirements, but seven times is most commonly recommended.
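The pattern/complement/pattern sequence described above can be modeled on an in-memory buffer. This is a toy illustration of the logic only: real purging must also handle bad sectors, drive-internal remapping, and the applicable component requirements.

```python
def purge_patterns() -> list[int]:
    """Return the example overwrite sequence from the text: a pattern
    (0011 0101), its one's complement (1100 1010), and a further
    pattern (1001 0111)."""
    pattern = 0b00110101
    complement = pattern ^ 0xFF  # one's complement of an 8-bit value
    return [pattern, complement, 0b10010111]

def overwrite(media: bytearray) -> bytearray:
    """Write each pattern in turn to every location of an in-memory
    'media' buffer (a toy model of a media purge pass)."""
    for p in purge_patterns():
        for i in range(len(media)):
            media[i] = p
    return media
```

After the final pass, every byte holds the last pattern and none of the original data remains in the buffer.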
Degaussing
Degaussing is often recommended as the best method for purging most magnetic media. Degaussing is a process whereby the magnetic field patterns are erased from the media, returning the medium to its initial virgin state. Erasure via degaussing may be accomplished in two ways:
- In AC erasure, the media is degaussed by applying an alternating field that is reduced in amplitude over time from an initial high value (i.e., an AC-powered electromagnet)
- In DC erasure, the media is saturated by applying a unidirectional field (i.e., a DC-powered electromagnet or a permanent magnet)
Another important point about degaussing is that degaussed magnetic hard drives will generally require restoration of their factory-installed timing tracks, so purging by overwriting is often recommended instead when the drive must be reused.
Destruction
Paper reports and diskettes need to be physically destroyed before disposal. Also, physical destruction of optical media (CD-ROM or WORM disks) is necessary.
Destruction techniques can include shredding or burning documentation, physically breaking CD-ROMs and diskettes, and destroying media with acid. Paper reports should be shredded by personnel with the proper level of security clearance. Some shredders cut in straight lines or strips; others cross-cut or disintegrate the material into pulp. Care must be taken to limit access to reports prior to disposal, including those stored for long periods. Reports should never be disposed of without shredding; do not place them in a dumpster intact. Burning is also sometimes used to destroy paper reports, especially in the DoD.
In some cases, acid is used to destroy disk pack surfaces. Applying a high concentration of hydroiodic acid (55% to 58% solution) to the gamma ferric oxide disk surface is a rarely used method of media destruction, and acid solutions should be used in a well-ventilated area and only by qualified personnel.
Media Viability Controls
Many physical controls should be used to protect the viability of the data storage media. The goal is to protect the media from damage during handling and transportation or during short-term or long-term storage. Proper marking and labeling of the media are required in the event of a system recovery process.
- Marking. All data storage media should be accurately marked or labeled. The labels can be used to identify media with special handling instructions or to log serial numbers or bar codes for retrieval during a system recovery.
MEDIA LIBRARIAN
It is the job of a media librarian to control access to the media library and to regulate the media library environment. All media must be labeled in a human- and machine-readable form that should contain information such as the date and who created the media, the retention period, a volume name and version, and the security classification.
- It is important not to confuse this kind of physical storage media marking for inventory control with the logical data labeling of sensitivity classification for mandatory access control, which is described in other chapters.
- Handling. Proper handling of the media is important. Some issues with the handling of media include cleanliness of the media and the protection from physical damage to the media during transportation to the archive sites.
- Storage. Storage of the media is very important for both security and environmental reasons. A clean storage environment with controlled temperature and humidity should be provided for the media. Data media are sensitive to temperature, liquids, magnetism, smoke, and dust.
Physical Access Controls
The control of physical access to the resources is the major tenet of the Physical Security domain. Obviously, the Operations Security domain requires physical access control, and the following list contains examples of some of the elements of the operations resources that need physical access control.
Hardware
- Control of communications and the computing equipment
- Control of the storage media
- Control of the printed logs and reports
Software
- Control of the backup files
- Control of the system logs
- Control of the production applications
- Control of the sensitive/critical data
Obviously, all personnel require some sort of control and accountability when accessing physical resources, yet some personnel will require special physical access to perform their job functions. The following are examples of this type of personnel:
- IT department personnel
- Cleaning staff
- Heating ventilation and air conditioning (HVAC) maintenance personnel
- Third-party service contract personnel
- Consultants, contractors, and temporary staff
Special arrangements for supervision must be made when external support providers are entering a data center.
The term physical piggybacking describes an unauthorized person going through a door behind an authorized person. The concept of a man trap (described in Chapter 10) is designed to prevent physical piggybacking.
[*]Sources: DoD 5200.28-STD, Department of Defense Trusted Computer System Evaluation Criteria; and NCSC-TG-030, A Guide to Understanding Covert Channel Analysis of Trusted Systems (Light Pink Book).
[*]Sources: National Computer Security Center publication NCSC-TG-006, A Guide To Understanding Configuration Management In Trusted Systems; NCSC-TG-014, Guidelines for Formal Verification Systems.
[*]Source: NCSC-TG-018, A Guide to Understanding Object Reuse in Trusted Systems (Light Blue Book).
Monitoring and Auditing
Operational assurance is the process of reviewing an operational system to verify that security controls, both automated and manual, are functioning correctly and effectively. Operational assurance addresses whether the system’s technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. To maintain operational assurance, organizations use two basic methods: system audits and monitoring. A system audit is a one-time or periodic event to evaluate security; monitoring refers to an ongoing activity that examines either the system or the users.
Problem identification and problem resolution are the primary goals of monitoring. The concept of monitoring is integral to almost all the domains of information security. In Chapter 3 we described some technical aspects of monitoring and intrusion detection. Chapter 10 will also describe intrusion detection and monitoring from a physical access perspective. In this chapter we are more concerned with monitoring the controls implemented in an operational facility in order to identify abnormal computer usage, such as inappropriate use or intentional fraud. The task of failure recognition and response, which includes reporting mechanisms, is an important part of monitoring.
Monitoring
Monitoring contains the mechanisms, tools, and techniques that permit the identification of security events that can impact the operation of a computer facility. It also includes the actions to identify the important elements of an event and to report that information appropriately.
The concept of monitoring includes monitoring for illegal software installation, monitoring the hardware for faults and error states, and monitoring operational events for anomalies.
Monitoring Techniques
To perform this type of monitoring, an information security professional has several tools at his or her disposal:
- Intrusion detection
- Penetration testing
- Violation processing, using clipping levels
Intrusion Detection (ID)
Intrusion Detection (ID) is a useful tool that can assist in the detective analysis of intrusion attempts. ID can be used not only for the identification of intruders but also to create a sampling of traffic patterns. By analyzing the activities occurring outside of normal clipping levels, a security practitioner can find evidence of events such as in-band signaling or other system abuses.
Penetration Testing
Penetration testing is the process of testing a network’s defenses by attempting to access the system from the outside, using the same techniques that an external intruder (for example, a cracker) would use. This testing gives a security professional a better snapshot of the organization’s security posture.
Among the techniques used to perform a penetration test are:
- Scanning and Probing. Various scanners, such as a port scanner, can reveal information about a network’s infrastructure and enable an intruder to access the network’s unsecured ports.
- Demon Dialing. Demon (or “war”) dialers automatically test every phone line in an exchange to try to locate modems that are attached to the network. Information about these modems can then be used to attempt external unauthorized access.
- Sniffing. A protocol analyzer can be used to capture data packets that are later decoded to collect information such as passwords or infrastructure configurations.
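Of the techniques above, scanning and probing is the most readily illustrated. The following is a minimal TCP connect scanner sketch using only the standard library; it should, of course, only ever be run against hosts one is explicitly authorized to test.

```python
import socket

def scan_ports(host: str, ports: list[int],
               timeout: float = 0.5) -> list[int]:
    """Return the subset of the given TCP ports that accept a
    connection on the target host (a basic 'connect' scan)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real penetration test would use a mature scanner with service fingerprinting, but the principle is the same: each responsive port is a potential avenue into the network that must be justified or closed.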
Figure 6-1 shows how penetration testing techniques should be used to test every access point of the network and work area.
Figure 6-1: Penetration testing all network access points.
Other techniques that are not solely technology-based can be used to complement the penetration test. The following are examples of such techniques:
- Dumpster Diving. Searching paper disposal areas for unshredded or otherwise improperly disposed-of reports.
- Social Engineering. The most commonly used technique of all: getting information (like passwords) just by asking for it.
Violation Analysis
One of the most-used techniques to track anomalies in user activity is violation tracking, processing, and analysis. To make violation tracking effective, clipping levels must be established. A clipping level is a baseline of user activity that is considered a routine level of user errors. A clipping level enables a system to ignore normal user errors. When the clipping level is exceeded, a violation record is then produced. Clipping levels are also used for variance detection.
Using clipping levels and profile-based anomaly detection, the following types of violations should be tracked, processed, and analyzed:
- Repetitive “mistakes” that exceed the clipping level number
- Individuals who exceed their authority
- Too many people with unrestricted access
- Patterns indicating serious intrusion attempts
Profile-based anomaly detection uses profiles to look for abnormalities in user behavior. A profile is a pattern that characterizes the behavior of users. Patterns of usage are established according to the various types of activities the users engage in, such as processing exceptions, resource utilization, and patterns in actions performed. The ways in which the various types of activity are recorded in the profile are referred to as profile metrics.
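Clipping-level processing as described above reduces to counting per-user events and emitting a violation record only when the baseline is exceeded. A minimal sketch (the threshold of 3 is an arbitrary illustrative baseline, not a recommended value):

```python
from collections import Counter

CLIPPING_LEVEL = 3  # illustrative baseline of routine user errors

def violation_records(error_log: list[str],
                      clipping_level: int = CLIPPING_LEVEL) -> dict[str, int]:
    """error_log is a list of user IDs, one entry per logged error.
    Errors at or below the clipping level are ignored as routine;
    users exceeding it produce a violation record (user -> count)."""
    counts = Counter(error_log)
    return {user: n for user, n in counts.items() if n > clipping_level}
```

The same counting approach extends to profile metrics: instead of raw error counts, each user's observed activity is compared against the recorded pattern in his or her profile.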
Benefits of Incident-Handling Capability
The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Additional benefits related to establishing an incident-handling capability are[*]:
- Enhancement of the risk assessment process. An incident-handling capability will allow organizations to collect threat data that may be useful in their risk assessment and safeguard selection processes (e.g., in designing new systems). Statistics on the numbers and types of incidents in the organization can be used in the risk-assessment process as an indication of vulnerabilities and threats.
INDEPENDENT TESTING
It is important to note that in most cases, external penetration testing should be performed by a reputable, experienced firm that is independent of an organization’s IT or Audit departments. This independence guarantees an objective, nonpolitical report on the state of the company’s defenses. The firm must be fully vetted, however, and full legal nondisclosure issues must be resolved to the organization’s satisfaction before work begins. For this reason, “Black Hat” testers - that is, ex-crackers now working for security firms - are often not recommended.
- Enhancement of internal communications and the readiness of the organization to respond to any type of incident, not just computer security incidents. Internal communications will be improved; management will be better organized to receive communications; and contacts within public affairs, legal staff, law enforcement, and other groups will have been pre-established.
- Security training personnel will have a better understanding of users’ knowledge of security issues. Trainers can use actual incidents to vividly illustrate the importance of computer security. Training that is based on current threats and controls recommended by incident-handling staff provides users with information more specifically directed to their current needs, thereby reducing the risks to the organization from incidents.
Auditing
The implementation of regular system audits is the foundation of operational security controls monitoring. In addition to enabling internal and external compliance checking, regular auditing of audit (transaction) trails and logs can assist the monitoring function by helping to recognize patterns of abnormal user behavior.
Security Auditing
Information Technology (IT) auditors are often divided into two types: internal and external. Internal auditors typically work for the organization whose systems are to be audited, whereas external auditors do not. External auditors are often Certified Public Accountants (CPAs) or other audit professionals who are hired to perform an independent audit of an organization’s financial statements. Internal auditors, on the other hand, usually have a much broader mandate: checking for compliance and standards of due care, auditing operational cost efficiencies, and recommending the appropriate controls.
IT auditors typically audit the following functions:
- Backup controls
- System and transaction controls
- Data library procedures
- Systems development standards
- Data center security
- Contingency plans
In addition, IT auditors might recommend improvements to controls, and they often participate in a system’s development process to help an organization avoid costly re-engineering after the system’s implementation.
Audit Trails
An audit trail is a set of records that collectively provides documentary evidence of processing, used to aid in tracing from original transactions forward to related records and reports or backward from records and reports to their component source transactions. Audit trails may be limited to specific events, or they may encompass all the activities on a system.
An audit (or transaction) trail enables a security practitioner to trace a transaction’s history. This transaction trail provides information about additions, deletions, or modifications to the data within a system. Audit trails enable the enforcement of individual accountability by creating a reconstruction of events. As with monitoring, one purpose of an audit trail is to assist in a problem’s identification, which leads to a problem’s resolution. An effectively implemented audit trail also enables an auditor to retrieve and easily certify the data. Any unusual activity or variation from the established procedures should be identified and investigated.
The audit logs should record the following:
- The transaction’s date and time
- Who processed the transaction
- At which terminal the transaction was processed
- Various security events relating to the transaction
In addition, an auditor should examine the audit logs for the following:
- Amendments to production jobs
- Production job reruns
- Computer operator practices
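The fields that an audit log should record, listed above, can be sketched in a few lines of Python. This is a minimal illustration, not a real logging API; all field names are assumptions:

```python
import json
from datetime import datetime, timezone

def audit_record(user, terminal, action, outcome):
    """Build one audit-trail entry containing the fields listed above.

    Field names are illustrative; a production system would follow its
    own logging standard (e.g., syslog or a SIEM schema).
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time
        "user": user,          # who processed the transaction
        "terminal": terminal,  # at which terminal it was processed
        "action": action,      # what was done
        "outcome": outcome,    # security-relevant result (success/denied)
    }

entry = audit_record("jsmith", "TTY04", "UPDATE payroll", "success")
print(json.dumps(entry))
```

Writing each entry as a single structured record makes later pattern analysis and reconstruction of events straightforward.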
USER ACCOUNT REVIEW
It is necessary to regularly review user accounts on a system. Such reviews may examine the levels of access each individual has, conformity with the concept of least privilege, whether all accounts are still active, whether management authorizations are up-to-date, or whether required training has been completed, for example. These reviews can be conducted on at least two levels: on an application-by-application basis or on a systemwide basis. Both kinds of reviews can be conducted by, among others, in-house systems personnel (a self-audit), the organization’s internal audit staff, or external auditors.
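The review criteria above (inactive accounts, conformity with least privilege) can be sketched as a small self-audit script. The record fields, thresholds, and privilege baseline are illustrative assumptions, not a real directory-service API:

```python
from datetime import date

# Hypothetical account records for illustration only.
accounts = [
    {"user": "alice", "last_login": date(2025, 1, 10), "privileges": {"read"}},
    {"user": "bob",   "last_login": date(2023, 3, 2),  "privileges": {"read", "admin"}},
]

def review(accounts, today, max_idle_days=90, allowed=("read", "write")):
    """Flag accounts that look inactive or that hold privileges beyond
    the allowed baseline (a crude least-privilege check)."""
    findings = []
    for acct in accounts:
        if (today - acct["last_login"]).days > max_idle_days:
            findings.append((acct["user"], "inactive"))
        excess = acct["privileges"] - set(allowed)
        if excess:
            findings.append((acct["user"], f"excess privileges: {sorted(excess)}"))
    return findings

for user, issue in review(accounts, today=date(2025, 3, 1)):
    print(user, "-", issue)
```

In practice such a script would only produce candidates for investigation; a human reviewer still decides whether each flagged account is legitimate.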
User audit trails can usually log:
- All commands directly initiated by the user
- All identification and authentication attempts
- Files and resources accessed
It is most useful if options and parameters are also recorded from commands. It is much more useful to know that a user tried to delete a log file (e.g., to hide unauthorized actions) than to know the user merely issued the delete command, possibly for a personal data file.
Source: National Institute of Standards and Technology Special Publication 800-12, An Introduction to Computer Security: The NIST Handbook.
The audit mechanism of a computer system has five important security goals:[*]
- The audit mechanism must “allow the review of patterns of access to individual objects, access histories of specific processes and individuals, and the use of the various protection mechanisms supported by the system and their effectiveness.”
- Allow discovery of both users’ and outsiders’ repeated attempts to bypass the protection mechanisms.
- Allow discovery of any use of privileges that may occur when a user assumes a functionality with privileges greater than his or her own, such as a programmer assuming the role of administrator. In this case, there may be no bypass of security controls, but nevertheless, a violation is made possible.
- Act as a deterrent against perpetrators’ habitual attempts to bypass the system protection mechanisms. However, for the audit mechanism to act as a deterrent, the perpetrator must be aware of its existence and its active use to detect any attempts to bypass system protection mechanisms.
- Supply “an additional form of user assurance that attempts to bypass the protection mechanisms are recorded and discovered.”[†] Even if the attempt to bypass the protection mechanism is successful, the audit trail will still provide assurance by its ability to aid in assessing the damage done by the violation, thus improving the system’s ability to control the damage.
ELECTRONIC AUDIT TRAILS
Maintaining a proper audit trail is more difficult now because fewer transactions are recorded on paper media; many records exist only in electronic form. In the old paper system, a physical purchase order might be prepared in multiple copies, initiating a physical, permanent paper trail. An auditor’s job is now more complicated because digital media are more transient and a paper trail may not exist.
Other important security issues regarding the use of audit logs are:
- Retention and protection of the audit media and reports when their storage is offsite
- Protection against the alteration of audit or transaction logs
- Protection against the unavailability of audit media during an event
Problem Management Concepts
Effective auditing embraces the concepts of problem management. Problem management is a way to control the process of problem isolation and problem resolution. An auditor may use problem management to resolve the issues arising from an IT security audit, for example.
The goal of problem management is threefold:
- To reduce failures to a manageable level
- To prevent the occurrence or reoccurrence of a problem
- To mitigate the negative impact of problems on computing services and resources
The first step in implementing problem management is to define the potential problem areas and the abnormal events that should be investigated. Some examples of potential problem areas are:
- The performance and availability of computing resources and services
- The system and networking infrastructure
- Procedures and transactions
- The safety and security of personnel
Some examples of abnormal events that could be discovered during an audit are as follows:
- Degraded hardware or software resource availability
- Deviations from the standard transaction procedures
- Unexplained occurrences in a processing chain
Of course, the final objective of problem management is resolution of the problem.
[*]Source: NCSC-TG-001, A Guide to Understanding Audit in Trusted Systems (Tan Book).
[†]Source: V. D. Gligor, Guidelines for Trusted Facility Management and Audit (University of Maryland, 1985).
Threats and Vulnerabilities
A threat is simply any event that, if realized, can cause damage to a system and create a loss of confidentiality, availability, or integrity. Threats can be malicious, such as the intentional modification of sensitive information, or they can be accidental, such as an error in a transaction calculation or the accidental deletion of a file.
A vulnerability is a weakness in a system that can be exploited by a threat. Reducing the vulnerable aspects of a system can reduce the risk and impact of threats on the system. For example, a password generation tool that helps users choose robust passwords reduces the chance that users will select poor passwords (the vulnerability) and makes the password more difficult to crack (the threat of external attack).
Threats and vulnerabilities are discussed in several of the ten domains; for example, many examples of attacks are given in Chapter 2.
Threats
We have grouped the threats into several categories, and we will describe some of the elements of each category.
Accidental Loss
Accidental loss is a loss that is incurred unintentionally, either through the lack of operator training or proficiency or by the malfunctioning of an application’s processing procedure. The following are some examples of the types of accidental loss:
- Operator input errors and omissions. Manual input transaction errors, entry or data deletion, and faulty data modification
- Transaction processing errors. Errors that are introduced into the data through faulty application programming or processing procedures
Inappropriate Activities
Inappropriate activity is computer behavior that, while not rising to the level of criminal activity, may be grounds for job action or dismissal.
- Inappropriate Content. Using the company systems to store pornography, entertainment, political, or violent content
- Waste of Corporate Resources. Personal use of hardware or software, such as conducting a private business with a company’s computer system
- Sexual or Racial Harassment. Using e-mail or other computer resources to distribute inappropriate material
- Abuse of Privileges or Rights. Using unauthorized access levels to violate the confidentiality of sensitive company information
Illegal Computer Operations and Intentional Attacks
Under this heading, we have grouped computer activities that are considered intentional, illegal computer activity undertaken for personal financial gain or for destruction:
- Eavesdropping. Data scavenging, traffic or trend analysis, social engineering, economic or political espionage, sniffing, dumpster diving, keystroke monitoring, and shoulder surfing are all types of eavesdropping to gain information or to create a foundation for a later attack. Eavesdropping is a primary cause of the failure of confidentiality.
- Fraud. Examples of the types of fraud are collusion, falsified transactions, data manipulation, and other altering of data integrity for gain.
- Theft. Examples of the types of theft are the theft of information or trade secrets for profit or unauthorized disclosure, and physical theft of hardware or software.
- Sabotage. Sabotage includes denial of service (DoS), production delays, and attacks on data integrity.
- External Attack. Examples of external attacks are malicious cracking, scanning, and probing to gain infrastructure information, demon dialing to locate an unsecured modem line, and the insertion of a malicious code or virus.
Vulnerabilities and Attacks
- Traffic/Trend Analysis. Traffic analysis, which is sometimes called trend analysis, is a technique employed by an intruder that involves analyzing data characteristics (message length, message frequency, and so forth) and the patterns of transmissions (rather than any knowledge of the actual information transmitted) to infer information that might be useful to an intruder.
- Countermeasures to traffic analysis are similar to the countermeasures to crypto-attacks:
- Padding messages. Creating all messages to be a uniform data size by filling empty space in the data
- Sending noise. Transmitting noninformational data elements mixed in with real information to disguise the real message
- Covert channel analysis. Previously described in the “Orange Book Controls” section of this chapter
- Maintenance Accounts. Maintenance accounts that still have factory-set or easily guessed passwords provide a method to break into computer systems. Physical access to the hardware by maintenance personnel can also constitute a security violation. (See the “Hardware Controls” section earlier in this chapter.)
- Data-Scavenging Attacks. Data scavenging is the technique of piecing together information from found bits of data. There are two common types of data-scavenging attacks:
- Keyboard Attacks. Data scavenging through the resources that are available to normal system users who are sitting at the keyboard and using normal utilities and tools to glean information.
- Laboratory Attacks. Data scavenging by using very precise electronic equipment; these are planned, orchestrated attacks.
- IPL Vulnerabilities. The start of a system, the initial program load (IPL), presents very specific system vulnerabilities, whether the system is a centralized mainframe type or a distributed LAN type. During the IPL, the operator brings up the facility’s system. This operator has the ability to put a system into a single-user mode, without full security features, which is a very powerful ability. In this state, an operator can load unauthorized programs or data, reset passwords, rename various resources, or reset the system’s time and date. The operator can also reassign the data ports or communications lines to transmit information to a confederate outside the data center. On a LAN, a system administrator can start the boot sequence from a tape, CD-ROM, or floppy disk - bypassing the operating system’s security on the hard drive.
- Social Engineering. This attack uses social skills to obtain information. Common techniques used by an intruder to gain either physical access or system access are:[*]
- Asserting authority or pulling rank. Professing to have the authority, perhaps supported with altered identification, to enter the facility or system
- Intimidating or threatening. Browbeating the access control subjects with harsh language or threatening behavior to permit access or release information
- Praising, flattering, or sympathizing. Using positive reinforcement to seduce the subjects into giving access or information for system access
- Network Address Hijacking. It may be possible for an intruder to reroute data traffic from a server or network device to a personal machine, either by device address modification or by network address “hijacking.” This diversion enables the intruder to capture traffic to and from the devices for data analysis or modification or to steal the password file from the server and gain access to user accounts. By rerouting the data output, the intruder can obtain supervisory terminal functions and bypass the system logs.
[*]Source: Fighting Computer Crime, Donn B. Parker (Wiley, 1998).
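The message-padding countermeasure to traffic analysis, described above, can be sketched as follows. The block size and the length-prefix framing are arbitrary illustrative choices, not a real protocol:

```python
BLOCK = 256  # uniform message size in bytes (an arbitrary choice)

def pad(message: bytes, size: int = BLOCK) -> bytes:
    """Pad every message to a fixed length so an eavesdropper cannot
    infer anything from message sizes."""
    if len(message) > size - 2:
        raise ValueError("message too long for one block")
    prefix = len(message).to_bytes(2, "big")  # real length, 2 bytes
    return prefix + message + b"\x00" * (size - 2 - len(message))

def unpad(block: bytes) -> bytes:
    """Recover the original message from a padded block."""
    n = int.from_bytes(block[:2], "big")
    return block[2:2 + n]

padded = pad(b"transfer $100")
assert len(padded) == BLOCK             # every message looks the same size
assert unpad(padded) == b"transfer $100"
```

In a real system the padded blocks would also be encrypted; otherwise the padding hides only the length, not the content.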
Maintaining Resource Availability
As we’ve discussed before, availability is one of the three cornerstone tenets of information systems security. In Chapter 3 we discussed the concept of Network Availability using fault-tolerant systems and server clustering. Here let’s look at how backup systems can help guarantee a system’s up time, and support the tenet of availability.
RAID
RAID stands for redundant array of inexpensive disks (or redundant array of independent disks). Its primary purpose is to provide fault tolerance and protection against file-server hard disk failure and the resulting loss of availability and data. Some RAID configurations also improve system performance by distributing reads and writes across multiple disks operating in parallel.
Simply put, RAID separates the data into multiple units and stores it on multiple disks by using a process called striping. It can be implemented as either a hardware or a software solution; each type of implementation has its own issues and benefits.
The RAID Advisory Board has defined three classifications of RAID:
- Failure-Resistant Disk Systems (FRDS)
- Failure-Tolerant Disk Systems
- Disaster-Tolerant Disk Systems
RAID Levels
RAID is implemented in one or a combination of several ways, called levels. They are:
- RAID Level 0 creates one large disk by using several disks. This process is called striping. It stripes data across all disks (but provides no redundancy) by using all the available drive space to create the maximum usable data volume size and to increase the read/write performance. One problem with this level of RAID is that it actually lessens the fault tolerance of the disk system rather than increasing it; the entire data volume is unusable if one drive in the set fails.
- RAID Level 1 is commonly called mirroring. It mirrors the data from one disk or set of disks by duplicating the data onto another disk or set of disks. This process is often implemented by a one-for-one disk-to-disk ratio; each drive is mirrored to an equal drive partner that is continually being updated with current data. If one drive fails, the system automatically gets the data from the other drive. The main issue with this level of RAID is that the one-for-one ratio is very expensive, resulting in the highest cost per megabyte of data capacity. This level effectively doubles the amount of hard drives you need; therefore, it is usually best for smaller-capacity systems.
- RAID Level 2 consists of bit-interleaved data on multiple disks. The parity information is created by using a Hamming code, which detects errors and establishes which part of which drive is in error. It defines a disk drive system with 39 disks - 32 disks of user storage and seven disks of error-recovery coding. This level is not used in practice and was quickly superseded by the more flexible levels of RAID that follow.
- RAID Levels 3 and 4 are discussed together because they function in the same way. The only difference is that Level 3 is implemented at the byte level, whereas Level 4 is usually implemented at the block level. In this scenario, data is striped across several drives, and the parity check bit is written to a dedicated parity drive. This process is similar to RAID 0. They both have a large data volume, but the addition of a dedicated parity drive provides redundancy. If a hard disk fails, the data can be reconstructed by using the bit information on the parity drive. The main issue with these levels of RAID is that the constant writes to the parity drive can create a performance hit. In this implementation, spare drives can be used to replace crashed drives.
- RAID Level 5 stripes the data and the parity information at the block level across all the drives in the set. It is similar to RAID 3 and 4 except that the parity information is written to the next-available drive rather than to a dedicated drive by using an interleave parity. This feature enables more flexibility in the implementation and increases fault tolerance because the parity drive is not a single point of failure, as it is in RAID 3 and 4. The disk reads and writes are also performed concurrently, thereby increasing performance over levels 3 and 4. The spare drives that replace the failed drives are usually hot swappable, meaning they can be replaced on the server while the system is up and running. This is probably the most popular implementation of RAID today.
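The parity mechanism behind RAID Levels 3 through 5 can be illustrated in a few lines of Python: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. This is a conceptual sketch, not a storage driver:

```python
from functools import reduce

def parity(blocks):
    """XOR all data blocks together, byte by byte, to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover the one missing data block: XOR-ing everything that
    remains (survivors plus parity) yields the lost block."""
    return parity(surviving_blocks + [parity_block])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # data stripes on three disks
p = parity([d0, d1, d2])                # parity stripe on a fourth disk

# The disk holding d1 fails; reconstruct its contents:
assert rebuild([d0, d2], p) == d1
```

This is also why RAID 3/4/5 tolerate only a single drive failure: with two blocks missing, the XOR equation has no unique solution, which is what the second parity scheme in RAID 6 addresses.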
Vendors created various other implementations of RAID to combine the features of several RAID levels, although these levels are less common. Level 6 is an extension of Level 5 that allows for additional fault tolerance by using a second independent distributed-parity scheme (i.e., two-dimensional parity). Level 10 is created by combining Level 0 (striping) with Level 1 (mirroring). Level 15 is created by combining Level 1 (mirroring) with Level 5 (interleave). Level 51 is created by mirroring entire Level 5 arrays. Table 6-5 shows the various levels of RAID with terms you will need to remember.
| RAID LEVEL | DESCRIPTION |
|---|---|
| 0 | Striping |
| 1 | Mirroring |
| 2 | Hamming Code Parity |
| 3 | Byte Level Parity |
| 4 | Block Level Parity |
| 5 | Interleave Parity |
| 6 | Second Independent Parity |
| 7 | Single Virtual Disk |
| 10 | Striping Across Multiple Pairs (1+0) |
| 15 | Striping With Parity Across RAID 5 Pairs (1+5) |
| 51 | Mirrored RAID 5 Arrays With Parity (5+1) |
Backup Concepts
A CISSP candidate will also need to know the basic concepts of data backup. The candidate might be presented with questions regarding file selection methods, tape format types, and common problems.
Tape Backup Methods
The purpose of a tape backup method is to protect and restore lost, corrupted, or deleted information - thereby preserving the data’s integrity and ensuring network availability. There are several varying methods of selecting files for backup.
Most backup methods use the Archive file attribute to determine whether a file should be backed up. The backup software selects files by checking whether the Archive file attribute has been set, and then clears the Archive bit after the backup procedure.
The three most common methods are:
- Full Backup Method - This backup method makes a complete backup of every file on the server every time it is run. A full or complete backup backs up all files in all directories stored on the server, regardless of when the last backup was made and whether the files have already been backed up. The Archive file attribute is cleared to mark that the files have been backed up, and the tape or tapes will contain all data and applications. This method is primarily used to create system archive or baseline tape sets.
- Incremental Backup Method - The incremental backup method backs up only those files that have been created or modified since the last backup, in other words, files whose Archive file attribute is set; the attribute is then cleared after the backup. This can result in the backup operator needing several tapes to do a complete restoration, because the last full backup tape plus every incremental tape made since it will need to be restored.
BACKUP METHOD EXAMPLE
A full backup was made on Friday night. This full backup is just what it says - it copied every file on the file server to the tape, regardless of the last time any other backup was made. This type of backup is common for creating full copies of the data for off-site archiving or in preparation for a major system upgrade. On Monday night, another backup was made. If the site uses the incremental backup method, Monday, Tuesday, Wednesday, and Thursday’s backup tapes contain only those files that were altered during that day (Monday’s incremental backup tape has only Monday’s data on it, Tuesday’s backup tape has only Tuesday’s on it, and so on). All backup tapes might be required to restore a system to its full state after a system crash, because some files that changed during the week may exist only on one tape. If the site is using the differential backup method, Monday’s tape backup has the same files that the incremental tape has (Monday is the only day that the files have changed so far). However, on Tuesday, rather than only backing up that day’s files, the site also backed up Monday’s files - creating a longer backup. Although this increases the time required to perform the backup and increases the amount of tapes needed, it does provide more protection from tape failure and speeds up recovery time (see Table 6-6).
Table 6-6: Differential versus Incremental Tape Backup

| BACKUP METHOD | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY |
|---|---|---|---|---|---|
| Full Backup | Not used | Not used | Not used | Not used | All files |
| Differential | Changed File A | Changed Files A and B | Files A, B, and C | Files A, B, C, and D | Not used |
| Incremental | Changed File A | Changed File B | Changed File C | Changed File D | Not used |
- Differential Backup Method - The differential backup method backs up all files that have been created or modified since the last full backup. Unlike an incremental backup, the Archive file attribute is not reset after a differential backup completes, so a changed file is backed up every time the differential runs and the backup set grows until the next full backup. The advantage of this method is that the backup operator should need only the last full backup and the latest differential backup to restore the system.
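The Archive-bit selection logic that distinguishes the three methods can be sketched as follows. The in-memory file table and its field names are illustrative assumptions, not a real filesystem API:

```python
def backup(files, method):
    """Select files the way full/incremental/differential backups do.

    `files` maps filename -> {"archive": bool}, mimicking the Archive
    file attribute described above.
    """
    if method == "full":
        selected = list(files)  # everything, regardless of the bit
    else:
        selected = [name for name, attrs in files.items() if attrs["archive"]]
    if method in ("full", "incremental"):
        for name in selected:
            files[name]["archive"] = False  # clear the bit
    # A differential backup leaves the bit set, so changed files
    # recur on every run until the next full backup clears them.
    return sorted(selected)

table = {"ledger.dat": {"archive": True}, "report.doc": {"archive": False}}
print(backup(table, "incremental"))
```

Running "incremental" twice in a row on the same table returns an empty list the second time, while "differential" would keep returning the same changed files.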
Other Backup Formats
- Compact Disc (CD) Optical Media. Write once, read many (WORM) optical disk “jukeboxes” are used for archiving data that does not change. This is a very good format for a permanent backup. Companies use this format to store data in an accessible form that may need to be retrieved much later, such as legal data. The shelf life of a CD is also longer than that of a tape. Rewritable and erasable (CD-RW) optical disks are sometimes used for backups that require short-term storage of changeable data but faster file access than tape. This format is used more often for very small data sets.
- Zip/Jaz Drives, SyQuest, and Bernoulli Boxes. These types of drives are frequently used for the individual backups of small data sets of specific application data. These formats are very transportable and are often the standard for data exchange in many businesses.
- Tape Arrays. A tape array is a large hardware/software system that uses the RAID technology we discussed earlier in a large device with multiple (sometimes 32 or 64) tapes, configured as a single array. These devices require very specific hardware and software to operate, but they provide a very fast backup and a multitasking backup of multiple targets with considerable fault tolerance.
- Hierarchical Storage Management (HSM). HSM provides a continuous online backup by using optical or tape “jukeboxes,” similar to WORMs. It appears as an infinite disk to the system and can be configured to provide the closest version of an available real-time backup. This is commonly employed in very large data retrieval systems.
Common Backup Issues and Problems
All backup systems share common issues and problems, whether they use a tape or a CD-ROM format. There are three primary backup concerns:
- Slow data transfer of the backup. All backups take time, especially tape backup. Depending upon the volume of data that needs to be copied, full backups to tape can take an incredible amount of time. In addition, the time required to restore the data must also be factored into any disaster recovery plan. Backups that pass data through the network infrastructure must be scheduled during periods of low network utilization, which are commonly overnight, over the weekend, or during holidays. This also requires off-hour monitoring of the backup process.
- Server disk space utilization expands over time. As the amount of data that needs to be copied increases, the length of time to run the backup proportionally increases, and the demand on the system grows as more tapes are required. Sometimes the data volume on the hard drives expands very quickly, thus overwhelming the backup process. Therefore, this process must be monitored regularly.
- The time the last backup was run is never the time of the server crash. With noncontinuous backup systems, data that was entered after the last backup prior to a system crash will have to be recreated. Some systems have been designed to provide online fault tolerance during backup (the old Vortex Retrochron was one), yet, because backup is a postprocessing batch process, some data reentry will need to be performed.
Operational E-Mail Security
The Chapter 4 section “E-mail Security Issues and Approaches” lists the main objectives of e-mail security and describes some cryptographic approaches, such as PEM, PGP, and S/MIME. This section addresses other ways e-mail can pose a threat to the organization’s security posture, as well as some solutions.
E-mails have three basic parts: headers, content, and attachments. Both the content and the attachments are areas of vulnerability. E-mail is the primary means of distributing viruses and other malicious code, including Trojan horses and other executables. The virus danger stems from attachments containing active executable program files (with extensions such as CLASS, OCX, EXE, COM, and DLL) and from macro-enabled data files; such attachments can contain malicious code masquerading as another file type. The attachments do not even need to be opened if the mail client automatically displays them, so you should disable the preview-pane feature in all your mail clients. Virus detection and removal is a major industry and will continue to be so for the foreseeable future.
As shown in Figure 6-2, e-mail relay servers can propagate spam if the relay agent is not correctly configured. Any SMTP mail server in the DMZ should be correctly configured so that its relay agent is not being used by an unauthorized mail server for spamming. If your system is used for spamming, or even if it only has the possibility of being used for spamming, your customers’ Internet service providers may blacklist your domain, and you could be exposed to legal liability.
Figure 6-2: Spam can propagate through the enterprise and onto other networks.
A relay should not be configured to send any message it receives but only mail addressed to its domain, and it must have proper antispam features enabled. It must also employ antivirus and content-filtering applications both incoming and outgoing, to minimize the exposure of the company to liability. Figure 6-2 shows how open e-mail relays can compromise multiple networks.
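A closed-relay decision rule of the kind just described might be sketched as follows. The domain and network values are illustrative assumptions, and this is a conceptual model, not a real mail-server configuration interface:

```python
LOCAL_DOMAINS = {"example.com"}   # domains this server serves (assumption)
AUTHORIZED_NETS = ("10.0.0.",)    # internal client prefixes (assumption)

def accept_for_relay(client_ip: str, rcpt: str) -> bool:
    """Closed-relay policy: deliver mail addressed to our own domains,
    and relay outbound mail only for authorized internal clients.
    Refusing everything else is what keeps the server from being
    abused as an open spam relay."""
    domain = rcpt.rsplit("@", 1)[-1].lower()
    if domain in LOCAL_DOMAINS:
        return True  # inbound mail to us is always accepted
    return any(client_ip.startswith(p) for p in AUTHORIZED_NETS)

assert accept_for_relay("203.0.113.9", "user@example.com")           # inbound ok
assert accept_for_relay("10.0.0.12", "friend@elsewhere.net")         # internal ok
assert not accept_for_relay("203.0.113.9", "victim@elsewhere.net")   # relay refused
```

Production mail servers express the same policy through their relay-restriction settings rather than code, but the accept/refuse logic is essentially this.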
E-Mail Phishing
E-mail is currently the largest attack vector for phishing malware and ID theft exploits. This may change, because Web sites increasingly employ advanced scripting techniques and automated functions; but e-mail is still the hands-down winner.
You can take a number of steps to protect your business from fraudulent e-mail, including the following:
- Standardizing your communications with the customer
- Implementing e-mail authentication
The following sections discuss these topics in more detail.[*]
MALICIOUS CODE VECTORS IN HTML E-MAIL
HTML e-mail and infected Web pages can deliver malicious code to the user in a variety of ways, such as:
- ActiveX Controls - Browser security settings that prevent running unsigned or unverified ActiveX controls can be overridden by launching HTML files from a local disk (as is the case with a cached e-mail message) or changing system registry entries.
- VBScript and JavaScript - Rogue scripts can automatically send data to a Web server without your knowledge or infect a computer for a distributed denial-of-service (DDoS) attack.
- IFrames - An iframe embedded in an e-mail message could be used to run a VB script; this script could access the local file system to read or delete files.
- Images - Embedded images can be dangerous and cause the execution of unwanted code (see steganography). Web bugs can also create privacy issues.
- Flash applets - Some bugs could be used to execute arbitrary code.
Standard Customer Communication Policy
The organization should have an e-mail standard in regards to e-mailing clients and customers. A standard customer communications policy should convey a consistent message and not confuse your customer.
Here are some basic customer e-mail policy standards:
- Don’t send e-mail in HTML format.
- Don’t send attachments.
- Don’t include or ask for personal information.
- Use the full name of the user.
- Don’t include hyperlinks.
- Don’t require HTML e-mail. In e-mail correspondence to customers, an organization should use plain text-formatted e-mail. A company’s e-mail policy should explicitly recommend that plain text be used in all correspondence with customers. If the appearance of your message is important, save it as a Rich Text Format (RTF) or Portable Document Format (PDF) document and post it to your Web site.
- Don’t send attachments. Attachments are the most common way that viruses and Trojans propagate themselves, often accompanied by a social engineering message such as “Here is the document you requested.” An attachment should be an obvious red flag for the recipient. Try not to send attachments if you don’t have to.
- Discourage personal information. Organizations should never ask customers to reply to a company-generated e-mail with their date of birth, credit card data, password, or other personal data. If the e-mail provides a link to a Web site to supply the information, the customer should know not to click it. The organization’s e-mail policy should instruct customers not to submit e-mails that contain sensitive or confidential information and not to use e-mail for specific transaction-related requests. An e-mail auto-responder can respond to all e-mail submitted, thank the sender for the message, acknowledge that it was received, and reiterate your policy about customers not sending confidential or sensitive information.
- Use the customer’s full name. Some companies have a policy of using the customer’s full name in all communication. This is helpful because it is much harder for spammers to create mass-mailing routines that use the customer’s full name than ones that use only the e-mail address or screen name.
- Don’t use hot links. If you use only plain text e-mail, the customer cannot easily click an embedded link.
E-Mail Authentication Systems
E-mail authentication systems may provide an effective means of stopping e-mail and IP spoofing. Without authentication, verification, and traceability, users can never know for certain whether a message is legitimate or forged. E-mail administrators continually have to make educated guesses on behalf of their users on what to deliver, what to block, and what to quarantine.
E-MAIL BOUNCES
One piece of evidence that a spammer may be using your “From:” address is the receipt of hundreds of returned undeliverable messages a day. What’s happening is that a virus or a spammer is inserting your domain into the “From:” address, and the recipients have their servers configured to blindly return, or “bounce,” spam to the apparent sender: you.
READING HEADERS
The following quote comes from Phishing: Cutting the Identity Theft Line by Lininger and Vines:
“Learning to read email headers is overrated. It’s kind of a neat parlor trick, but if you’re to the point where you need to read the headers to find out if it’s an honest message, you should be contacting the alleged sender directly. If the message is real, the headers will support that. If the message is fraudulent, there’s a pretty good chance the headers will still look real. Any header can be forged. The headers of a spam message might go back to the original server it was sent from, but this isn’t common. More likely, the headers will lead you back to the bot the spammer hijacked. Or some innocent third party. Or god@heaven.org.”
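To show what the (easily forged) Received chain looks like in practice, here is a minimal sketch using Python’s standard email module. The message, hosts, and addresses are invented for illustration:

```python
import email

# A minimal sample message; as the quote above notes, any of these
# headers could be forged by a spammer or a hijacked bot.
raw = (
    "Received: from mail.example.net (mail.example.net [203.0.113.5])\r\n"
    "\tby mx.example.com; Mon, 1 Aug 2005 10:00:00 -0400\r\n"
    "Received: from [198.51.100.7] (unknown)\r\n"
    "\tby mail.example.net; Mon, 1 Aug 2005 09:59:58 -0400\r\n"
    "From: alice@example.net\r\n"
    "Subject: Hello\r\n"
    "\r\n"
    "Body text.\r\n"
)

msg = email.message_from_string(raw)
# Each relay prepends its own Received header, so index 0 is the most
# recent hop; walking the list traces the claimed (not proven) path.
for i, hop in enumerate(msg.get_all("Received", [])):
    print(f"hop {i}: {' '.join(hop.split())}")
```

The chain only documents what each relay claims happened; nothing in the format prevents a sender from prepending fabricated hops.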
The four main contenders for authentication are Sender Policy Framework (SPF), SenderID, DomainKeys, and Cisco Identified Internet Mail. The Anti-Phishing Working Group (APWG) estimates that adopting a two-step e-mail authentication standard (say, using both SPF and DomainKeys) could stop 85% of phishing attacks in their current form. Although all four systems rely on changes being made to DNS, they differ in the specific part of the e-mail that each tests:
- SPF - Checks the “envelope sender” of an e-mail message (the domain name of the initiating SMTP server)
- SenderID - Checks after the message data is transmitted and examines several sender-related fields in the header of an e-mail message to identify the “purported responsible address”
- DomainKeys - Checks a header containing a digital signature of the message and verifies the domain of each e-mail sender as well as the integrity of the message
- Cisco Identified Internet Mail - Adds two headers to the RFC 2822 message format to confirm the authenticity of the sender’s address
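As a concrete illustration of the SPF check, the sketch below evaluates a hypothetical SPF TXT record against a connecting server’s IP address. The domain, record, and addresses are invented; a real validator would fetch the record via DNS and also resolve a:, mx:, and include: mechanisms:

```python
import ipaddress

# Hypothetical SPF TXT record that example.com might publish in DNS.
spf_record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.10 -all"

def check_spf(record, sender_ip):
    """Return 'pass' if sender_ip matches an ip4 mechanism; otherwise
    return the result implied by the 'all' qualifier ('-all' => 'fail')."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:          # skip the "v=spf1" version tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return "pass"
        elif term in ("-all", "~all", "?all", "+all", "all"):
            return {"-all": "fail", "~all": "softfail",
                    "?all": "neutral"}.get(term, "pass")
    return "neutral"

print(check_spf(spf_record, "192.0.2.55"))    # inside the authorized /24
print(check_spf(spf_record, "203.0.113.9"))   # not an authorized sender
```

The trailing “-all” is what gives SPF its teeth: any SMTP server not listed by the domain owner fails the check, which is exactly the envelope-sender test described above.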
Eventually, all e-mail will have to comply with some type of sender-verification method in order to be delivered. Successful deployment of e-mail authentication will probably be achieved in stages, incorporating multiple approaches and technologies.
[*]Excerpted from Phishing: Cutting the Identity Theft Line, Lininger and Vines (Wiley, 2005). Used by permission.
Fax Security
In some ways, fax security awareness has taken a back seat to other areas of intraorganizational communications security, such as e-mail and IM security. Also, the use of fax servers has helped curtail the vulnerability of having printed faxes lying around the office. But because fax technology is still widespread, the CISSP candidate will need to know a few basics about fax security, especially the threats to fax servers.
Since fax machines are often used to transmit sensitive information, they present security issues. Guidelines and procedures on the use of faxes, for receiving as well as sending, must be incorporated into the security policy of the organization. Because a received fax sits in a physical inbox until retrieved, policies similar to those for sensitive document output should be implemented.
Fax servers electronically route a received fax to the e-mail inbox of the destination addressee. Since the fax stays in electronic form, this helps to remediate the sensitive document issue. This also helps save money by cutting down on paper requirements and shredding needs.
One problem with this approach is that users tend to print out received faxes, thereby recreating the issue. If necessary, the print feature can be disabled on the fax server configuration so that the viewing of the document retains the proper document security classification.
The fax server should also be monitored and audited, and encryption of the fax transmission may be implemented in high-security environments. The organization may employ a fax encryptor, an encryption mechanism that encrypts all fax transmissions at the Data Link Layer and helps ensure that all incoming and outgoing fax data is encrypted at its source.
Assessment Questions
You can find the answers to the following questions in Appendix A.
1. Which of the following places the four systems security modes of operation in order, from the most secure to the least?
2. Why is security an issue when a system is booted into single-user mode?
3. An audit trail is an example of what type of control?
4. Which of the following media controls is the best choice to prevent data remanence on magnetic tapes or floppy disks?
5. Which of the following choices is not a security goal of an audit mechanism?
6. Which of the following tasks would normally be a function of the security administrator, not the system administrator?
7. Which of the following is a reason to institute output controls?
8. Which of the following statements is not correct about reviewing user accounts?
9. Which of the following terms most accurately describes the trusted computing base (TCB)?
10. Which of the following statements is accurate about the concept of object reuse?
11. Using prenumbered forms to initiate a transaction is an example of what type of control?
12. Which of the following choices is the best description of operational assurance?
13. Which of the following is not a proper media control?
14. Which of the following choices is considered the highest level of operator privilege?
15. Which of the following choices most accurately describes a covert storage channel?
16. Which of the following would not be a common element of a transaction trail?
17. Which of the following would not be considered a benefit of an incident-handling capability?
18. Which of the following is the best description of an audit trail?
19. Which of the following best describes the function of change control?
20. Which of the following is not an example of intentionally inappropriate operator activity?
21. Which book of the Rainbow Series addresses the Trusted Computer System Evaluation Criteria (TCSEC)?
22. Which term best describes the concept of least privilege?
23. Which of the following best describes a threat as defined in the Operations Security domain?
24. Which of the following is not a common element of user account administration?
25. Which of the following is not an example of using a social engineering technique to gain physical access to a secure facility?
26. Which statement about covert channel analysis is not true?
27. “Separation of duties” embodies what principle?
28. Covert Channel Analysis, Trusted Facility Management, and Trusted Recovery are parts of which book in the TCSEC Rainbow Series?
29. How do covert timing channels convey information?
30. Which of the following would be the best description of a clipping level?
31. Which of the following backup methods will probably require the backup operator to use the greatest number of tapes for a complete system restoration if a different tape is used every night in a five-day rotation?
32. Which level of RAID is commonly referred to as disk mirroring?
33. Which is not a common element of an e-mail?
34. Which of the following choices is the best description of a fax encryptor?
35. Which of the following statements is true about e-mail headers?
Answers
1. Answer: b. Dedicated Mode, System-High Mode, Compartmented Mode, and Multilevel Mode.
2. Answer: a. When the operator boots the system in single-user mode, the user front-end security controls are not loaded. This mode should be used only for recovery and maintenance procedures, and all operations should be logged and audited.
3. Answer: c. An audit trail is a record of events used to piece together what has happened and to enforce individual accountability by creating a reconstruction of events. Audit trails can, however, be used to assist in the proper implementation of the other controls.
4. Answer: b. Degaussing is recommended as the best method for purging most magnetic media. Answer a is not recommended because the application may not completely overwrite the old data. Answer c is a rarely used method of media destruction, and acid solutions should be used in a well-ventilated area only by qualified personnel. Answer d is incorrect.
5. Answer: b. Answer b is a distracter; the other answers reflect proper security goals of an audit mechanism.
6. Answer: c. Reviewing audit data should be a function separate from the day-to-day administration of the system.
7. Answer: b. In addition to being used as a transaction control verification mechanism, output controls are used to ensure that output, such as printed reports, is distributed securely. Answer a is an example of change control, c is an example of application controls, and d is an example of recovery controls.
8. Answer: a. Reviews can be conducted by, among others, in-house systems personnel (a self-audit), the organization’s internal audit staff, or external auditors.
9. Answer: d. The trusted computing base (TCB) represents the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. Answer a describes the reference monitor concept, answer b refers to a sensitivity label, and answer c describes formal verification.
10. Answer: b. Object reuse mechanisms ensure that system resources are allocated and assigned among authorized users in a way that prevents the leak of sensitive information, and they ensure that the authorized user of the system does not obtain residual information from system resources. Answers a and c are incorrect, and answer d refers to authorization: the granting of access rights to a user, program, or process.
11. Answer: b. Prenumbered forms are an example of preventative controls. They can also be considered a transaction control and an input control.
12. Answer: c. Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively. Operational assurance addresses whether the system’s technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. Answer a is a description of an audit trail review, answer b describes a benefit of incident handling, and answer d describes a personnel control.
13. Answer: d. Sanitization is the process of removing information from used data media to prevent data remanence; different media require different types of sanitization. All the others are examples of proper media controls.
14. Answer: c. The three common levels of operator privileges are based on the concept of “least privilege.” Answer d is a distracter.
15. Answer: d. A covert storage channel typically involves a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels. Answer a is a partial description of a covert timing channel, and answer b is a generic definition of a channel (a channel may also refer to the mechanism by which the path is affected). Answer c is a higher-level definition of a covert channel; while a covert storage channel fits that definition generically, answer d is the proper specific definition.
16. Answer: c. Why the transaction was processed is not initially a concern of the audit log, although it may be investigated later. The other three elements are all important information that the audit log of the transaction should record.
17. Answer: a. The primary benefits of employing an incident-handling capability are containing and repairing damage from incidents and preventing future damage. Answer a is a benefit of employing “separation of duties” controls.
18. Answer: a. An audit trail is a set of records that collectively provide documentary evidence of processing, used to aid in tracing from original transactions forward to related records and reports and/or backward from records and reports to their component source transactions. Answer b is a description of a multilevel device, and answer c refers to a network reference monitor. Answer d is incorrect because audit trails are detective, whereas answer d describes a preventative process: access control.
19. Answer: a. Answer b describes least privilege, answer c describes record retention, and answer d describes separation of duties.
20. Answer: a. Although operator error (answer a) is certainly an example of a threat to a system’s integrity, it is considered unintentional loss, not an intentional activity.
21. Answer: b.
22. Answer: a. The least privilege principle requires that each subject in a system be granted the most restrictive set of privileges (or lowest clearance) needed for the performance of authorized tasks. Answer b describes separation of privilege, answer c describes a security level, and answer d is a distracter.
23. Answer: a. Answer b describes a vulnerability, answer c describes an asset, and answer d describes risk management.
24. Answer: b. For proper separation of duties, the function of user account establishment and maintenance should be separated from the function of initiating and authorizing the creation of the account. User account management focuses on identification, authentication, and access authorizations.
25. Answer: d. Answers a, b, and c denote common tactics used by an intruder to gain either physical access or system access. The salami fraud is an automated fraud technique: a programmer creates or alters a program to move small amounts of money into his personal bank account. The amounts are intended to be so small as to go unnoticed, such as rounding in foreign currency exchange transactions; hence the name, a reference to slicing a salami.
26. Answer: c. Orange Book B2 class systems do not need to be protected from covert timing channels; covert channel analysis must be performed for B2 class systems to protect against covert storage channels only. B3 class systems need to be protected from both covert storage channels and covert timing channels.
27. Answer: d. Separation of duties means, for example, that operators are prevented from both generating and verifying transactions alone. A task might be divided into smaller tasks to accomplish this, or, in the case of an operator with multiple duties, the operator makes a logical, functional job change when performing such conflicting duties. Answer a is need-to-know, answer b is dual control, and answer c is job rotation.
28. Answer: b. The Red Book (answer a) is the Trusted Network Interpretation (TNI) summary of network requirements (described in the Telecommunications and Network Security domain); the Green Book (answer c) is the Department of Defense (DoD) Password Management Guideline; and the Dark Green Book (answer d) is The Guide to Understanding Data Remanence in Automated Information Systems.
29. Answer: d. A covert timing channel alters the timing of parts of the system so that it can be used to communicate information covertly (outside the normal security function). Answer a describes the use of a covert storage channel, answer b is a technique to combat the use of covert channels, and answer c is the Orange Book requirement for B3, B2, and A1 evaluated systems.
30. Answer: a. This description of a clipping level is the best. Answer b is not correct because one reason to create clipping levels is to prevent auditors from having to examine every error. Answer c is a common use for clipping levels but is not a definition. Answer d is a distracter.
31. Answer: c. Most backup methods use the Archive file attribute to determine whether a file should be backed up. The backup software determines which files need to be backed up by checking whether the Archive attribute has been set, and it then resets the Archive bit after the backup procedure. The incremental backup method backs up only files that have been created or modified since the last backup, because the Archive attribute is reset each time. This can result in the backup operator needing several tapes to do a complete restoration, because every tape with changed files, as well as the last full backup tape, will need to be restored. A full or complete backup (answer a) backs up all files in all directories stored on the server, regardless of when the last backup was made and whether the files have already been backed up; the Archive attribute is changed to mark that the files have been backed up, and the tape or tapes will hold all data and applications. It is an incorrect answer for this question, however, because answers b and c additionally require differential or incremental tapes on top of a full backup tape. The differential backup method (answer b) backs up only files that have been created or modified since the last backup, like an incremental backup; the difference is that the Archive attribute is not reset after the differential backup completes, so a changed file is backed up every time the differential backup runs. The backup set grows in size until the next full backup, as these files continue to be backed up during each subsequent differential backup. The advantage of this method is that the backup operator should need only the full backup and the most recent differential backup to restore the system. Answer d is a distracter.
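The archive-bit behavior described above can be sketched in a short simulation; the file names and the five-day rotation below are invented for illustration:

```python
def run_backups(files, kinds, changed_per_day):
    """Simulate archive-bit-driven backups.
    files: dict of name -> archive bit (True = needs backup)
    kinds: backup type for each day ('full', 'incr', or 'diff')
    changed_per_day: files modified before each day's backup
    Returns one list of backed-up files per day (the 'tapes')."""
    tapes = []
    for kind, changed in zip(kinds, changed_per_day):
        for name in changed:                 # modifying a file sets its bit
            files[name] = True
        if kind == "full":
            tape = sorted(files)             # everything, bit set or not
            for name in files:
                files[name] = False          # full backup clears all bits
        else:
            tape = sorted(n for n, bit in files.items() if bit)
            if kind == "incr":               # incremental clears the bit;
                for name in tape:            # differential leaves it set
                    files[name] = False
        tapes.append(tape)
    return tapes

files = {"a": True, "b": True, "c": True}
# Full backup Monday, then incrementals Tuesday through Friday:
incr = run_backups(dict(files), ["full"] + ["incr"] * 4,
                   [[], ["a"], ["b"], ["a"], ["c"]])
# Full backup Monday, then differentials Tuesday through Friday:
diff = run_backups(dict(files), ["full"] + ["diff"] * 4,
                   [[], ["a"], ["b"], ["a"], ["c"]])
print(incr)  # each incremental tape holds only that day's changes
print(diff)  # each differential tape holds all changes since Monday
```

Restoring Friday’s state from the incremental run needs all five tapes, while the differential run needs only the Monday full tape plus Friday’s tape, which is the trade-off the answer describes.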
32. Answer: b. Redundant Array of Inexpensive Disks (RAID) is a method of enhancing hard disk fault tolerance that can also improve performance. RAID 1 maintains a complete copy of all data by duplicating each hard drive. Performance can suffer in some implementations of RAID 1, and twice as many drives are required. Novell developed a type of disk mirroring called disk duplexing, which uses multiple disk controller cards, increasing both performance and reliability. RAID 0 (answer a) gives some performance gains by striping the data across multiple drives but reduces fault tolerance, because the failure of any single drive disables the whole volume. RAID 3 (answer c) uses a dedicated error-correction disk called a parity drive and stripes the data across the other data drives. RAID 5 (answer d) uses all disks in the array for both data and error correction, increasing both storage capacity and performance.
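The error correction used by RAID 3 and RAID 5 is XOR parity: the parity block is the XOR of the corresponding data blocks, so any one lost block can be rebuilt from the survivors. A minimal sketch (block contents invented for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks striped across three drives, parity on a fourth.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# Drive 2 fails: its block is rebuilt from the survivors plus parity,
# because d1 ^ d3 ^ (d1 ^ d2 ^ d3) == d2.
rebuilt = xor_blocks([d1, d3, parity])
print(rebuilt == d2)  # True
```

RAID 3 keeps all parity on one dedicated drive, while RAID 5 rotates the parity blocks across every drive in the array; the XOR arithmetic is identical in both.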
33. Answer: c. E-mails have three basic parts: attachments, contents, and headers. Both the contents and the attachments are areas of vulnerability.
34. Answer: b. A fax encryptor is an encryption mechanism that encrypts all fax transmissions at the Data Link layer and helps ensure that all incoming and outgoing fax data is encrypted at its source.
35. Answer: c. The header may point back to the hijacked spambot’s mail server. E-mail headers can be spoofed, fraudulent e-mail is not always identifiable from its headers, and the header does not always point back to the original spammer.