Network Analysis, Architecture and Design, Second Edition (The Morgan Kaufmann Series in Networking)

5.4 Component Architectures

Component architecture is a description of how and where each function of a network is applied within that network. It consists of a set of mechanisms (hardware and software) by which that function is applied to the network, where each mechanism may be applied, and a set of internal relationships between these mechanisms.

Each function of a network represents a major capability of that network. This book explores four such functions: addressing/routing, network management, performance, and security. Other general functions, such as infrastructure and storage, could also be developed as component architectures, and there can certainly be functions specific to each network that you may wish to develop.

Mechanisms are hardware and software that help a network achieve each capability. Some example mechanisms are shown in Figure 5.3 and are examined in detail in Chapters 6 through 9, which deal with component architectures.

Function: Addressing/Routing
Capability: Provides robust and flexible connectivity between devices
Example mechanisms used to achieve capability: Addressing (ways to allocate and aggregate address space); routing (routers, routing protocols, ways to manipulate routing flows)

Function: Network Management
Capability: Provides monitoring, configuring, and troubleshooting for the network
Example mechanisms used to achieve capability: Network management protocols; network management devices; ways to configure network management in the network

Function: Performance
Capability: Provides network resources to support requirements for capacity, delay, and RMA
Example mechanisms used to achieve capability: Quality of service; service-level agreements; policies

Function: Security
Capability: Restricts unauthorized access, usage, and visibility within the network to reduce the threat and effects of attacks
Example mechanisms used to achieve capability: Firewalls; security policies and procedures; filters and access control lists

Figure 5.3: Functions, capabilities, and mechanisms.

Internal relationships consist of interactions (trade-offs, dependencies, and constraints), protocols, and messages between mechanisms, and they are used to optimize each function within the network. Trade-offs are decision points in the development of each component architecture. They are used to prioritize and decide which mechanisms are to be applied. Dependencies occur when one mechanism relies on another mechanism for its operation. Constraints are restrictions that one mechanism places on another. These relationship characteristics help describe the behaviors of the mechanisms within a component architecture, as well as the overall behavior of the function itself.

Developing a component architecture consists of determining the mechanisms that comprise each component, how each mechanism works, and how that component works as a whole. For example, consider some of the mechanisms for performance: QoS, service-level agreements (SLAs), and policies. To determine how performance will work for a network, we will need to determine how each mechanism works and how they work together to provide performance for the network and system. In Figure 5.4, QoS is applied at each network device to control its resources in support of SLAs and policies; SLAs tie subscribers to service levels, and policies (usually located at one or more databases within the network) provide a high-level framework for service levels, SLAs, and QoS.

Figure 5.4: Examples of performance mechanisms in a network.
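As a rough illustration of how these three mechanisms might relate in practice, the following Python sketch models a policy, an SLA, and a per-device QoS conformance check. It is illustrative only; the names Policy, SLA, capacity_mbps, max_delay_ms, and qos_conforms are hypothetical and are not terminology from this book.

```python
# Illustrative sketch: a minimal data model relating policies, SLAs, and per-device QoS.
from dataclasses import dataclass

@dataclass
class Policy:
    """High-level statement about resource allocation, e.g., held in a policy database."""
    name: str
    statement: str

@dataclass
class SLA:
    """Ties a subscriber to service levels (hypothetical fields)."""
    subscriber: str
    capacity_mbps: float    # committed capacity
    max_delay_ms: float     # delay bound

def qos_conforms(sla: SLA, measured_mbps: float, measured_delay_ms: float) -> bool:
    """Per-device QoS check: does a measured flow stay within its SLA?"""
    return measured_mbps <= sla.capacity_mbps and measured_delay_ms <= sla.max_delay_ms

policy = Policy("premium-service", "premium subscribers receive low-delay treatment")
sla = SLA(subscriber="branch-office-12", capacity_mbps=10.0, max_delay_ms=40.0)
print(qos_conforms(sla, measured_mbps=8.2, measured_delay_ms=25.0))   # True
```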

Interactions within a component are based on requirements that mechanisms have to communicate and operate with each other. Using the previous example for performance, we would determine whether there are any information flows between QoS, SLAs, and policies. If such flows exist (and usually they do), we would determine where and how these flows occur. This is important to know because, when we develop the architecture for a component, the communication requirements among its mechanisms help drive that architecture. Figure 5.5 gives an example of where interactions occur between performance mechanisms.

Figure 5.5: Interactions between performance mechanisms.

Trade-offs are decision points in the development of each component; they are used to prioritize and choose between features and functions of each and to optimize each component's architecture. There are often several trade-offs within a component, and much of the refining of the network architecture occurs here. For example, a common trade-off in network management is in choosing between centralizing and distributing management capabilities. As mentioned in Chapter 1, trade-offs are fundamental to network architecture and network design. We will, therefore, spend much time on trade-offs in this chapter and throughout the rest of the book.

Dependencies are requirements that one mechanism has on one or more other mechanisms in order to function. Determining such dependencies will help us to decide when trade-offs are acceptable or unacceptable. For example, there are dependencies between addressing and routing, as proper routing function will depend on how internal and external addressing is done.

Constraints are a set of restrictions within each component architecture. For example, SLAs are constrained by the type and placement of QoS within the network. Such constraints are useful in determining the boundaries under which each component operates.

Whereas the functions described in this chapter are addressing/routing, network management, performance, and security, there are often other functions, such as network storage, computing, or application services, that can also be described by this component architecture approach. Functions may be defined by you and may be specific to the network on which you are working. Experience has shown that addressing/routing, network management, performance, and security are common across most networks. By developing the relationships between these functions, we begin to develop a high-level, end-to-end view of the network and system.

Developing component architectures requires input, in terms of sets of user, application, and device requirements; estimated traffic flows; and architectural goals defined for each network. For example, user, application, and device requirements for performance and security are used as criteria to evaluate mechanisms for the performance and security component architectures. This input forms a common foundation for all network functions, from which all component architectures are developed. Figure 5.6 illustrates that component architectures, requirements, flows, and goals are all interwoven through the reference architecture.

Figure 5.6: Component architectures and the reference architecture are derived from network requirements, flows, and goals.

Based on the requirements, flows, and goals for the network, we will evaluate a set of candidate mechanisms for each function and choose the desired mechanisms.

To facilitate determining where each mechanism may be applied, the network is divided into regions. Network functions have a common basis in traffic flows; thus, characterizing regions by traffic flows allows the same set of regions to be applied in a similar fashion across all functions.

Commonly used regions include access (edge), distribution, core (backbone), external interfaces, and demilitarized zones (DMZs). From a traffic flow perspective, access regions are where most traffic flows are generated and terminated. Distribution regions are where traffic flows are aggregated and terminated for common services, such as application or storage servers. Core regions provide transit for aggregates of traffic flows; individual traffic flows are rarely generated or terminated in this region. External interfaces or DMZs are aggregation points for traffic flows external to that network.

The characteristics of each region help identify where mechanisms are applied. For example, since traffic flows are often generated and terminated at access regions, performance mechanisms that work on a per-flow basis, such as access control and traffic shaping, would apply to these regions. In the core region, where traffic flows are aggregated, performance mechanisms that work on groups or classes of flows, such as differentiated services, would apply.
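To make this placement more concrete, the following Python snippet maps each network region to the kinds of performance mechanisms that would typically apply there. It is illustrative only; the mechanism names in the mapping are examples drawn from the discussion above, not a prescribed list.

```python
# Illustrative mapping of network regions to typical performance mechanisms.
REGION_MECHANISMS = {
    "access (edge)":          ["access control (per flow)", "traffic shaping (per flow)"],
    "distribution":           ["aggregation of flows for common services"],
    "core (backbone)":        ["differentiated services (per class of flows)"],
    "external interface/DMZ": ["handling of aggregated external traffic flows"],
}

for region, mechanisms in REGION_MECHANISMS.items():
    print(f"{region:24s} -> {', '.join(mechanisms)}")
```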

Once mechanisms have been chosen and applied, we can then determine and analyze the internal relationships between these mechanisms.

In practice, mechanisms, locations, and relationship characteristics are listed in tabular form, with one set of tables for each component architecture. An example of internal relationships, in this case the set of dependencies between performance mechanisms, is presented in Figure 5.7.

Dependencies between Performance Mechanisms

QoS dependencies on SLAs: e.g., QoS at network devices may need to enforce SLA values
QoS dependencies on policies: e.g., QoS at network devices may need to enforce policies
SLA dependencies on QoS: e.g., can an SLA be enforced via available QoS mechanisms?
SLA dependencies on policies: e.g., SLAs may need to map to network policies
Policy dependencies on QoS: e.g., can a policy be enforced via available QoS mechanisms?
Policy dependencies on SLAs

Figure 5.7: Sample chart for listing dependencies between performance mechanisms.

Each relationship characteristic should have its own chart, showing whether or not it applies and, if it does, how it works between the mechanisms of that component. For example, let's consider a chart on dependencies. In developing this chart, we would start by looking at the mechanisms for a particular component, for example, performance. We would then consider any dependencies between mechanisms within that component.

We would continue down the list, doing the same thing for each type of interaction. We will explore these relationships in detail in Chapters 6 through 9.
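As an example of keeping such charts in a form that is easy to review programmatically, the following sketch (illustrative only, not from the book) records the dependencies of Figure 5.7 as a nested mapping.

```python
# Dependencies between performance mechanisms, recorded as
# mechanism -> {mechanism it depends on: description of the dependency}.
PERFORMANCE_DEPENDENCIES = {
    "QoS": {
        "SLAs":     "QoS at network devices may need to enforce SLA values",
        "Policies": "QoS at network devices may need to enforce policies",
    },
    "SLAs": {
        "QoS":      "an SLA must be enforceable via available QoS mechanisms",
        "Policies": "SLAs may need to map to network policies",
    },
    "Policies": {
        "QoS":      "a policy must be enforceable via available QoS mechanisms",
        "SLAs":     "policy dependencies on SLAs",
    },
}

for mechanism, deps in PERFORMANCE_DEPENDENCIES.items():
    for other, description in deps.items():
        print(f"{mechanism} depends on {other}: {description}")
```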

As we go through Sections 5.4.1 through 5.4.4, many terms will be introduced for the mechanisms of each network function. Although each mechanism is presented briefly in this chapter, we will discuss them in detail over the next four chapters.

In developing each component architecture, we will need to consider the dynamic nature of networks. For example, how will the network be reconfigured to handle a security attack? What happens when traffic flows change and congestion occurs? How much of each function can be automated?

Each component architecture will have policies associated with it. Security, routing, performance, and network management policies can have elements in common. Documenting policies early in the architectural process will help you understand the relationships between component architectures.

5.4.1 Addressing/Routing Component Architecture

Addressing is applying identifiers (addresses) to devices at various protocol layers (e.g., data link and network), whereas routing is learning about the connectivity within and between networks and applying this connectivity information to forward IP packets to their destinations.

The addressing/routing component architecture describes how user and management traffic flows are forwarded through the network and how hierarchy, separation, and grouping of users and devices are supported.

This component architecture is important; it determines how user and management traffic flows are propagated throughout the network. As you can imagine, this will be closely tied to the network management architecture (for management flows) and performance architecture (for user flows). This architecture also helps determine the degrees of hierarchy and interconnectivity in the network, as well as how areas of the network are subdivided.

Several addressing and routing mechanisms could be considered for this component architecture. From an addressing perspective, mechanisms include subnetting, variable-length subnetting, supernetting, dynamic addressing, private addressing, virtual LANs (VLANs), IP version 6 (IPv6), and network address translation (NAT). From a routing (forwarding) perspective, mechanisms include switching and routing, default route propagation, classless interdomain routing (CIDR), multicasts, mobile IP, route filtering, peering, routing policies, confederations, and Interior Gateway Protocol (IGP) and Exterior Gateway Protocol (EGP) selection and location.

Depending on the type of network being developed, the set of candidate addressing and routing mechanisms for a component architecture can be quite different. For example, a service-provider network may focus on mechanisms such as supernetting, CIDR, multicasts, peering, routing policies, and confederations, whereas the focus of a medium-sized enterprise network would more likely be on classful or private addressing and NAT, VLANs, switching, and the selection and locations of routing protocols (particularly IGPs).

In terms of addressing, classful addressing is applying predetermined mask lengths to addresses in order to support a range of network sizes. Subnetting is using part of the device (host) address space to create another layer of hierarchy. Variable-length subnetting is subnetting in which multiple subnet masks are used, creating subnets of different sizes. Supernetting is aggregating network addresses by changing the address mask to decrease the number of bits allocated to the network. Dynamic addressing is providing addresses on demand. Private IP addressing is using IP addresses that cannot be advertised and forwarded by network and user devices in the public domain (i.e., the Internet). VLANs are addresses that can be dynamically changed and reconfigured to accommodate changes in the network. IPv6 is the next generation of IP addressing. NAT is the mapping of IP addresses from one realm to another, typically between public and private address spaces.
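Several of these addressing mechanisms can be demonstrated with Python's standard ipaddress module. The sketch below is an illustration only; the address blocks are examples.

```python
# Demonstrating subnetting, variable-length subnetting, supernetting,
# and private addressing with the standard ipaddress module.
import ipaddress

# Subnetting: use part of the host address space to create another layer of hierarchy.
net = ipaddress.ip_network("172.16.0.0/16")
subnets = list(net.subnets(new_prefix=24))            # 256 subnets of size /24
print(subnets[0], subnets[-1])                        # 172.16.0.0/24 172.16.255.0/24

# Variable-length subnetting: subnets of different sizes from the same block.
vlsm = [ipaddress.ip_network("172.16.1.0/26"), ipaddress.ip_network("172.16.1.64/27")]
print(vlsm)

# Supernetting: aggregate networks by decreasing the bits allocated to the network.
aggregate = ipaddress.collapse_addresses(
    [ipaddress.ip_network("10.0.0.0/24"), ipaddress.ip_network("10.0.1.0/24")]
)
print(list(aggregate))                                # [IPv4Network('10.0.0.0/23')]

# Private addressing: addresses that cannot be advertised in the public Internet.
print(ipaddress.ip_address("10.1.2.3").is_private)    # True
```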

In terms of forwarding, switching and routing are common mechanisms. Default route propagation is a technique used to inform the network of the default route (or route of last resort). CIDR is routing based on arbitrary address mask sizes (classless). Multicasts are packets targeted toward multiple destinations. Mobile IP is providing network (IP) connectivity for devices that move, roam, or are portable. Route filtering is applying filters (statements) to hide networks from the rest of an autonomous system or to add, delete, or modify routes in the routing table. Peering is an arrangement between networks or autonomous systems (peers) to mutually pass traffic and adhere to routing policies, which are high-level statements about relationships between networks or autonomous systems. IGP and EGP selection and location is comparing and contrasting protocols in order to select those appropriate for the network and to decide where to apply them in the network.
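To make the forwarding mechanisms more concrete, the following sketch (illustrative only; the routes and next-hop descriptions are hypothetical) implements a classless (CIDR) longest-prefix-match lookup with a default route, plus a simple route filter.

```python
# Classless forwarding as longest-prefix match, with a default route and route filtering.
import ipaddress

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"):    "default route (route of last resort)",
    ipaddress.ip_network("10.0.0.0/8"):   "toward the enterprise core",
    ipaddress.ip_network("10.20.0.0/16"): "toward a distribution region",
}

def lookup(destination: str) -> str:
    """Forward based on the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

def filter_routes(routes, hidden=ipaddress.ip_network("10.20.0.0/16")):
    """Route filtering: hide a network (and its subnets) from the rest of an AS."""
    return {net: nh for net, nh in routes.items() if not net.subnet_of(hidden)}

print(lookup("10.20.1.5"))                 # toward a distribution region
print(lookup("192.0.2.1"))                 # default route (route of last resort)
print(list(filter_routes(ROUTES)))         # 10.20.0.0/16 no longer advertised
```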

Two types of interactions between mechanisms are predominant within this component architecture: trade-offs between addressing and routing mechanisms and trade-offs within addressing or within routing. Addressing and routing mechanisms influence the selection of routing protocols and where they are applied. They also form an addressing hierarchy upon which the routing hierarchy is overlaid.

Areas of the network where dynamic addressing, private addressing, and NAT mechanisms are applied will affect how routing will (or will not) be provided to those areas.

The addressing/routing component architecture will be discussed in detail in Chapter 6.

5.4.2 Network Management Component Architecture

Network management is providing functions to control, plan, allocate, deploy, coordinate, and monitor network resources. Network management is a part of most or all of the devices in the network. As such, the network management architecture is important because it determines how and where management mechanisms will be applied in the network. It is likely that the other architectural components (e.g., IT security) will require some degree of monitoring and management and will interact with network management.

The network management component architecture describes how the system, including the other network functions, is monitored and managed. This consists of an information model that describes the types of data that are used to monitor and manage each of the elements in the system, mechanisms to connect to devices in order to access data, and the flows of management data through the network.

Network management mechanisms include monitoring and data collection; instrumentation to access, transfer, act upon, and modify data; device and service configuration; and data processing, display, and storage. More specifically, the mechanisms considered here are monitoring, instrumentation, configuration, the FCAPS components, in-band and out-of-band management, centralized and distributed management, scaling of network management data, checks and balances, MIB selection, and integration into OSS.

Whereas monitoring is obtaining values for end-to-end, per-link, and per-element network management characteristics, instrumentation is determining the set of tools and utilities needed to monitor and probe the network for management data. Configuration is setting parameters in a network device for operation and control of that element. FCAPS is the set of fault, configuration, accounting, performance, and security management components. In-band and out-of-band management refers to whether management data flow along the same path as user traffic or along a separate path. Centralized and distributed management refers to whether the management system resides on a single hardware platform or is distributed across the network among multiple platforms. Scaling network management data is determining how much network capacity should be reserved for network management. Checks and balances is using multiple mechanisms to verify that variables are represented correctly. MIB selection is determining which MIBs to use and how much of each MIB to use. Integration into OSS refers to how the management system will communicate with higher-level operations support systems (OSSs).
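As a rough, hedged example of scaling network management data, the following back-of-the-envelope calculation estimates the average capacity consumed by periodic polling; every number here is an assumption to be replaced with values for the actual network.

```python
# Back-of-the-envelope estimate of capacity reserved for management polling.
devices            = 200    # managed network devices (assumption)
variables_per_poll = 30     # management variables collected per device (assumption)
bytes_per_variable = 100    # approximate on-the-wire size per polled value (assumption)
poll_interval_s    = 60     # polling period in seconds (assumption)

bits_per_cycle = devices * variables_per_poll * bytes_per_variable * 8
average_bps    = bits_per_cycle / poll_interval_s
print(f"average management load: {average_bps / 1e3:.0f} kb/s")   # about 80 kb/s
```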

As we will see in Chapter 7, many interactions exist within the network management component. These include trade-offs between routing management traffic flows along the same paths as user traffic flows (in-band) and routing them along separate paths (out-of-band), and between centralizing all management mechanisms on a single hardware platform and distributing them throughout the network on multiple platforms.

5.4.3 Performance Component Architecture

Performance consists of the set of mechanisms used to configure, operate, manage, provision, and account for resources in the network that allocate performance to users, applications, and devices. This includes capacity planning and traffic engineering, as well as various service mechanisms. Performance may be applied at any of the protocol layers and often applies across multiple layers. Therefore, there may be mechanisms targeted toward the network layer, the physical layer, or the data-link layer, as well as at the transport layer and above.

The performance component architecture describes how network resources will be allocated to user and management traffic flows. This consists of prioritizing, scheduling, and conditioning traffic flows within the network, either end-to-end between source and destination for each flow or between network devices on a per-hop basis. It also consists of mechanisms to correlate user, application, and device requirements to traffic flows, as well as traffic engineering, access control, QoS, policies, and SLAs.

QoS is determining, setting, and acting on priority levels for traffic flows. Resource control refers to mechanisms that will allocate, control, and manage network resources for traffic. SLAs are informal or formal contracts between a provider and user that define the terms of the provider's responsibility to the user and the type and extent of accountability if those responsibilities are not met. Policies are sets (again, formal or informal) of high-level statements about how network resources are to be allocated among users.

This important architectural component provides the mechanisms to control the network resources that are allocated to users, applications, and devices. This may be as simple as determining the amount of capacity that is available in various regions of the network or as complex as determining the capacity, delay, and RMA (reliability, maintainability, and availability) characteristics on a per-flow basis.

As we will discuss in detail in Chapter 8, interactions within this component architecture include the trade-offs between end-to-end and per-hop prioritization, scheduling, and conditioning of traffic flows, as well as whether flows are treated individually, are aggregated into groups, or a combination of the two. As we will see, these interactions are closely coupled at the network (IP) layer to the use of differentiated services (DiffServ) and integrated services (IntServ) within the network. Differentiated and integrated services are performance mechanisms standardized through the Internet Engineering Task Force that target individual and aggregate performance requirements.

When policies, SLAs, and differentiated services are chosen for the network, part of this component architecture describes the placement of databases for SLA and policy information, including policy decision points (PDPs), policy enforcement points (PEPs), and DiffServ edge devices.
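As one small, illustrative example of DiffServ-style marking at the network edge, the Python snippet below sets the Expedited Forwarding (EF) code point on an outgoing datagram via the IP TOS field. The destination address is a documentation address; whether the marking is honored (or even permitted) depends on the platform, privileges, and network policy.

```python
# Marking a datagram with the DiffServ Expedited Forwarding (EF) code point.
import socket

DSCP_EF = 46                      # EF per-hop behavior (RFC 3246)
tos = DSCP_EF << 2                # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.sendto(b"marked probe", ("192.0.2.10", 9))   # 192.0.2.10 is a documentation address
sock.close()
```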

5.4.4 Security Component Architecture

Security is a requirement to guarantee the confidentiality, integrity, and availability of user, application, device, and network information and physical resources. This is often coupled with privacy, which is a requirement to protect the sanctity of user, application, device, and network information.

The security component architecture describes how system resources are to be protected from theft, damage, denial of service, or unauthorized access. This consists of the mechanisms used to apply security, which may include such hardware and software capabilities as virtual private networks, encryption, firewalls, routing filters, and NAT.

Each of these mechanisms can be targeted toward specific areas of the network, such as at external interfaces or at aggregation points for traffic flows. In many instances, security mechanisms are deployed in regions, often termed security zones or cells, where each region or security zone represents a particular level of sensitivity and access control. Security zones may be within each other, overlapping, or completely separate, depending on the security requirements and goals for that network. We will cover security zones, as part of the security component architecture, in detail in Chapter 9.
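The following sketch (illustrative only; the zone names and address blocks are hypothetical) models security zones as groups of address blocks and shows how an endpoint can fall into nested or overlapping zones.

```python
# Security zones modeled as groups of address blocks; zones may nest or overlap.
import ipaddress

ZONES = {
    "DMZ":        [ipaddress.ip_network("203.0.113.0/24")],
    "internal":   [ipaddress.ip_network("10.0.0.0/8")],
    "restricted": [ipaddress.ip_network("10.50.0.0/16")],  # a more sensitive zone inside "internal"
}

def zones_of(address: str):
    """Return every zone containing the address."""
    addr = ipaddress.ip_address(address)
    return [name for name, nets in ZONES.items() if any(addr in net for net in nets)]

print(zones_of("10.50.3.7"))      # ['internal', 'restricted']
print(zones_of("203.0.113.20"))   # ['DMZ']
```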

The security and privacy architecture determines to what degree security and privacy will be implemented in the network, where the critical areas that need to be secured are, and how it will affect and interact with the other architectural components.

Security mechanisms that we will consider include security awareness, security policies and procedures, physical security, protocol and application security, encryption, network perimeter security, and remote access security.

Security awareness is educating users, getting them involved with the day-to-day aspects of security in their network, and helping them understand the potential risks of violating security policies and procedures. Security policies and procedures are formal statements on rules for system, network, and information access and use to minimize exposure to security threats. Physical security is the protection of devices from physical access, damage, and theft (including isolating all or parts of the network from outside access). Protocol and application security is securing management and network protocols and applications from unauthorized access and misuse. Encryption is a security mechanism in which cipher algorithms are applied together with a secret key to encrypt data so that they are unreadable if intercepted. Network perimeter security is protecting the external interfaces between your network and external networks. And finally, remote access security is securing network access based on traditional dial-in, point-to-point sessions and virtual private network connections.
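As a minimal sketch of the encryption mechanism described above, the example below uses the Fernet construction from the third-party Python cryptography package (pip install cryptography) to encrypt and decrypt data with a secret key; any comparable symmetric cipher library could be substituted.

```python
# Symmetric encryption with a secret key: data are unreadable if intercepted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret key; must itself be protected
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"management credentials")   # unreadable without the key
plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"management credentials"
```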

5.4.5 Optimizing Component Architectures

Determining and understanding the set of internal relationships enables you to optimize each component architecture for a particular network. This optimization is based on the input for that network: its requirements, estimated traffic flows, and architectural goals.

After you have chosen a set of mechanisms for a component architecture and determined possible interactions between these mechanisms, the requirements, flows, and goals for the network are used to prioritize the mechanisms and interactions.

User, application, and device requirements usually incorporate some degree of performance, security, and network management requirements. Such requirements are directly related to selecting and placing mechanisms within a component architecture, as well as prioritizing interactions.

Estimated traffic flows for the network, determined through modeling and simulation, from experience with the existing system, or via a set of heuristics, can indicate aggregation points for traffic or where high-priority flows (e.g., mission-critical, real-time, secure, or operations, administration, and maintenance flows) are likely to occur. By understanding the types of flows in the network and where they are likely to occur, you can develop each component architecture to focus on mechanisms that will optimally support high-priority flows.

Architectural goals for the network are either derived from requirements; determined from discussions with users, management, and staff; or taken as an extension of the scope and scale of the existing network. When goals are developed from various sources, they provide a broad perspective on which functions are most important in a network. Thus, requirements, flows, and goals strongly influence which mechanisms are preferred and where they are applied for each component architecture.
