Citrix Access Suite 4 for Windows Server 2003: The Official Guide, Third Edition

With all of the hardware, software, and concepts defined, implementation can proceed. Planners at CME developed a timeline to schedule interdependent tasks, such as site cutovers and equipment reallocations. The following configuration examples are not intended to be all-inclusive; many of the basic steps are omitted to focus on those germane to the enterprise infrastructure and Citrix support.

Private WAN Sites (CORP Sales)

Private WAN sales offices (connected directly to CME-CORP) all share common configurations. The CME-TNG configuration is similar, but with bandwidth management provided by the site router. Figures 17-3 and 17-4 depict the basic configuration for these sites.

Figure 17-3: Typical private WAN site network

Figure 17-4: The CME-TNG site network

Router Configuration

The standard router configuration employs a single 768KB (CIR) frame relay PVC carried over a physical T1 local loop, while the CME-TNG site uses an ATM VC over DSL. The two configurations are similar:
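As a sketch only, the frame relay side of such a router might resemble the following; the DLCI, addresses, and EIGRP autonomous system number are illustrative placeholders, not CME's actual values (EIGRP is the private WAN routing protocol named later in this chapter):

interface Serial0/0
 description T1 local loop to frame relay carrier
 encapsulation frame-relay
 no ip address
!
interface Serial0/0.101 point-to-point
 description 768K CIR PVC to CME-CORP
 bandwidth 768
 ip address 10.0.2.6 255.255.255.252
 frame-relay interface-dlci 101
!
router eigrp 100
 network 10.0.0.0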

Bandwidth Management

The private WAN site bandwidth management paradigm is to protect bandwidth for on-demand access and to control all other traffic.

Private WAN Sites (Frame Relay) Management of traffic flows across the private WAN network is controlled by PacketShaper units at each end of each link. A typical site configuration, shown in Figure 17-5, accomplishes the following:

Figure 17-5: Typical private WAN Packeteer settings

The Private WAN Site (ATM/DSL) As mentioned previously, the ATM/DSL connection to the CME-TNG site does not require the expense of PacketShaper-based bandwidth management, but it still needs at least some controls to assure performance for Citrix sessions. The router (ORD-RPVT-TNG-A) is configured using Cisco's Modular Quality of Service (QoS) command-line interface (CLI), or MQC. The traffic management command "service-policy output LLQ" shown in the basic configuration is based on the following parameters:
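A hedged sketch of what the referenced "service-policy output LLQ" might look like, matching Citrix ICA traffic (TCP 1494) into a priority queue; the class name, access list number, and bandwidth value are illustrative, not CME's actual parameters:

access-list 110 permit tcp any any eq 1494
!
class-map match-all CITRIX-ICA
 match access-group 110
!
policy-map LLQ
 class CITRIX-ICA
  priority 256
 class class-default
  fair-queue
!
interface ATM0/0.1 point-to-point
 service-policy output LLQ

The "priority" command is what makes this low latency queuing (LLQ): ICA traffic is serviced first, up to the stated rate, while all other traffic shares the remainder fairly.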

VPN WAN Sites (CME-WEST Sales and CME-EUR Sales)

CME-WEST and CME-EUR sales office sites rely on Internet connectivity for their VPN lifeline to CME-CORP. As mentioned previously, the selection of a specific Internet router may not be an option due to host nation or ISP restrictions. The relatively low bandwidth also implies that the host nation ISP or circuit provider may not guarantee service in the form of an SLA. Bandwidth management is therefore not cost-effective. To ensure at least a limited ability to cope with failures, each site will be equipped with a dial-up modem to allow remote terminal connectivity to the firewall in the event of a failure or problem (CME-CORP staff will direct connection of the modem to the firewall console and reconfigure as required). Refer to Figure 17-6 for a graphic hardware layout of a typical site.

Figure 17-6: A typical VPN WAN site network

Firewall Configuration

The standard firewall/VPN configuration for all CME-WEST and CME-EUR sites establishes a VPN tunnel but disallows outbound access to the Internet by client PCs. The IPsec tunnel settings are a mirror image of the tunnel endpoint on the ORD-FPUB-A (CME-CORP) firewall. IP addresses used for the public segment are as assigned by the servicing ISP.
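As a sketch only, a mirror-image tunnel definition in PIX 6.x syntax might resemble the following; the peer address, subnets, and object names are placeholders, and the actual parameters must match the ORD-FPUB-A endpoint exactly:

access-list VPN-TO-CORP permit ip 10.210.1.0 255.255.255.0 10.0.0.0 255.0.0.0
!
isakmp enable outside
isakmp key ******** address 192.0.2.1 netmask 255.255.255.255
isakmp policy 10 authentication pre-share
isakmp policy 10 encryption 3des
isakmp policy 10 hash sha
isakmp policy 10 group 2
!
crypto ipsec transform-set STRONG esp-3des esp-sha-hmac
crypto map CORP 10 ipsec-isakmp
crypto map CORP 10 match address VPN-TO-CORP
crypto map CORP 10 set peer 192.0.2.1
crypto map CORP 10 set transform-set STRONG
crypto map CORP interface outside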

CME-EUR

Like the sales office sites, CME-EUR relies on Internet connectivity. Unlike the sales offices, CME-EUR has a higher throughput and greater demands, including printing and domain replication. Because of the "commercial grade" Internet requirements, CME-EUR has an SLA for their Internet service. Bandwidth management is necessary to control the traffic traversing the VPN tunnel and ensure that the relatively high number of Citrix sessions do not become "starved" for bandwidth. CME-EUR has limited on-site IT staff and will not require immediate access to a modem connection for remote reconfiguration. The CME-EUR LAN switch is a consolidated distribution and access layer module, with only limited Layer 3 requirements (isolating the internal LAN segment from the uplink to the PacketShaper and firewall). The CME-EUR configuration is detailed in Figure 17-7.

Figure 17-7: The CME-EUR network

Firewall Configuration

The firewall and VPN configuration for CME-EUR is similar to a sales office firewall configuration but allows specific LAN hosts to access the Internet directly. The configuration example shown under CME-MEX is applicable to CME-EUR as well.

Bandwidth Management

Bandwidth management at CME-EUR is similar to a private WAN site, but because almost all traffic is routed to CME-CORP over the VPN tunnel, traffic must be policed before it enters the tunnel. Other modifications include adjusted restrictions on traffic related to printing from the corporate site (NetBIOS over IP, LPR) and fewer restrictions on Active Directory domain replication traffic to the local domain controller.

CME-MEX

CME-MEX parallels CME-EUR, but with the additional restrictions imposed by the production environment. The manufacturing plant floor has little need for service beyond limited Citrix connectivity and no need for external Internet access through the corporate network. Again, bandwidth management is necessary to control client traffic behavior (allow reliable access to Citrix, police printing bandwidth consumption, allow management and administration of staff Internet access, and restrict production subnets to corporate intranet access). Figure 17-8 shows the assembled network components.

Figure 17-8: The CME-MEX network

Firewall Configuration

CME-MEX firewall and VPN parameters (conceptually identical to CME-EUR) define the subnets that traverse the VPN tunnel but allow direct outbound access for a limited number of LAN hosts, specified by a fully qualified domain name (FQDN). As these sites are domain members of the CME Active Directory domain with a local domain controller/internal DNS server, the firewall can use the internal DNS and dynamic DNS registration of DHCP-addressed LAN hosts to identify hosts granted access by FQDN. Again, the VPN parameters are a mirror image of those at CME-CORP.

Bandwidth Management

CME-MEX is a somewhat larger mirror of CME-EUR. Basic bandwidth allocations are the same, but outbound Internet access is restricted by the PacketShaper based on approved host names (manually defined in the PacketShaper rules), matched against each host's IP address as resolved by the internal DNS on the domain controller.

Core LAN Switch Configuration

The CME-MEX core switch (MEX-SCO-A) is the first switch that requires advanced Layer 3 routing functionality with its associated VLANs. By subnetting CME-MEX's address space, the designers simplified the process of restricting access to many services from the plant floor (production) hosts. The following partial configuration shows both the Layer 2 VLAN assignments and the Layer 3 routed interfaces. Note that VLAN 1 (default) is used only for interswitch VLAN control traffic, and VLAN 999 is passed through the switch at Layer 2 for visibility but cannot be routed to any other VLAN. Each Layer 3 VLAN interface will have access lists defined to limit accessibility from VLAN-to-VLAN. Finally, the 802.1Q trunk to the plant floor switches only transports the PLANT VLAN and the SERVER VLAN (used for management).

vlan 2
 name SERVERS
vlan 3
 name ADMIN
vlan 4
 name PLANT
vlan 201
 name INSIDE
vlan 999
 name OUTSIDE
!
interface Vlan1
 no ip address
!
interface Vlan2
 description CME-MEX Servers
 ip address 10.201.0.129 255.255.255.128
!
interface Vlan3
 description CME-MEX ADMIN
 ip address 10.201.1.1 255.255.255.0
 ip helper-address 10.201.0.100
!
interface Vlan4
 description CME-MEX Plant Floor
 ip address 10.201.2.1 255.255.254.0
!
interface Vlan201
 description CME-MEX firewall (MEX-FPUB)
 ip address 10.201.0.14 255.255.255.240
!
interface Vlan999
 no ip address
!
interface GigabitEthernet4/1
 description Uplink to PacketShaper 6500
 switchport access vlan 999
 switchport mode access
 spanning-tree portfast
 speed 100
 duplex full
!
interface GigabitEthernet4/2
 description Trunk to MEX-SDI-A
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 2
 switchport trunk allowed vlan 1,2,4
 switchport mode trunk
!
interface GigabitEthernet4/3
 description Connected to MEX-FPUB-A
 switchport access vlan 999
 switchport mode access
 spanning-tree portfast
 speed 100
 duplex full
!
interface GigabitEthernet4/4
 description Connected to MEX-FPUB-B
 switchport access vlan 999
 switchport mode access
 spanning-tree portfast
 speed 100
 duplex full
!
interface GigabitEthernet4/47
 description Connected to MEX-FPUB-A
 switchport access vlan 201
 switchport mode access
 spanning-tree portfast
 speed 100
 duplex full
!
interface GigabitEthernet4/48
 description Connected to MEX-FPUB-B
 switchport access vlan 201
 switchport mode access
 spanning-tree portfast
 speed 100
 duplex full
!
interface GigabitEthernet5/1
 description Connected to MEX-SDC01
 switchport access vlan 2
 switchport mode access
 spanning-tree portfast
!
interface GigabitEthernet6/1
 description ADMIN Client
 switchport access vlan 3
 switchport mode access
 spanning-tree portfast

Access Switch Configuration (Plant Floor)

The individual access switches (MEX-SAI-A through E) on the plant floor are virtually identical. Client interfaces (fast Ethernet) are assigned to the "PLANT" VLAN, and the first gigabit Ethernet interface is configured as an 802.1Q trunk to the distribution switch (MEX-SDI-A). MEX-SDI-A interfaces are all configured as trunks, with the management address and default gateway (they are Layer 2 only) set for VLAN 2 (SERVERS).

vlan 2
 name SERVERS
vlan 4
 name PLANT
!
interface Vlan1
 no ip address
!
interface Vlan2
 description CME-MEX Servers
 ip address 10.201.0.151 255.255.255.128
!
interface GigabitEthernet0/1
 description Trunk to MEX-SDI-A
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 2
 switchport trunk allowed vlan 2,4
 switchport mode trunk
!
interface FastEthernet0/1
 description Plant Floor Access
 switchport access vlan 4
 switchport mode access
 spanning-tree portfast
!
ip default-gateway 10.201.0.129

CME-WEST

CME-WEST is the "backup" site for CME-CORP. As shown in Figure 17-9, CME-WEST is actually an extensible subset of that infrastructure, including both Internet and private WAN access.

Figure 17-9: The CME-WEST network

Internet Router Configuration

The CME-WEST Internet access router (Cisco 7401) uses a single 1.5MB ATM virtual circuit (VC) carried over an ATM DS3 port for Internet access. The point-to-point subnet is assigned by the ISP, with CME-WEST's delegated address space routed by the ISP.
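The ATM side of such a router might be provisioned as follows; the VPI/VCI and addressing are hypothetical placeholders, with the ubr service class shaping the VC to the contracted 1.5MB rate:

interface ATM2/0
 no ip address
!
interface ATM2/0.1 point-to-point
 description Point-to-point to ISP
 ip address 192.0.2.2 255.255.255.252
 pvc 0/100
  ubr 1500
  encapsulation aal5snap
!
ip route 0.0.0.0 0.0.0.0 192.0.2.1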

Firewall Configuration

The CME-WEST firewall configuration is essentially a subset of the CME-CORP configuration. It allows outbound access to the Internet for selected hosts, providing a single DMZ equivalent to CME-CORP's SECURE-PUBLIC DMZ for a tertiary secure gateway and tertiary DNS. The VPN tunnels to the remote branches are not configured, but copies of the CME-CORP configuration ensure they can be rapidly created.

Private WAN Router

The CME-WEST private WAN Cisco 7401 is virtually identical to the Internet router, with the exception of the provisioned bandwidth and service type (vbr-nrt versus ubr). Additionally, the private WAN router participates in the dynamic routing protocol (EIGRP) common to all private WAN sites.

Bandwidth Management

The PacketShaper at CME-WEST does dual-duty through the added LEM. One segment manages the 6MB connection to CME-CORP while the other monitors the Internet connection. Rules for traffic management in each segment are equivalent to rules in the stand-alone counterparts at CME-CORP. No IPsec rules are established for VPN termination, but should the need arise, configuration settings for the CME-CORP Internet PacketShaper could be modified and imported quickly.

Core LAN Switch Configuration

The CME-WEST LAN core is somewhat underutilized on a day-to-day basis, but the over-build is necessary to position the switch as a backup for CME-CORP. The switch's Layer 3 configuration is similar to CME-MEX with VLANs defined to isolate clients from the subset of servers that are homed at CME-WEST. CME-WEST has substantially more active servers than other regional offices, including redundant domain controllers, online backup servers for network and security management, a backup Citrix server that is part of the CME-CORP farm, and an array of repository and backup servers used to store images and data that are replicated from CME-CORP.

A significant portion of the core switch's capacity is "preconfigured" to support drop-in LAN access switches that would be purchased and deployed as part of a disaster recovery effort. Again, configurations (including server configurations) for critical systems at CME-CORP are "backed up" at CME-WEST. CME-WEST will reuse CME-CORP's IP addresses and identities by recreating the same VLANs for reconstituted servers.

CME-CORP

The CME-CORP infrastructure is intended to meet design objectives (fast, redundant, hierarchical, fault-tolerant, and so forth) now and in the foreseeable future. In many cases, subsystem design components for the case study, including supporting network and security management elements, are beyond what many corporate networks employ today. Conversely, many of those same networks would be redesigned and re-engineered for greater capacity and survivability if the performance warranted the effort and expense. When looking at the aggregate cost of leading-edge hardware technologies, compare them to the cost of industry-leading ERP software packages and server systems: typically, infrastructure cost is a fraction of the major business application package, and the application package is considered so vital that "we lose money" when the system is down. The underlying network must be at least as reliable as the application software: based on the designer's efforts, CME's on-demand access network should be up to the task.

Internet Access Module

The CME-CORP high-bandwidth Internet access module consists of the Cisco 7401 routers and associated switch ports on the ORD-SDMZ-A switch. The routers run EBGP with two upstream ISPs and are responsible for announcing CME's primary Internet-routable subnet into the Internet routing tables. Internally, the routers use OSPF and static routes to receive routing information (routes must be learned from an IGP before being injected into the BGP process). The combination of OSPF, BGP, and redundant routers virtually eliminates the need to implement more troublesome techniques like Hot Standby Router Protocol (HSRP) or Virtual Router Redundancy Protocol (VRRP) to ensure any combination of routes and equipment can carry critical traffic. As an added advantage, the Internet gateway routers will also maintain a full copy of the Internet routing tables for instant access.

Internet Routers Each Internet router terminates a single 15MB ATM virtual circuit carried over a DS3 local loop. Point-to-point subnets for each upstream ISP are provided by the ISP, and the routers run BGP, with restrictions to prevent cross-routing of ISP traffic through CME-CORP. Router configurations are similar to the CME-WEST Internet router (ATM ubr service).
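A simplified sketch of one router's BGP stanza follows; AS numbers and prefixes are documentation-range placeholders. The empty AS-path filter (^$) announces only CME-originated routes, which is one common way to prevent the cross-routing (transit) of ISP traffic described above, and the static route to Null0 anchors the advertised network in the routing table:

router bgp 65001
 no synchronization
 network 198.51.100.0 mask 255.255.255.0
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 filter-list 1 out
!
ip as-path access-list 1 permit ^$
!
ip route 198.51.100.0 255.255.255.0 Null0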

Firewall Configuration The CME-CORP firewall is typical of an enterprise-class firewall. Like the regional site firewalls at CME-EUR, CME-MEX, and CME-WEST, it maintains session state tracking, resulting in stateful fail-over to the redundant unit. By upgrading to the ASA-series firewall appliance, CME can operate the firewalls in active/active mode (load-balancing) versus active/passive (fail-over). This eliminates previous restrictions on the firewall's ability to maintain state and encryption status for IPsec: IPsec tunnels will no longer drop during a fail-over. The CME-CORP firewall set (ORD-FPUB-A & B) manages multiple DMZs according to the original corporate security model. Each DMZ is assigned a progressively higher (more secure) security level, with normal ingress and egress rules applied. As a footnote, isolation of the second "public" DNS in a more secure DMZ serves two purposes. First, the more secure server can be the "master" for replicating zone updates. Second, the server in the PUBLIC DMZ coexists with corporate Web servers (public targets). A malicious attack on, and compromise of, a Web server could expose the DNS server to a direct attack from within the same DMZ. The DNS server in the SECURE-PUBLIC DMZ shares the DMZ with servers that only allow HTTPS (SSL) traffic and are easier to secure. The ACCESS DMZ is intended to terminate inbound connections from known, unencrypted but authenticated sources (RAS, Wireless, and others), and apply inspection rules to these traffic flows. The SECURE-ACCESS DMZ is only for termination of traffic that is both encrypted during transport (with strong encryption) and authenticated (read: VPN clients). Access lists for the CME-CORP PIX are built much like lists for all other sites but are far more complex due to the many traffic flows that must be allowed through the firewall.
Even traffic originating in a "secure" segment like the SECURE-PUBLIC DMZ must be filtered by firewall rules and exposed to IDS monitoring before being allowed inside the firewall. The following subset of the firewall configuration provides the basic settings for VPN tunnels, address translation, and filtering rules. CME will migrate their legacy PIX-535 configuration to the new hardware.

VPN (Client IPsec VPN) The VPN termination for roaming clients historically provided by the redundant Cisco 3030 VPN concentrators will be phased out. Most remote users need access to only a subset of the overall applications suite, and the Citrix Access Gateway will support virtually any port or protocol, including VoIP. Legacy IPsec requirements are expected to remain, so the VPN 3030 will remain in place to service these clients. Routing is a combination of static and OSPF to allow external routes to be propagated to the Internet router and PIX firewall. Individual client settings vary according to their role in the CME corporate environment: some are authenticated by the Windows 2000 Active Directory domain, some by internal accounts on the VPN concentrator, and some by RADIUS. Tunnel settings also vary, with most users locked in to "tunnel everything" for security reasons. Most tunnels use preshared keys, but the VPN concentrator is the "test bed" for implementing certificate-based keying for future use on site-to-site PIX VPN tunnels.

Access Gateway and SSL VPN (Client VPN) The Citrix Access Gateway provides SSL VPN termination for roaming clients as well as Presentation Suite access. Although this is not truly a "fail-over" VPN, clients are provided connection information on both VPN devices when they connect; no renegotiation or rediscovery is required. Citrix's Advanced Access Control (discussed in Chapter 16) is applied to all VPN and Citrix sessions to enforce security policies through dynamic endpoint analysis. As the Access Gateway is a "hybrid" (versus traditional IPsec) VPN, connections across it are not vulnerable to tunnel traversal by malicious code (worms). Connections may be authenticated via the Windows 2003 Active Directory domain, with multifactor authentication supported natively.

Bandwidth Management Internet bandwidth at CME-CORP cannot be "shaped" in the same way internal WAN sites can, but as a minimum, certain traffic types must be protected. Figure 17-10 shows the resulting settings.

Figure 17-10: CME-CORP Internet Packeteer settings

DMZ Distribution Switch (6509) Configuration The DMZ distribution switch (Catalyst 6509) configuration is complex. It employs a combination of routed (Layer 3 interface) and nonrouted (Layer 2 only) segments to isolate traffic flows, expose all segments to the intrusion detection module (IDS), and allow management platform visibility of traffic statistics. Additionally, isolated routed subnets are created by the content services module to allow it to load-balance IP traffic (HTTP and DNS) across multiple DNS and Web servers. Although detailed configurations are beyond the scope of this chapter, fundamental Layer 2 and Layer 3 configurations echo those of other corporate switches with several notable exceptions:

Figure 17-11 shows the combined Internet access layer, security and VPN modules, DMZ distribution switch, and peripheral equipment.

Figure 17-11: CME-CORP Internet, security perimeter, and VPN/firewall configuration

ACCESS-DMZ Switch Configuration A secondary distribution switch (Cisco 3550-12G) (ORD-SDE-A) is used between the PIX firewall ACCESS DMZ interface and the separate access segments for wireless LAN (WLAN) and dial-up Remote Access Services (RAS).

The 3550 enforces intersegment routing restrictions to limit the ability of wireless and RAS users to communicate directly, provides a first line of defense for the firewall against denial-of-service attempts from RAS or WLAN sources, and aggregates the multiple VLAN/WLAN segments for the wireless network. Finally, to avoid exposing critical equipment and servers, the Catalyst 3550 provides DHCP server services to the WLAN segments. The switch runs OSPF on the uplink to the primary DMZ distribution switch (ORD-SDMZ-A) and on the downlink to the PortMaster. The routes to the connected Layer 3 interfaces for the wireless segments are announced upstream but blocked on all other interfaces by distribution lists and "passive interface" settings. The PortMaster does not need a route to the WLAN, and the WLAN devices are Layer 2 only.
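The passive-interface pattern described above might be sketched as follows; the OSPF process ID, subnets, and port assignments (GigabitEthernet0/1 as the uplink to ORD-SDMZ-A, GigabitEthernet0/2 as the downlink to the PortMaster) are hypothetical:

router ospf 100
 passive-interface default
 no passive-interface GigabitEthernet0/1
 no passive-interface GigabitEthernet0/2
 network 10.68.0.0 0.0.255.255 area 0

With "passive-interface default," OSPF hellos are suppressed everywhere except the two explicitly opened links, so the wireless segments are advertised upstream without forming adjacencies (or leaking routing traffic) toward the WLAN itself.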

The Private WAN Module

The private WAN distribution module consists of the Cisco router, the distribution aggregation switch, PacketShaper, and an IDS appliance to preinspect traffic arriving from the sites. Figure 17-12 depicts the operational configuration.

Figure 17-12: The private WAN distribution module

The Private WAN Router The Cisco 7401 router is configured to use a 1000Base-SX LAN interface and an ATM-DS3 WAN interface. Configuration for the routing protocol (EIGRP) is similar to the private WAN site routers, except that it has a much larger scope of assigned subnets (10.0.2.0/24). Configuration of the ATM interface is similar to that of CME-WEST.

Bandwidth Management The PacketShaper 8500 defines unique shaping parameters for each remote private WAN site based on the site's assigned LAN subnet range. By controlling bandwidth at the LAN edge, the traffic destined for the Internet is "pre-policed" to appropriate values and no per-site settings are required on the Internet PacketShaper for these sites. The policies and partitions of remote sites are replicated at the main private WAN PacketShaper. In Figure 17-13, note that the CME-TNG site (with bandwidth managed by MQC on the router) is classified as "ignore" (do not manage this traffic).

Figure 17-13: CME-CORP private WAN PacketShaper settings

The other notable feature is the CME-WEST "HotSite Replication" rule, a time-based rule that opens up the site-to-site bandwidth after hours and guarantees best performance for intersite data replication to support disaster recovery.

CME-TNG bandwidth is not controlled by the PacketShaper. Instead, policing is managed by settings for the ATM virtual circuit (VC) on the router. To the PacketShaper, the subnets associated with CME-TNG are classified with an "Ignore" rule so that no shaping or policing of traffic flows is enabled. The same MQC parameters invoked at the CME-TNG router (ORD-RPVT-TNG-A) are used on the CME-CORP's private WAN router interface to CME-TNG. The following shows partial configurations for the CME-CORP interface.
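The mirrored attachment on the CME-CORP side might be sketched as follows; the subinterface number, class name, and priority value are illustrative, and in practice the policy simply repeats the MQC parameters configured on ORD-RPVT-TNG-A:

access-list 110 permit tcp any any eq 1494
!
class-map match-all CITRIX-ICA
 match access-group 110
!
policy-map LLQ
 class CITRIX-ICA
  priority 256
 class class-default
  fair-queue
!
interface ATM2/0.32 point-to-point
 description VC to CME-TNG
 service-policy output LLQ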

The Campus LAN Access/Distribution Module

Access and distribution layer topology for the CME-CORP campus was redesigned to form a virtual "ring" (that is, in fact, a Layer 3 partial mesh) centered on the data center facility. By changing all links from individual buildings to the core to be both redundant and Layer 3 (Figure 17-14), the designers eliminated issues related to spanning tree in the campus network: spanning tree instances on each switch are only locally significant because of the Layer 3 (routed) boundary. Switch routing tables will always contain the next-best route to the core, ensuring immediate convergence in case of a link failure.

Figure 17-14: Campus LAN access/distribution topology

Typical LAN Access/Distribution Switch Configuration The campus building switches are only partially fault-tolerant (single supervisor module), but multihomed at Layer 3 to ensure connectivity to the core. Figure 17-15 shows the physical connectivity.

Figure 17-15: Campus LAN access/distribution (partial)

Building distribution switches in the "virtual ring" are all based on the same template: 10/100/1000 Ethernet connections for in-building hosts, with multiple fiber optic gigabit uplinks to adjacent switches and the core switches for resilience. Individual interfaces for switch-to-switch connectivity have no need for VLAN parameters, so they are locked in as Layer 3 routed interfaces only with the "no switchport" command.

Switch-to-switch connectivity for a typical LAN distribution switch, using ORD-SDI-C (ENG-C) as a model, follows.
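A hedged sketch of two such routed uplinks on ORD-SDI-C follows; the port numbers, /30 addressing, and EIGRP autonomous system number are illustrative placeholders:

interface GigabitEthernet0/1
 description Uplink to core ORD-SCO-A
 no switchport
 ip address 10.68.255.1 255.255.255.252
!
interface GigabitEthernet0/2
 description Link to adjacent distribution switch
 no switchport
 ip address 10.68.255.5 255.255.255.252
!
router eigrp 100
 network 10.0.0.0

The "no switchport" command converts each port to a pure Layer 3 routed interface, keeping spanning tree and VLAN trunking off the inter-switch links entirely.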

The WLAN Access Module

The WLAN access points (Cisco 1200 series) are configured as 802.1Q trunks on their internal (Ethernet) interfaces. VLAN 871 is used for management but is not "mapped" to an equivalent WLAN segment. VLAN 872 is mapped to the corporate WLAN on a unique nonbroadcast Service Set Identifier (SSID) that requires RADIUS (LEAP) authentication. By tying the WLAN segment to RADIUS, CME IT staff can force positive mutual authentication of clients, enforce session key rotation, and ensure only specifically authorized users are allowed WLAN access. VLAN 873 is mapped to a "public" WLAN that uses no encryption or authentication and assumes default SSID values (tsunami). The Layer 3 interface for VLAN 873 is filtered by multiple access lists designed to restrict WLAN clients from accessing CME-CORP public servers (Web servers) and the Internet. As a security measure, the Layer 3 interface on switch ORD-SDE-A is maintained in a "shutdown" state to prevent use of this segment without prior coordination. As a secondary check, access attempts (associations) are logged by the individual access points as an audit trail: the WLAN is "active," just not connected beyond the access point. Figure 17-16 shows the WLAN topology.

CME is evaluating conversion of all wireless LAN segments to lightweight access points under centralized configuration control and management. The existing Cisco 1200 series can be converted to the Lightweight Access Point Protocol (LWAPP) management and IOS, and the existing WLAN management module can monitor and control all APs. This conversion will allow seamless roaming and drop-in provisioning without a site survey.

Figure 17-16: The Campus WLAN access/distribution topology

The Core Module

The dual Catalyst 6513 core (Figure 17-17) is linked by a 10GB Ethernet fiber link using single-mode fiber transceivers originally intended for far greater distances (optical attenuation is required); this allows the server farms and core switches to be physically separate in different areas of the data center without loss of throughput. Individual fiber links (Layer 3) to every campus distribution switch, the DMZ switch, and the private WAN distribution module ensure that no single failure, or even the failure of an entire core switch, can disrupt operations. (Remember, the Citrix farm and critical servers are distributed redundantly across both switches.) Failure of the core-to-core fiber link imposes little, if any, performance penalty as the multiple links through the distribution switches will dynamically re-route and load-balance (via EIGRP) the traffic.

Figure 17-17: The Dual Core module

The Core Switch Configuration A partial configuration from switch ORD-SCO-A illustrates the connectivity to the distribution layer and adjacent core switch. Key elements of the configuration for servers are reflected in module 9 ports 1 and 2 (Gigabit EtherChannel [GEC]), and module 12 ports 1 and 2 (Fast EtherChannel [FEC]).
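The GEC side of such a server connection might be sketched as follows; the port-channel number, VLAN, and server name are placeholders. "channel-group 1 mode on" forces static aggregation, which is appropriate when the server NIC team does not negotiate PAgP or LACP:

interface Port-channel1
 description GEC to Citrix server NIC team
 switchport
 switchport access vlan 10
 switchport mode access
!
interface GigabitEthernet9/1
 description GEC member 1
 switchport access vlan 10
 switchport mode access
 channel-group 1 mode on
!
interface GigabitEthernet9/2
 description GEC member 2
 switchport access vlan 10
 switchport mode access
 channel-group 1 mode on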

Server-Side Network Settings

Network interoperability requires correct (matching) configurations between the server-side hardware (network interface card [NIC]) and the associated switch interface. Using Intel NIC hardware as an example, there are several critical settings that must be configured to ensure the best performance:

Creating an FEC/GEC EtherChannel (Layer 2 link aggregation) is the preferred method for increasing the aggregate bandwidth available to Citrix or other servers. EtherChannel teams are inherently fault-tolerant and can run with only one member active, but with two or more members active, traffic is dynamically load-balanced across a virtual "fat pipe."

Basic configuration involves creating an EtherChannel "team" and then adding members. One member must be designated as "primary" and this MAC address will register as the address of the team. Figure 17-18 shows the teamed configuration and identifies the team MAC address and IP address.

Figure 17-18: The FEC adapter team

Individual member adapters must be correctly configured independently for 100MB, full-duplex. The secondary adapter is shown in Figures 17-19 through 17-21. Note that it reports the MAC address of the team/primary adapter.

Figure 17-19: The FEC member adapter (general)

Figure 17-20: The FEC member adapter (link settings)

Figure 17-21: The FEC member adapter (power management)

Finally, Figure 17-21 shows the power management settings (enabled by default) that are inappropriate for a server and may cause flapping on an FEC team.
