Citrix Access Suite 4 for Windows Server 2003: The Official Guide, Third Edition

Infrastructure Design—Connecting the Modules

Once the component module requirements are defined, the specific connecting media can be chosen and accurate bandwidth calculations become possible to scale that media correctly. The need for specialized bandwidth managers can also be assessed.

Media Selection

LAN Media

In the context of server-based computing, the LAN resides in two places—inside the data center and inside the remote office. The data center LAN is potentially very complex, while the remote office LAN will be relatively simple, containing little more than a workgroup media concentration point, client devices (PCs or thin clients), and LAN peripherals (printers, storage devices, and so on).

WAN Media

The wide area network (WAN) is the vehicle for transporting data across the enterprise. In a server-based computing environment, the design of the WAN infrastructure is crucial: it must be robust, scalable, and highly reliable in order to protect the value of the data that flows across it. Interconnecting media types for WAN services include the following:

Planning Network Bandwidth

Planning network bandwidth may seem like an obvious need, but it is often skipped because it is difficult to predict the normal bandwidth utilization of a given device or user on the network. However, by using modeling based on nominal predicted values, bandwidth requirements can be accurately projected. When planning network bandwidth, keep the following guidelines in mind:
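As an illustration of the nominal-value modeling just mentioned, the sketch below estimates a remote office's WAN requirement. The per-session ICA rate, overhead shares, concurrency factor, and headroom are assumptions for the example, not measured values.

```python
# Back-of-the-envelope WAN sizing from nominal per-user values.
# All inputs are illustrative assumptions; substitute measured figures.

ICA_KBPS_PER_SESSION = 20   # assumed average ICA session draw
PRINT_OVERHEAD = 0.30       # assumed share added by print traffic
PROTOCOL_OVERHEAD = 0.10    # assumed TCP/IP and encapsulation overhead
PEAK_HEADROOM = 0.25        # reserve so bursts do not saturate the link

def required_kbps(users, concurrency=0.8):
    """Estimate link size in Kbps for a remote office of `users` people."""
    active = users * concurrency                  # sessions active at once
    base = active * ICA_KBPS_PER_SESSION          # steady-state ICA load
    loaded = base * (1 + PRINT_OVERHEAD + PROTOCOL_OVERHEAD)
    return loaded * (1 + PEAK_HEADROOM)           # add burst headroom

for users in (10, 25, 50):
    print(f"{users:3d} users -> ~{required_kbps(users):,.0f} Kbps")
```

Substituting measured per-session values turns the same simple model into a projection that can be defended when the circuit order is placed.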

Bandwidth Management

In most thin-client WAN environments, "calculated" bandwidth should provide optimal performance, but it seldom does. Even strict corporate policies on acceptable bandwidth use cannot protect thin-client bandwidth when a network administrator downloads a large file or a user finds a new way to reach music and media sharing sites. Such unpredictable behavior can degrade SBC service to remote users through bandwidth starvation or excessive latency. Several technologies are available to control bandwidth utilization more tightly and ensure a responsive service environment: Layer 2 CoS and queuing, Layer 3 QoS and queuing, router-based bandwidth managers (Cisco's NBAR), and appliance-based bandwidth managers (Packeteer). Each has its respective strengths and weaknesses, and all share a common requirement: a mechanism for differentiating more important traffic from less important traffic, a process called classification. Traffic may or may not be marked (tagged) with its particular priority, but downstream network devices must be able to recognize the classifications and apply policies or rules that prioritize or constrain specific traffic types.
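As a minimal illustration of the classification step, the sketch below matches flows against an ordered port/protocol rule table and assigns each a class name. The rule table and class names are assumptions for the example; only ICA's well-known TCP port 1494 comes from the real protocol.

```python
# Toy traffic classifier: map flows to classes the way a bandwidth
# manager's classification stage might. Rules are illustrative.

from dataclasses import dataclass

@dataclass
class Flow:
    protocol: str   # "tcp" or "udp"
    dst_port: int

# Ordered rules: first match wins; unmatched traffic gets the default class.
RULES = [
    ("tcp", 1494, "ica"),    # Citrix ICA (well-known port)
    ("tcp", 80,   "web"),    # HTTP
    ("tcp", 20,   "bulk"),   # FTP data
]

def classify(flow):
    for proto, port, cls in RULES:
        if flow.protocol == proto and flow.dst_port == port:
            return cls
    return "default"

print(classify(Flow("tcp", 1494)))   # -> ica
print(classify(Flow("udp", 5004)))   # -> default
```

Real classifiers also inspect payloads and stateful protocol behavior, but a first-match rule table of this kind is the common core of all of them.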

When applying bandwidth management technologies to WAN traffic flows, the following general rules apply:

Tip

Plan for more bandwidth, not less, when migrating users to a new network. The old and new networks will likely have to share some network segments while users are moved to the new data center network. Tasks such as user data migration, interim file server reassignment, and "backhauling" user data to legacy systems not yet on the new network all add to bandwidth demand. Some of this is unavoidable, but careful planning and staging of which systems are migrated in which order can mitigate the rest. An organization that avoids underestimating its bandwidth needs runs a much lower risk of unhappy users before the project even gets started.

Layer 2 CoS and Queuing Applying Layer 2 CoS prioritization to LAN traffic has several weaknesses: it is only locally significant (CoS tags are frame based and are not carried across Layer 3 boundaries); granular control by application or service is not widely supported; and most applications are incapable of tagging the frames they originate with CoS values. Several vendors provide network interface cards that can apply CoS and QoS tags to frames or packets, but the feature is simply on or off and cannot differentiate application layer traffic. Microsoft's Generic Quality of Service (GQoS) API gives software developers access to CoS and QoS features through the Windows Server 2003 operating system, but the API is not widely supported and only a limited number of Microsoft multimedia applications currently use it. Most Layer 2 network devices have one or two input queues per port and up to four output queues. Out of the box, all traffic passes through the default (low priority) queue on a first-in/first-out basis. CoS can be applied to frames at the source or upon entry to the switch to redirect output to a higher priority queue; higher priority queues are always serviced (emptied) first, reducing latency. In a server-based computing paradigm, however, there is little to be gained from accelerating frames through the network at Layer 2.
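The output-queue behavior just described can be modeled in a few lines. This is a simplified sketch of strict-priority servicing, not an emulation of any particular switch; the CoS-to-queue mapping is an illustrative assumption.

```python
# Simplified model of a switch port with multiple output queues:
# the highest-priority non-empty queue is always serviced (emptied) first.

from collections import deque

class PriorityPort:
    def __init__(self, num_queues=4):
        # queue 0 = highest priority; the last queue is the default
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, frame, cos=None):
        # Untagged frames land in the default (lowest-priority) queue.
        # Mapping CoS >= 5 to the top queue is an assumption for the example.
        q = 0 if cos is not None and cos >= 5 else len(self.queues) - 1
        self.queues[q].append(frame)

    def transmit(self):
        for q in self.queues:            # scan from highest priority down
            if q:
                return q.popleft()
        return None                      # nothing to send

port = PriorityPort()
port.enqueue("bulk-frame")               # untagged -> default queue
port.enqueue("ica-frame", cos=5)         # CoS-tagged -> priority queue
print(port.transmit())                   # -> ica-frame, serviced first
print(port.transmit())                   # -> bulk-frame
```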

Layer 3 QoS and Queuing Quality of Service at Layer 3 begins with classifying traffic: by standard or extended access list, by protocol (such as URL, stateful protocol, or Layer 4 protocol), by input port, by IP Precedence or DSCP value, or by Ethernet 802.1p class of service (CoS). Traffic classification using access lists is processor intensive and should be avoided. Once traffic is classified, it must be marked with the appropriate value to ensure that end-to-end QoS treatment is enforced. The marking methods are three IP Precedence bits in the IP Type of Service (ToS) byte; six Differentiated Services Code Point (DSCP) bits in the IP ToS byte; three MPLS Experimental (EXP) bits; three Ethernet 802.1p CoS bits; and one ATM cell loss priority (CLP) bit. In most IP networks, marking is accomplished with IP Precedence or DSCP.

Finally, a queuing strategy is applied to each marked class. Fair queuing (FQ) assigns an equal share of network bandwidth to each application, where an application is usually defined by a standard TCP service port (for example, port 80 is HTTP). Weighted fair queuing (WFQ) allows an administrator to prioritize specific traffic by setting the IP Precedence or DSCP value; the Layer 3 device automatically assigns the corresponding queue. WFQ is the default for Cisco routers on links below 2 Mbps. Priority queuing (PQ) classifies traffic into one of four predefined queues (high, medium, normal, and low priority). High priority traffic is serviced first, then medium, followed by normal and low; PQ can therefore starve the low priority queues if high priority traffic flows are always present. Class-based weighted fair queuing (CBWFQ) is similar to WFQ but offers more advanced differentiation of output queues; no guaranteed priority queue is allowed. Finally, low latency queuing (LLQ) is the preferred method for prioritizing thin-client traffic at Layer 3: it can assign a strict priority queue with static guaranteed bandwidth to digitized voice or video, assign multiple resource queues with assured bandwidth and preferential treatment, and provide a default queue for "all other" traffic.

Queuing works well in a network with only occasional and transitory congestion. If every aspect of a network is precisely designed and never varies from the design baseline, queuing will provide all the bandwidth management thin clients require. Absent a perfect network, queuing has the following characteristics and limitations:
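To make LLQ concrete before turning to those limitations, here is a toy scheduler showing its essential shape: a policed strict-priority queue, weighted class queues, and a default queue. The class names, weights, and per-round policing model are illustrative assumptions, not router internals.

```python
# Toy LLQ scheduler: strict priority for "voice" up to its guarantee,
# then weighted service over the remaining classes.

from collections import deque

queues = {"voice": deque(), "ica": deque(), "default": deque()}
WEIGHTS = {"ica": 3, "default": 1}   # 3:1 share of leftover bandwidth
VOICE_GUARANTEE = 2                  # packets per round (policed guarantee)

def schedule_round():
    sent = []
    # 1. Strict priority: voice goes first, but it is policed at its
    #    guarantee so it cannot starve the other classes.
    for _ in range(VOICE_GUARANTEE):
        if queues["voice"]:
            sent.append(queues["voice"].popleft())
    # 2. Weighted service for the remaining classes.
    for cls, weight in WEIGHTS.items():
        for _ in range(weight):
            if queues[cls]:
                sent.append(queues[cls].popleft())
    return sent

for pkt in ("v1", "v2", "v3"):
    queues["voice"].append(pkt)
for pkt in ("i1", "i2", "i3", "i4"):
    queues["ica"].append(pkt)
queues["default"].append("d1")

print(schedule_round())   # -> ['v1', 'v2', 'i1', 'i2', 'i3', 'd1']
```

The policing step is what distinguishes LLQ from plain priority queuing: the strict-priority class is serviced first but cannot starve the lower classes.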

Router-Based Bandwidth Management Cisco's Network Based Application Recognition (NBAR) provides intelligent traffic classification coupled with automation of the queuing process. NBAR is a Cisco IOS classification engine that can recognize a wide variety of applications, including Citrix, Web-based applications, and client/server applications; user-specified application definitions (by port and protocol) are also supported. Once an application is recognized, NBAR can invoke the full range of QoS classification, marking, and queuing features, as well as selectively drop its packets. Although NBAR is "application aware," it relies on cooperating devices along the path to implement QoS policies consistently, and it remains an "outbound" technology, shaping traffic only as it leaves an interface.

Appliance-Based Bandwidth Managers (TCP Rate Control) TCP rate control manages both inbound and outbound traffic by manipulating the internal parameters of the TCP sliding window. It evens out packet transmissions by pacing TCP acknowledgments to the sender, causing the sender to throttle back and avoiding packet discards when bandwidth is insufficient. As packet bursts are eliminated in favor of a smoothed traffic flow, overall network utilization can be driven as high as 80 percent; in a network without rate control, typical average utilization is around 40 percent. TCP rate control operates at Layer 4, performing TCP packet and flow analysis, and above Layer 4, analyzing application-specific data. TCP rate control has the following advantages:

On the other hand, TCP rate control has the following limitations:

Packet prioritization using TCP rate control is a method of ensuring that general WAN traffic does not interfere with critical or preferred data. Using packet prioritization, thin-client traffic can be given guaranteed bandwidth, which results in low perceived latency and speedy application performance, and contributes to a high level of user satisfaction in the server-based computing environment.
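The sliding-window arithmetic behind TCP rate control can be shown directly: a TCP sender's throughput is roughly its window size divided by the round-trip time, so clamping the advertised window caps a flow's rate without dropping packets. The sketch below works that calculation; it is a simplification, not a reconstruction of any vendor's algorithm.

```python
# Sliding-window arithmetic behind TCP rate control:
#   throughput ~= window_size / round_trip_time
# so a middlebox can hold a flow to `target_bps` by rewriting the
# receiver's advertised window. Real appliances also pace the ACKs.

def clamped_window_bytes(target_bps, rtt_s):
    """Advertised window (bytes) that limits the sender to target_bps."""
    return int(target_bps / 8 * rtt_s)

rtt = 0.100   # assume a 100 ms round trip for the example
for kbps in (20, 50, 200):
    w = clamped_window_bytes(kbps * 1000, rtt)
    print(f"{kbps:3d} Kbps at {rtt * 1000:.0f} ms RTT -> window ~{w} bytes")
```

At 20 Kbps on a 100 ms path the window shrinks to about 250 bytes, which is why rate-controlled flows feel smooth rather than bursty: the sender never has permission to burst in the first place.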

Packeteer created the category of hardware-based TCP rate control appliances with its PacketShaper product. Other manufacturers, including Sitara and NetReality, offer competing technologies, but Packeteer products were selected for the in-depth discussion here.

In a simple deployment, a PacketShaper (shown in Figure 6-15) is an access layer device that sits between the router or firewall and the LAN infrastructure and proactively manages WAN traffic to ensure that critical applications receive the bandwidth they require. For SBC environments, the bandwidth manager resides at remote sites whose bandwidth requirements are large enough to justify the expense. A PacketShaper is always placed on the LAN side of the site router so it can manage the traffic flow before routing. In a large network, there is also value in placing a PacketShaper at the data center to control Internet services bandwidth and keep Internet-based remote users from being degraded by main-site users surfing the Web. In this configuration individual traffic flows cannot be managed; however, good traffic (thin client, IPSec) can be given somewhat preferential treatment, and less-critical traffic flows can be throttled to ensure bandwidth remains available for thin-client flows. Though individual sessions cannot be managed this way, partitions can be created for particular types of traffic, and the flow-by-flow management happens in the PacketShapers at the edge of the network. Several PacketShaper models are available, priced by the amount of bandwidth they are capable of managing. Packeteer has recently added new features, including the ability to manage devices enterprisewide from a central policy center.

Figure 6-15: Network with a Packeteer PacketShaper

Of the three methods discussed in this section, session-based policies and partitions are recommended for ICA traffic. A session-based policy that guarantees 20 Kbps but allows bursts of up to 50 Kbps is ideal for ICA. Such a policy can only be implemented when the PacketShaper controls both the inbound and outbound traffic, however, which means it cannot be applied over the Internet. In that case, a partition policy can be used instead: depending on the size of the network pipe, 50 percent of the bandwidth could be guaranteed to ICA, for example, with the remaining bandwidth either left "unmanaged" or divided into partitions for the most common known protocols, such as HTTP and Telnet. Priority-based packet shaping should be avoided for ICA simply because it makes the behavior of a PacketShaper harder to predict: a priority is not absolute and relies on some fairly complex algorithms to shape the traffic. Partitions and session policies are more rigid, and therefore more predictable and easier to administer.
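Using the policy just described, it is easy to work out how many guaranteed ICA sessions a link can carry. The sketch below assumes the 20-Kbps-guaranteed/50-Kbps-burst session policy and a 50 percent ICA partition; the link sizes are illustrative.

```python
# How many ICA sessions fit under a session policy that guarantees
# 20 Kbps each, inside a 50% ICA partition on a given link.

GUARANTEE_KBPS = 20    # per-session guarantee (from the policy above)
BURST_KBPS = 50        # per-session burst ceiling
ICA_PARTITION = 0.50   # example: half the pipe reserved for ICA

def sessions_supported(link_kbps):
    """Guaranteed-rate sessions that fit inside the ICA partition."""
    return int(link_kbps * ICA_PARTITION // GUARANTEE_KBPS)

for link in (256, 512, 1544):          # e.g., fractional T1 up to full T1
    n = sessions_supported(link)
    print(f"{link:5d} Kbps link: {n:3d} guaranteed sessions "
          f"(each may burst to {BURST_KBPS} Kbps when capacity is idle)")
```

Raising the guarantee or shrinking the partition changes the session count linearly, which is exactly what makes partitions and session policies predictable and easy to administer.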

A limitation of packet prioritization is that print traffic (and the resulting print output speed) may slow down because bandwidth is guaranteed to ICA traffic. If users find the delay unsatisfactory, WAN bandwidth can be increased to allow more room for print traffic. Printing is a complex issue in this environment and is discussed in more detail in Chapter 18. Another potential problem with packet prioritization is that Internet browsing speed may be reduced because of the guaranteed bandwidth reserved for ICA traffic. Our experience has shown that Internet browsing involving rapid screen refreshes can substantially increase ICA bandwidth requirements, sometimes to as much as 50 Kbps, although Citrix has made great strides in addressing this with Feature Release 3. Disabling Java, ActiveMovie, or other plug-in technologies that cause the screen to refresh more than a static page can mitigate the problem. Few companies consider Web browsing to be mission critical (quite the opposite, it seems), so this may not be an issue.

Packeteer in Action Figure 6-19 shows sample report output from a Packeteer unit configured to monitor a small business Internet link. The customer relies on Citrix to deliver applications to remote branch offices via the Internet, and the main site (data center) has a 1.5-Mbps SDSL circuit to a local Internet service provider (ISP). The first graph shows poor response times for the customer's ERP/financial application (NAVISION) deployed over Citrix; although server response times are somewhat suspect, network latency drives the total response time well above the recommended threshold of 500 ms. The second graph shows that "bursty" HTTP traffic is consuming virtually all of the available WAN bandwidth and that the bursts coincide with delays in Citrix response times. The third graph shows total (link) bandwidth consumption, and the final chart shows that HTTP consumes 48 percent of all available bandwidth, with HTTP and WinMedia together accounting for nearly two-thirds of the total. The Packeteer's TCP rate control can "partition" the Internet pipe to ensure that HTTP cannot deny Citrix the bandwidth it needs. As an added benefit, the Packeteer analysis proved that the ISP was providing only 1 Mbps of available bandwidth, not the 1.5-Mbps circuit the customer was paying for; the ISP agreed to rebate $2,500 in fees for the substandard service.

Figure 6-19: Packeteer analysis report
