Network Requirements
Like any other network technology, before deploying a wireless LAN, you must answer three questions: "where?", "how fast?", and "what can I spend?" Typically, the cost is specified independently through a budget process, and the architect's job is to achieve the best possible LAN service within that constraint. The "where" in building an 802.11 network is the set of locations where radio service will be provided. It is usually desirable to have coverage everywhere, but many projects start as scaled-down deployments covering only conference rooms and public spaces to hold down cost. The "how fast" refers to the capacity of the radio network. Wireless LAN client speeds depend on the distance from the access point and the number and type of obstacles between the access point and the client. Building a high-throughput network requires attention to detail to minimize the average distance between a client and its nearest AP. In some physical spaces, the environment obstructs radio waves to such a degree that a large number of APs are required simply to achieve coverage.
Network architects need to optimize among these three variables to get the right network. In some environments, the physical design of the building hampers radio propagation, so the network will need a smaller footprint and less capacity. Or perhaps the cost is limited and the objective is to design as large a network as possible that meets some minimum capacity specification. In rare cases, the network may even be designed for total coverage at high capacity, with cost allowed to run very high. Optimizing among the three design factors is a continuous process, not a one-time exercise. As a wireless network grows in popularity, the network will have to grow with it. Initial limited-area deployments will probably need to give way to wall-to-wall coverage. Networks designed for coverage only, based on a sparse AP layout to keep costs low, may need to move to a higher-capacity deployment to accommodate additional demand per user, or an increased number of users. Finding the balance between the troika of demands is the art of building a wireless network. Fortunately, there are a number of tools available to help make decisions between trade-offs.
Coverage Requirements
All networks cover some area. Wired networks get coverage through network ports placed throughout the facility. To get network coverage, you pull wire and drop in ports. Wireless networks approach coverage planning in a different way because the physical network medium spreads out through space and can penetrate walls. Getting a handle on how radio waves propagate through your space is key to understanding how to cover a network.
The first question to answer is where coverage will be provided. Are you blanketing the whole building or campus, or just putting wireless in select areas? It is common to start with a pilot deployment that covers a small area while getting familiar with the technology. In many cases, the pilot deployment will cover the IT workspace, though it is also quite common to provide coverage for public areas such as lobbies or conference rooms.
"Ubiquitous" is a popular word for those who are specifying wireless LAN coverage requirements, although it can be a horrific word for those who have to fulfill them. Does that mean that every square inch of the building must be covered? Is it really necessary to have high-quality coverage in, say, the restroom? For public buildings, should escape routes be covered?
The number of access points required to cover an indoor area may depend on several factors. First, there is the matter of building construction. More walls mean more material blocking radio waves, and hence more access points will be required. Different types of material also affect RF in different ways. For a given material, thicker walls cause greater signal loss. Signal power is diminished, or attenuated, most by metal, so elevator shafts and air ducts cause significant disruption of communications. Tinted or coated windows frequently cause severe disruption of radio signals. Some buildings may have metal-coated ceilings or significant amounts of metal in the floor. Wood and most glass panes have only small effects, although bulletproof glass can be quite bad.[*] Brick and concrete have effects somewhere between metal and untreated plain glass.
[*] Bulletproof glass is far more common than one might otherwise assume.
The second major factor is the desired speed. Simply providing some wireless LAN access throughout an area is different from requiring a certain data rate. 802.11a has speeds that go from 6 Mbps to 54 Mbps, and the coverage provided at the slower speed will be much larger. Building a network that supports the 54 Mbps data rate everywhere will require many, many more access points than a network that simply provides some 802.11 access everywhere. Figure 23-1 is an attempt to show how the distance/data rate trade-off compares for the three major 802.11 physical layers. Higher data rates do not travel as far.
Figure 23-1 is based on theoretical calculations of free-space loss. It shows the relative distance at which different speeds are available for 802.11a, 802.11b, and 802.11g. For the calculation, I have assumed a typical transmission power (20 dBm or 100 mW for 802.11b/g, and 11 dBm for 802.11a) and then calculated the distance at which the power would drop to the radio sensitivity for each speed. For typical sensitivity, I used the detailed specifications that Cisco makes available for their a/b/g card. Range is shown relative to the minimum range, which is the top operational rate of 54 Mbps for 802.11a.
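The free-space calculation behind a chart like Figure 23-1 is easy to reproduce. The sketch below solves the free-space path loss equation for the distance at which received power falls to the receiver sensitivity. The function name and the sensitivity figures are illustrative assumptions, not the Cisco specifications used for the figure.

```python
import math

def free_space_range_m(tx_power_dbm, sensitivity_dbm, freq_mhz):
    """Distance (meters) at which free-space loss reduces the received
    power to the receiver sensitivity.
    FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
    """
    link_budget_db = tx_power_dbm - sensitivity_dbm
    return 10 ** ((link_budget_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

# Illustrative sensitivities (dBm); real values vary by card and rate.
ranges = {
    "b/g @ 54 Mbps": free_space_range_m(20, -68, 2442),
    "b/g @ 6 Mbps":  free_space_range_m(20, -85, 2442),
    "a   @ 54 Mbps": free_space_range_m(11, -68, 5250),
}
```

Every additional 6 dB of link budget doubles the free-space range, which is why the low data rates reach so much farther than the top rate.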
Figure 23-1. Relative range comparison of free space loss
To meet a performance target, it may be necessary to have a lot of overlap between adjacent APs. If the goal is to have high-speed coverage throughout an area at, say, 36 Mbps, the range of transmissions at lower speeds will overlap significantly. Planning for enough overlap to ensure smooth handoff, while minimizing the excess that degrades performance, is a delicate trade-off in the design of a wireless LAN.
A final factor to consider when designing coverage is the objective of your network. A radio will have a certain coverage area, but the area covered by its transmissions and the area covered by its reception may be different. In general, the latter is a much larger area, especially if you are not using the maximum transmission power the AP is capable of. One of the advantages of deploying a dense network with relatively low power is that there is a great deal of overlap in the reception areas. Any unauthorized APs that are deployed by the user community will be detected by several of the APs in your network, which will enable more precise location.
Outdoor coverage is subject to a different set of trade-offs and engineering requirements than indoor coverage, and is generally only a concern in mild climates where users are likely to be working outside on a regular basis.[*] Certain types of applications are also suited to combined indoor/outdoor coverage; for example, airports may wish to provide outdoor access to the airlines at the curb for skycap equipment. Placing equipment outdoors is often a challenge: the equipment must be sturdy enough to survive the elements, which is largely a matter of waterproofing and weather resistance, and there are a number of environmental and safety rules to comply with. One solution is to install access points inside and run antennas to outdoor locations, but external antenna cables of sufficient length are not always available, and in any case, the cable loss is often severe. Many vendors have weatherproof enclosures on their price lists, especially if they have sold equipment to a large combined indoor/outdoor installation.
[*] This chapter does not discuss the use of point-to-point 802.11 links; obviously, they are outdoor links that may be used year-round in any climate.
Weatherproof enclosures may be subject to some additional safety requirements. International Electrotechnical Commission (IEC) Standard 60529 has test procedures to rate enclosures on protection against water and debris. In the United States, enclosures may also be subject to the National Electric Manufacturer's Association (NEMA) Standard 250.
Coverage and physical installation restrictions
Part of the end-user requirement is a desired coverage area, and possibly some physical restrictions to go along with it. Physical restrictions, such as a lack of available electrical power and network connections, can be mundane. Some institutions may also require that access points and antennas are hidden; this may be done to maintain the physical security of the network infrastructure, or it may be simply to preserve the aesthetic appeal of the building.
It is often desirable to mount access points as high as possible. Just like scouts who try to seize the high ground for a battlefield, APs work best when they are above the typical obstructions that live on the floor. By mounting them above cubicles and other objects, it is often possible to make their signals go farther more reliably. Some access points have mounting kits that enable them to attach to walls, or even the suspension bars for the dropped ceiling tile. Other vendors will recommend installing the AP above the ceiling tiles and using an unobtrusive external antenna through the ceiling tile. Ceiling tile vendors are even getting into the market by producing ceiling tile panels with integrated antennas.[*]
[*] See, for example, the Armstrong i-ceilings tile at http://www.armstrong.com/commceilingsna/article7399.html.
Many commercial buildings use "dropped" ceilings, where the ceiling tile is suspended from the actual ceiling. Network wiring and electrical cables are placed above the ceiling tile, along with air ducts. In some buildings, the area between the ceiling and the ceiling tiles may be used as part of the building's air-handling system. Safety standards dictate that if objects are placed in the air-handling spaces (plenums), they must not endanger the building's occupants. In case of fire, one of the biggest dangers to people inside the building is that thick black smoke may obscure vision and otherwise obstruct attempts to escape. If an object in the air system were to start giving off smoke, the smoke would be circulated throughout the building. To protect people inside buildings, therefore, there are specific safety standards on how fire-retardant equipment placed in plenums must be. If you wish to mount wireless LAN equipment above the ceiling, ensure that the components placed above the ceiling are plenum-rated. In addition to the APs, this would include any support equipment mounted up above the ceiling as well. Power injectors are often not plenum-rated because they can usually be located safely in wiring closets. Any cables used above the ceiling tile almost certainly need to be plenum-rated. Plenum safety standards are developed by Underwriters Laboratories and published as UL standard 2043. UL also tests products for conformance to the standard and certifies those that pass.
Performance Requirements
Coverage is not the end of the story in wireless LAN design. When operating under network load, access points act like hubs. For a given coverage area, there is a fixed amount of radio capacity. An 802.11b access point can move about 6 Mbps of user data to the edge of its coverage range. The physical medium in 802.11 networks is inherently shared. A lone user connected to an access point will be able to obtain speeds of about 6 Mbps because there is no contention for the medium. As more users are added to the network, the same 6 Mbps must be divided among the users, and the protocol must work to fairly (or unfairly, as users are sure to contend) allocate transmission capacity between stations.
For a network built to serve users, coverage and quality are an inherent trade-off. It is possible to use fewer access points by attaching high-gain external antennas, but the capacity is shared over larger areas. There is nothing inherently wrong with large coverage areas, especially if the user density is not high. Some deployments may use a single access point with an external antenna to create a huge coverage area because the demand for network capacity is not very high at all. Many K-12 schools with only a few users fall into this category, which is represented by the left-hand side "coverage" picture in Figure 23-2. On the other hand, a network with lots of users may wish to use many small coverage areas. Network engineers will sometimes borrow a term from cellular telephony and refer to such a network as having "microcells." With smaller areas, any given access point is likely (although not guaranteed) to have fewer users than the large-coverage-area case. In Figure 23-2, the right-hand side picture has divided the same area into three separate subareas. As a result, each AP handles a smaller number of stations and the per-station throughput is likely to be better.
Figure 23-2. Coverage/quality trade-off
One metric that might be useful in evaluating your needs is total area throughput (or, its close relative, throughput per unit of area). Both networks in Figure 23-2 cover the same area. However, with three access points, the network on the right provides three times as much throughput. Not all networks need high total area throughput, at least initially. When the wireless LAN proves popular, and the 5 users shown in Figure 23-2 become 10, 15, or even 50 users, the total area throughput may need to be further increased.
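The total area throughput metric is simple arithmetic. A minimal sketch, with hypothetical function and variable names, using the kind of numbers shown in Figure 23-2:

```python
def area_throughput(n_aps, per_ap_mbps, area_sqft):
    """Aggregate throughput for a service area, and its density
    expressed per 1,000 square feet."""
    total_mbps = n_aps * per_ap_mbps
    return total_mbps, 1000 * total_mbps / area_sqft

# Same area, one AP versus three (6 Mbps usable payload per 802.11b AP).
coverage_design = area_throughput(1, 6, 9000)   # 6 Mbps total
capacity_design = area_throughput(3, 6, 9000)   # 18 Mbps total
```

Dividing total throughput by the expected user population then gives the per-user share, which is the number that shrinks as 5 users become 50.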
How much capacity should be reserved for each user? One answer is to undertake a detailed study of network applications and performance requirements, and to design the network appropriately. In the real world, however, most networks seem to operate on a Schrödinger's Cat sort of principle: as long as packets are moving, the network is working; if we were to inquire too deeply about its operation, it might cease to function. Generally speaking, most wireless network projects are started with only the vaguest idea of required application support, other than perhaps that "it should feel something like a wired connection." In the absence of any countervailing data, I advise planning for at least 1 Mbps for each user. 802.11a and 802.11g networks allow planning for higher rates per user, especially if you make aggressive assumptions about the burstiness of traffic.
Exploring the coverage/quality trade-off and total area throughput
Under load, an access point will act like a hub, sharing capacity between users throughout its coverage area. Networks built using large AP coverage areas are generally cheaper to build because they use fewer access points, but they may have poor service quality due to lower aggregate capacity. When each AP is responsible for providing connectivity in a large area, there is also a higher likelihood that stations at the edge of the network will communicate using slower speeds.
One way of measuring the quality of coverage is to consider the total aggregate throughput available to the service area, which is in many ways a reflection of the density of the access points. All other things equal, more access points mean that there is more radio capacity. Figure 23-3 shows three networks. On the left, there is a network with one access point. It is capable of offering up to, say, 30 Mbps of user payload data to clients in the service area. In the middle, there is a network built with three access points, operating on lower power. By separating the coverage area into independent radio cells, more throughput is available. Roughly speaking, there will be 90 Mbps available to serve clients. Finally, in the network on the right, there is a single access point with a sectorized antenna, which acts like three combined directional antennas. In some implementations, each sector is assigned its own channel, which also reduces the number of collisions between stations attempting to transmit. Done in the most sophisticated manner, each channel on the sectorized antenna will act like an independent access point, and the aggregate throughput available to clients will be 90 Mbps. Throughput quality may also be measured in terms of megabits available per square foot of service area, which is consistent with the total throughput available to the service area.
Figure 23-3. Total aggregate service area throughput illustration
In the Ethernet world, Ethernet switches increase network throughput by reducing the contention for the medium. Any approach to increasing the capacity of a wireless network takes the same qualitative approach. By shrinking the coverage area associated with any given access point, more access points can be deployed in a single service area. Although there is a great deal of hype about so-called "Wi-Fi switches," they all share a relatively simple design principle. Network management is harder when there are more elements in the network, so use aggregation devices to concentrate the complexity in a few spots where it can be dealt with, rather than scattered across the network.
Client limitations
One of the major challenges for the industry is that a great deal of 802.11 operation is under the control of the client machine and its software, which can lead to path dependencies and hysteresis in client behavior.
Nearly all of the "interesting" protocol operations that support mobility are in the hands of clients. Client software decides when to roam, how to scan for a new access point, and where to attach. Different machines on the same network may behave differently because 802.11 does not specify an algorithm for deciding when to roam or how to select access points. With so much of the protocol usage up to client implementation, yet unspecified by any standard, the behavior of clients is often a mystery. As a demonstration, get three laptop computers, add 802.11 interfaces, and roll them around your network on a cart. The three computers will move between access points at different times, and generally exhibit different behavior. Future standards, especially the forthcoming 802.11k, should help improve the quality of roaming decisions.
Take, for example, the case of a client deciding where to associate with the network. In most cases, clients will do a relatively intelligent scan when they are first activated, and choose the access point with the strongest signal. For many cards, intelligence ends there. They will continue to hang on to that first access point for dear life, even to the exclusion of much better access points that are nearby. Losing the signal is the only thing that forces many cards to roam. This behavior is often called the bug light problem because the client resembles a moth entranced by a flame, unable to move away. Bug light clients are particularly bad for throughput. As they stray far from their point of initial connection, they drop to slower speeds to stay connected.
Not only does this sap the far-ranging client of speed, but it also dramatically reduces the throughput available to other clients on the same access point. A maximum-size network payload of 1,500 bytes encapsulated in an OFDM PHY (802.11a or 802.11g) requires a much greater transmission time at slower speeds. To illustrate the extremes, the maximum size frame at 54 Mbps requires 57 data symbols and 248 microseconds to transmit. At 6 Mbps, however, it requires 512 data symbols for a total time of 2,068 microseconds, or slightly more than 8 times longer. The slower transmission rate robs other clients of the ability to transmit for 1,800 microseconds.
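The airtime figures above can be checked with a short calculation: each OFDM symbol lasts 4 µs and carries (rate × 4) data bits, a frame adds 16 SERVICE bits and 6 tail bits to the MPDU, and the preamble plus SIGNAL field take 20 µs. The 1,532-byte MPDU below (payload plus MAC framing) is an assumption chosen to be consistent with the symbol counts quoted in the text.

```python
import math

def ofdm_airtime_us(mpdu_bytes, rate_mbps):
    """Transmission time for one 802.11a/g OFDM frame (no protection)."""
    bits_per_symbol = rate_mbps * 4            # each OFDM symbol lasts 4 us
    bits = 16 + 8 * mpdu_bytes + 6             # SERVICE + data + tail bits
    symbols = math.ceil(bits / bits_per_symbol)
    return 20 + 4 * symbols                    # 20 us preamble/SIGNAL + data

print(ofdm_airtime_us(1532, 54))   # 248 us
print(ofdm_airtime_us(1532, 6))    # 2068 us
```

The ratio between the two results is slightly more than 8, which is the "bug light" penalty: one slow frame occupies the air for as long as eight fast ones.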
Leaving so much of the overall protocol operation in the hands of the client limits the ability of the network infrastructure to do the "best" thing under some circumstances. Cellular networks, for example, have the ability to direct a mobile telephone to a tower with more capacity. Wireless LAN protocols do not yet have that ability. Many vendors of wireless LAN systems have implemented "AP load balancing" capabilities that claim to give network administrators the ability to co-locate two access points in close proximity to increase overall network capacity in that location. A common approach is to monitor either the number of associations on each access point, or the amount of transit traffic through each access point, and carefully disassociate active clients on a loaded access point to encourage them to move to an unloaded AP. Without any client cooperation, it is difficult to achieve the optimal balancing because most clients will tend to go right back to the AP they were initially associated with.
Realistic throughput expectations
As more users are added to a wireless LAN, the network's capacity is divided among more users and throughput suffers. For networks using the distributed coordination function (DCF), a practical rule of thumb is to expect 50-60% of the nominal bit rate, to account for overhead from elements such as interframe spacing, the preamble, and framing headers. Network protocols add further overhead for network-layer framing and retransmission. Most network protocols also achieve reliable delivery through transport-layer acknowledgments of transmissions. Every TCP segment must be acknowledged (though not necessarily individually), and the TCP acknowledgments may collide with additional segments in transmission.[*] Table 23-1 shows a rough rule of thumb for the network capacity of an access point.
[*] One estimate I have heard is that you should expect about 10% of frames to be retransmitted if TCP/IP is used as the network and transport protocol combination.
Table 23-1. Approximate access point capacity

| Technology | Approximate capacity |
|---|---|
| 802.11 direct sequence | 1.3-1.5 Mbps |
| 802.11b | 6 Mbps |
| 802.11g, with protection | 15 Mbps |
| 802.11g, no protection | 30 Mbps, although this will be rare |
| 802.11a | 30 Mbps |
Quality of service technologies are being designed to squeeze more usable capacity out of the network, but they are not yet widely deployed. If the history of QoS is any guide, there will be a great deal of talk, followed by standards that very few people use.
Number of users per access point
When you are planning a network, you must also ask how many users can be attached to an access point. 802.11 limits the number of associated stations to 2,007, which is for all practical purposes an infinite limit. Practical considerations dictate that you limit the number of users per AP to something much, much smaller.
6 Mbps is a reasonable assumption for the user payload throughput of 802.11b. To engineer a network speed of 1 Mbps for each user, it would at first appear that only six users could be supported on an 802.11b access point. However, network traffic is bursty, so there is a natural oversubscription that can be built into any assumptions about traffic patterns. Engineering 1 Mbps for each user can be done by assuming that there are some times when users will be idle. I have generally found that a ratio in the neighborhood of 3:1 to 5:1 will be reasonable. Given that ratio, an 802.11b access point can support approximately 20 to 30 users.
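The arithmetic in this paragraph can be captured directly; the function name is hypothetical:

```python
def users_per_ap(ap_payload_mbps, per_user_mbps, oversubscription):
    """Users an AP can serve at a target per-user rate, given an
    oversubscription ratio that reflects bursty, intermittent traffic."""
    return int(ap_payload_mbps / per_user_mbps * oversubscription)

# 802.11b: 6 Mbps payload, 1 Mbps per user, 3:1 to 5:1 oversubscription.
low  = users_per_ap(6, 1.0, 3)   # 18 users
high = users_per_ap(6, 1.0, 5)   # 30 users
```

The 3:1 to 5:1 range brackets the "approximately 20 to 30 users" figure; your own ratio should come from observing how busy your users' connections actually are.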
However, the number of users per access point does not get much better if you go to 802.11a or 802.11g. Speed depends on distance in 802.11. As stations get farther from an AP, they will fall back to more robust but slower encoding methods for transmission. The higher speeds of 802.11a and 802.11g are available only relatively close to the AP, so the 20-30 user per AP number remains reasonable.
If you are using applications that are highly sensitive to network characteristics--for example, voice traffic (VoIP)--the number of stations per access point may be even lower. Wireless networks do not yet have sophisticated quality of service prioritization, and must rely on the medium itself to arbitrate access between stations. Voice traffic and data traffic are at odds with each other. Data throughput is highest when the medium is saturated and the AP transmit queue is pumping out data as fast as possible. Voice frames need to be delivered on time, and queues need to remain relatively free to accept high-priority frames for immediate delivery. Voice traffic, whether run directly on the 802.11 link layer or over IP, is highly sensitive to delay and jitter. To avoid unnecessary delay in frame delivery, you may need to restrict the number of voice handsets even further, down to 8-10 handsets per AP, until the arrival of better QoS technology.
Mobility Requirements
The days in which wireless LANs were a cool new technology populated by power users are long gone. A few years ago, it may have been acceptable to have the network merely facilitate automatic reconfiguration as users moved from one location to another. As wireless LANs have become more mature, however, users have come to expect networks that reflect the ideal of continuous connectivity, regardless of physical location or the convenience of local attachment points.
Continuous coverage and seamless roaming should be the norm throughout a campus environment. Users may move throughout a campus in unpredictable ways, while expecting that the network will just run and support their connection. Generally speaking, users expect that any journey not involving motorized transport should be supported by the wireless LAN. In designing coverage for an entire campus, you may need to engineer the network so that users can cross router boundaries while retaining their addresses, typically through some form of tunneling. Different wireless LAN architectures will accomplish the tunneling in different ways. For a detailed discussion, see Chapter 21.
Network Integration Requirements
There are two components to network planning. The first, physical integration, is largely legwork. In addition to the building map, it helps to obtain a physical network map, if one exists. It is much easier to install wireless LAN hardware when no expensive and time-consuming wiring needs to be done. Knowing the location and contents of all the wiring closets is an important first step. The second component, logical integration, consists of hooking your wireless LAN up to the existing network.
Physical integration
Physical integration consists of getting the atoms in the right places. APs need to be mounted according to your plan, and cabled appropriately. Connecting new devices to the network probably requires new cabling if you are concerned with aesthetic appearance. If not, you can run patch cords from existing jacks to the AP locations. Depending on the product and architecture you choose, the cable may be attached to an AP controller, a special wireless VLAN, or the network that exists in the closet.
In addition to supplying connectivity, you must power up the APs. It is possible, though unlikely, that you will power APs directly at service locations. Many enterprise-grade APs are designed to draw power from the Ethernet cable primarily, and may not even offer the option of an external power supply. (Some APs that offer the option of an external power supply draw 48 volt power from their power cords, which indicates that the power circuitry was designed to operate at PoE voltages.) Your switch vendor may offer power over Ethernet, but you need to check and be sure it is compatible. The surest guarantee of power compatibility is compliance with 802.3af, but you may have prestandard products that are vendor-proprietary. If you need to add power to the wiring closet, you can purchase third-party power injectors.
Logical integration
Before attempting the logical integration of your wireless LAN, you must first select its architecture (see Chapter 21). Different architectures will have different integration requirements. Generally speaking, though, you will need to connect at least one network to the wireless LAN. Dynamic network assignment based on AAA may require connecting more than one network. For the most part, these networks will be IP networks. However, in some cases, there may be legacy networking protocols that also need support over the wireless LAN.
The second component of network planning is thinking about changes to the logical network. How will mobile stations be addressed? If all wireless stations will use a single IP subnet, you need to allocate IP address space and ensure it is correctly routed to the wireless subnet. As you allocate new address space, ensure that you leave sufficient extra space for all the access points as well as any support devices. Do not succumb to the temptation to use address translation. Although many applications work through NAT, it is a potential disruption to future applications that are not NAT-aware. It may also break applications that were written before NAT was commonplace.
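When allocating address space, it helps to size the subnet with headroom for the access points, support devices, and future growth. A sketch of that sizing arithmetic; the function name and the default growth factor are assumptions:

```python
import math

def wireless_subnet_prefix(n_clients, n_aps, growth=2.0):
    """Smallest IPv4 prefix length that fits clients, APs, and an
    assumed growth factor, plus the network and broadcast addresses."""
    needed = math.ceil((n_clients + n_aps) * growth) + 2
    return 32 - math.ceil(math.log2(needed))

print(wireless_subnet_prefix(100, 10))   # 24, i.e. allocate a /24
```

Erring on the large side here is cheap; renumbering a popular wireless subnet later is not.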
As part of the network expansion, you will need to add access points. They will probably need IP addresses. If these addresses are being assigned through DHCP, you will probably want to configure your DHCP server to give each access point an assigned address rather than randomly fishing from an address pool. If the access points connect through an IP tunnel back to a centralized controller, you may need to configure filtering rules to allow the communication through, as well as configuring the tunnel on both the AP and centralized device.