Implementation-Specific Behavior
802.11 is not a rigorous standard. Several components of the standard are relatively loose and leave a great deal up to the particular implementation. Most implementations are also relatively young, and may behave in a nondeterministic fashion. I have participated in tests that made use of several identical computers, all imaged from the same software distribution and using the exact same wireless LAN hardware and driver revision. Even though the computers were in the same location with identical configurations, their behavior differed significantly.
Rebooting Interface Cards
802.11 is a complex protocol with many options, and running the newest protocols exposes the newest bugs. 802.11 interfaces use relatively general-purpose microprocessors running software. As with a great deal of software, cards that are in a strange state may be helped by "rebooting" to clear any protocol state stored in the MAC processor. External cards can be rebooted by removing and re-inserting them; internal cards must be rebooted by power cycling them through the system software. It is not sufficient to unload and reload drivers, since the object is to clear all state in the wireless LAN interface.
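On Linux, one way to approximate a power cycle of an internal card through system software is to remove the device from the PCI bus and then rescan, which forces the interface to be rediscovered and re-probed rather than merely re-attached to its driver. The sketch below is only an illustration: the PCI address is a placeholder, root privileges are required, and how completely this clears the MAC processor's state depends on the platform and driver.

```python
from pathlib import Path
import time

# Placeholder PCI address for the wireless interface; find the real one with lspci.
PCI_ADDR = "0000:02:00.0"

def reset_internal_card(pci_addr: str = PCI_ADDR) -> None:
    """Remove the card from the PCI bus, then rescan so it is re-probed.

    This goes further than unloading and reloading the driver: the device
    itself disappears and is rediscovered, the closest software analog to
    pulling an external card and reinserting it.
    """
    device = Path("/sys/bus/pci/devices") / pci_addr
    (device / "remove").write_text("1")          # detach the device
    time.sleep(1)                                # let the hardware settle
    Path("/sys/bus/pci/rescan").write_text("1")  # re-enumerate the bus

if __name__ == "__main__":
    reset_internal_card()
```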
Restarting a card may be required to clear state that is preventing successful operation. As a first step, restart the card when any of the following occur (a rough watchdog sketch follows the list):
- The client system is associated, but cannot send or receive traffic. If the network is encrypted, the problem is often a lack of synchronization of cryptographic keys. This problem is often exacerbated by roaming because every change between APs results in the transmission of new keys.
- No scan list can be built. If you are sure that there is a network within range but it will not show up in the client utility, the card may be in a state where it is unable to supply a scan list.
- Authentication/association failures occur in rapid succession. If the state of the client system software prevents a successful connection but the network is on a "preferred" list, the client will keep retrying the attempt.
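As a rough illustration of these conditions, the watchdog sketch below polls a driver status snapshot and resets the interface when any of the three symptoms appears. The status fields and the `get_status`/`reset_card` callables are hypothetical placeholders, not a real driver API, and the thresholds are assumed values.

```python
import time

FAILURE_WINDOW_S = 30     # assumed window for counting auth/assoc failures
MAX_AUTH_FAILURES = 5     # assumed threshold for "rapid succession"

def should_reset(status) -> bool:
    """Decide whether the card looks wedged badly enough to warrant a reset."""
    # Associated, but no traffic is moving in either direction.
    if status.associated and status.rx_packets == 0 and status.tx_packets == 0:
        return True
    # A network is known to be in range, but no scan list can be built.
    if status.expected_network_in_range and not status.scan_results:
        return True
    # Repeated authentication/association failures for a preferred network.
    if status.auth_failures_in(FAILURE_WINDOW_S) >= MAX_AUTH_FAILURES:
        return True
    return False

def watchdog(get_status, reset_card, interval_s: int = 10) -> None:
    """Poll the driver state and reset the card when it appears stuck."""
    while True:
        if should_reset(get_status()):
            reset_card()   # e.g., the PCI remove/rescan shown earlier
        time.sleep(interval_s)
```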
Scanning and Roaming
Every card behaves differently when searching for a network to attach to, and in how it decides to move between APs. 802.11 places no constraints on how a client device decides when to move between APs, and does not provide any straightforward way for the AP to influence that decision. Most client systems use signal strength or quality as the primary metric, and will attempt to associate with the AP whose signal is strongest.
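In pseudocode, the dominant policy amounts to "join the strongest signal." The sketch below assumes scan results carry a signal-strength reading in dBm, as a typical client utility reports; nothing in 802.11 mandates this particular rule, but most shipping clients implement some variation of it.

```python
def pick_access_point(scan_results):
    """Select the AP to associate with using signal strength alone.

    `scan_results` is assumed to be a list of entries with `.ssid` and
    `.signal_dbm` attributes; returns None when nothing is in range.
    """
    if not scan_results:
        return None
    return max(scan_results, key=lambda ap: ap.signal_dbm)
```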
Most cards monitor the signal-to-noise ratio of received frames, as well as the data rate in use, to determine when to roam to a new AP. When the signal-to-noise ratio is low at a slow data rate, the client system begins to look for another AP. Many clients put off moving as long as possible, in part because looking for a new AP requires tuning to other channels and may interrupt communications in progress. Client stickiness is sometimes referred to as the bug light syndrome: once a client has attached to an AP, it hangs on for dear life, like a bug drawn to a bug zapper. Even if the client moves a great distance from the AP, with a consequent drop in signal strength, most clients do not begin the roaming process until the signal is almost lost.
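The bug light behavior can be written as a roam trigger that fires only when both the signal-to-noise ratio and the data rate have degraded badly. The thresholds here are illustrative assumptions; vendors pick their own values and rarely document them.

```python
ROAM_SNR_THRESHOLD_DB = 10      # assumed: roam only when the link is nearly unusable
ROAM_RATE_THRESHOLD_MBPS = 1.0  # assumed: and the card has fallen to the lowest rate

def should_roam(current_snr_db: float, current_rate_mbps: float) -> bool:
    """Sticky roam decision: stay with the current AP until the signal is
    almost lost, because scanning means tuning away from the serving channel
    and interrupting traffic in progress."""
    return (current_snr_db < ROAM_SNR_THRESHOLD_DB and
            current_rate_mbps <= ROAM_RATE_THRESHOLD_MBPS)
```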
Roaming in 802.11 is entirely driven by client decisions. Where to send the Association Request frames is entirely in the hands of the client system's driver and firmware, and is not constrained by the 802.11 specification in any way. It would be 802.11-compliant, though awful, to connect to the AP with the weakest signal! (An unfortunate corollary is that driver updates to fix bugs may alter the roaming behavior of client systems in undesirable ways.) Access points have no protocol operations that can influence where clients attach or whether they will move. Implementing better roaming technology is a major task for 802.11 as time-critical streaming applications begin to use wireless LANs.
Rate Selection
802.11 lays out basic ground rules for how multirate support needs to work, but it leaves the rate selection algorithm up to the software running on the interface. Generally speaking, an interface tries to transmit at higher speeds several times before downgrading to lower speeds. Part of that is simply common sense. In the time it takes to transmit a frame with a 1,500-byte payload at 1 Mbps, it would be possible to transmit the same frame 8 times at 11 Mbps, or over 20 times on an 802.11g network running at 54 Mbps with protection enabled. (Without protection, the multiplier is 40!) If the frame was corrupted by a one-time event, it makes sense to retry a few times before accepting the more drastic penalty of lowering the data rate.
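A back-of-the-envelope airtime calculation shows why retrying at a high rate first is cheap. The sketch below counts only the data frame itself, assuming a 192 µs long preamble for the 802.11b rates and a 20 µs OFDM preamble at 54 Mbps; acknowledgments, interframe spaces, and protection exchanges are ignored, which is why the raw ratios it prints come out somewhat higher than the multipliers quoted above.

```python
FRAME_BITS = (1500 + 28) * 8   # 1,500-byte payload plus MAC header and FCS

def airtime_us(rate_mbps: float, preamble_us: float) -> float:
    """Rough on-air time of a single data frame, in microseconds."""
    return preamble_us + FRAME_BITS / rate_mbps

slow = airtime_us(1, 192)            # about 12.4 ms at 1 Mbps
print(slow / airtime_us(11, 192))    # roughly 9.5 frames' worth at 11 Mbps
print(slow / airtime_us(54, 20))     # roughly 50 frames' worth at 54 Mbps
```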
Step-down algorithms are generally similar. After trying some number of times to transmit a frame, the interface falls back to a lower data rate. Most cards step down one rate at a time until an acknowledgment is received, though there is no requirement for them to do so. It would be a valid rate selection algorithm to slow down to the minimum data rate at the first sign of trouble. Step-up algorithms work the same way in reverse. When "several" frames are received with a much higher signal-to-noise ratio than the current rate requires, the interface may consider stepping up to the next higher rate.
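The overall shape of such an algorithm can be sketched as follows. The retry count, the step-down trigger, and the step-up criterion ("several" clean frames with SNR to spare) are all implementation choices; the constants below are assumptions chosen for illustration, using the 802.11b rate set.

```python
RATES_MBPS = [1, 2, 5.5, 11]     # 802.11b rates, lowest to highest
RETRIES_BEFORE_STEP_DOWN = 4     # assumed retry budget at each rate
GOOD_FRAMES_TO_STEP_UP = 10      # assumed meaning of "several"
SNR_MARGIN_DB = 5                # assumed margin above the current rate's needs

class RateSelector:
    def __init__(self) -> None:
        self.index = len(RATES_MBPS) - 1   # start optimistically at the top rate
        self.retries = 0
        self.good_frames = 0

    @property
    def rate(self) -> float:
        return RATES_MBPS[self.index]

    def on_tx_result(self, acked: bool) -> None:
        """Retry a few times at the current rate, then step down one rate."""
        if acked:
            self.retries = 0
            return
        self.retries += 1
        if self.retries >= RETRIES_BEFORE_STEP_DOWN and self.index > 0:
            self.index -= 1               # one step at a time, as most cards do
            self.retries = 0
            self.good_frames = 0

    def on_rx_frame(self, snr_db: float, required_snr_db: float) -> None:
        """Step up after several frames arrive with SNR well above what the
        current rate requires."""
        if snr_db >= required_snr_db + SNR_MARGIN_DB:
            self.good_frames += 1
            if (self.good_frames >= GOOD_FRAMES_TO_STEP_UP
                    and self.index < len(RATES_MBPS) - 1):
                self.index += 1
                self.good_frames = 0
```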