Data Flow
As discussed in the following sections, a number of important issues affect data flow in a network:
- The parts of the data circuit that make up every network, including the data terminal equipment (DTE), the data communications (or channel or circuit-terminating) equipment (DCE), the transmission channel, and the physical interface
- Modems and modulation
- Simplex, half-duplex, and full-duplex data transmission
- Coding schemes
- Asynchronous and synchronous transmission modes
- Error control
The DTE, the DCE, the Transmission Channel, and the Physical Interface
Every data network is a data circuit that has seven parts: the originating DTE, its physical interface, the originating DCE, the transmission channel, the receiving DCE, its physical interface, and the receiving DTE (see Figure 5.1). The transmission channel is the network service that the user subscribes to with a carrier (e.g., a dialup connection with an ISP).
Figure 5.1. The DTE, DCE, transmission channel, and physical interface
The DTE transmits data between two points without error; its main responsibilities are to transmit and receive information and to perform error control. The DTE generally supports the end-user applications program, data files, and databases. The DTE includes any type of computer terminal, including PCs, as well as printers, hosts, front-end processors, multiplexers, and LAN interconnection devices such as routers.
The DCE, on the other hand, provides an interface between the DTE and the transmission channel (i.e., the carrier's network). The DCE establishes, maintains, and terminates a connection between the DTE and the transmission channel. It is responsible for ensuring that the signal that comes out of the DTE is compatible with the requirements of the transmission channel. So, for instance, with an analog voice-grade line, the DCE would be responsible for translating the digital data coming from the PC into an analog form that could be transmitted over that voice-grade line. A variety of conversions (e.g., digital-to-analog conversion, conversion in voltage levels) might need to take place in a network, depending on the network service. The DCE contains the signal coding that makes these conversions possible. For example, a DCE might have to determine what voltage level to assign to a one bit and what level to assign to a zero bit. There are rules about how many bits of one type can be sent in a row; if too many are sent in sequence, the network can lose synchronization, at which point transmission errors might be introduced. The DCE applies such rules and performs the needed signal conversions. DCEs all perform essentially the same generic function, but their names differ depending on the type of network service to which they're attached. Examples of DCEs include channel service units (CSUs), data service units (DSUs), network termination units, PBX data terminal interfaces, and modems.
Another part of a data network is the physical interface, which defines how many pins are in the connector, how many wires are in the cable, and what signal is carried over which of the pins and over which of the wires, to ensure that the information is viewed compatibly. In Figure 5.1, the lines that join the DTE and DCE represent the physical interface. There are many different forms of physical interfaces.
Modems and Modulation
No discussion of data communications is complete without a discussion of modulation. As mentioned in Chapter 1, "Telecommunications Technology Fundamentals," the term modem is a contraction of the terms modulate and demodulate, and these terms refer to the fact that a modem alters a carrier signal based on whether it is transmitting a one or a zero. Digital transmission requires the use of modulation schemes, which are sometimes also called line-coding techniques. Modulation schemes convert the digital information onto the transmission medium (see Figure 5.2). Over time, many modulation schemes have been developed, and they vary in the speed at which they operate, the quality of wire they require, their immunity to noise, and their complexity. The variety of modulation schemes means that incompatibilities exist.
Figure 5.2. Modems
Components of Modulation Schemes
Modems can vary any of the three main characteristics of analog waveforms (amplitude, frequency, and phase) to encode information (see Figure 5.3):
- Amplitude modulation A modem that relies on amplitude modulation might associate ones with a high amplitude and zeros with a low amplitude. A compatible receiving modem can discriminate between the high and low amplitudes and properly interpret them so that the receiving device can reproduce the message correctly.
- Frequency modulation A frequency modulation-based modem alters the frequency value; in Figure 5.3, zero represents a low frequency and one represents a high frequency. A complementary modem decodes the original bit patterns, based on the frequency of the received signal.
Figure 5.3. Amplitude, frequency, and phase modulation
- Phase modulation Phase modulation refers to the position of the waveform at a particular instant in time (e.g., a 90-degree phase, a 180-degree phase, a 270-degree phase). A phase modulation-based modem uses the phases to differentiate between ones and zeros, so, for example, zeros can be transmitted beginning at a 90-degree phase, and ones may be transmitted beginning at a 270-degree phase.
By using the three characteristics of a waveform, a modem can encode multiple bits within a single cycle of the waveform (see Figure 5.4). The more of these variables the modem can detect, the greater the bit rate it can produce. Clever modulation schemes that vary phase, amplitude, and frequency at the same time are possible today, and they can offer higher bit rates because multiple bits can be encoded at the same instant.
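The idea of packing multiple bits into one symbol can be sketched with a toy constellation. The mapping below is hypothetical, purely for illustration (two amplitudes and two phases giving 2 bits per symbol); it is not any standard modem's scheme:

```python
# Toy constellation (hypothetical mapping, for illustration only):
# each 2-bit pair selects one combination of amplitude and phase.
CONSTELLATION = {
    (0, 0): (1.0, 0),    # low amplitude, 0-degree phase
    (0, 1): (1.0, 180),  # low amplitude, 180-degree phase
    (1, 0): (2.0, 0),    # high amplitude, 0-degree phase
    (1, 1): (2.0, 180),  # high amplitude, 180-degree phase
}

def modulate(bits):
    """Map a bitstream onto (amplitude, phase) symbols, 2 bits per symbol."""
    symbols = []
    for i in range(0, len(bits), 2):
        symbols.append(CONSTELLATION[(bits[i], bits[i + 1])])
    return symbols

def demodulate(symbols):
    """Recover the original bits by inverting the constellation mapping."""
    inverse = {v: k for k, v in CONSTELLATION.items()}
    bits = []
    for sym in symbols:
        bits.extend(inverse[sym])
    return bits

data = [1, 0, 0, 1, 1, 1, 0, 0]
assert demodulate(modulate(data)) == data  # round trip: 8 bits in 4 symbols
```

Because each symbol carries two variables (amplitude and phase), eight bits travel as only four symbols; adding more detectable levels of each variable raises the bits-per-symbol count further.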
Figure 5.4. Signal modulation
Modulation schemes vary in their spectral efficiency, which is a measure of the number of digital bits that can be encoded in a single cycle of a waveform. A bit is a unit of information, while a baud is a unit of signaling speed: the number of times per second a signal on a communications circuit changes. The ITU-T now recommends that the term baud rate be replaced by the term symbol rate. Each signal, or symbol, can contain multiple bits, based on the modulation scheme and how many variables can be encoded onto one waveform. In the simplest of examples, where there is one bit per baud, the bit rate equals the baud or symbol rate. However, with contemporary techniques, which enable multiple bits per symbol (such as 64-QAM, which encodes 6 bits per symbol), the bps rate is much higher than the baud or symbol rate. To get more bits per Hertz, many modulation techniques provide more voltage levels. To encode k bits in the same symbol time, 2^k voltage levels are required. It becomes more difficult for the receiver to discriminate among many voltage levels with consistent precision as the speed increases, so discrimination becomes a real challenge at very high data rates. (Chapter 13, "Wireless Communications Basics," talks more about spectrum reuse.)
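The symbol-rate/bit-rate relationship above can be sketched in a few lines; the log2 rule assumes every constellation state is usable for data:

```python
import math

def bit_rate(symbol_rate_baud, constellation_states):
    """Bits per second = symbols per second x bits per symbol (log2 of states)."""
    bits_per_symbol = math.log2(constellation_states)
    return symbol_rate_baud * bits_per_symbol

# 64-QAM carries log2(64) = 6 bits in every symbol, so a 1,000,000-baud
# channel yields 6,000,000 bps -- the bit rate exceeds the symbol rate.
print(bit_rate(1_000_000, 64))  # 6000000.0
print(bit_rate(2400, 2))        # one bit per baud: bit rate equals baud rate
```

With only two states (one bit per baud), the two rates coincide, which is the simple case the text describes.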
Categories of Modulation Schemes
There are two main categories of modulation schemes:
- Single-carrier In the single-carrier modulation scheme, a single channel occupies the entire bandwidth.
- Multicarrier The multicarrier modulation scheme takes a certain amount of aggregate bandwidth and divides it into subbands. Each subband is encoded by using a single-carrier technique, and bitstreams from the subbands are bonded together at the receiver. This way, no bits need to be placed on portions of the frequency band that may be subject to noise and would therefore introduce distortion. Multicarrier techniques became popular as a result of developments in digital signal processing (DSP).
Table 5.2 lists some of the most commonly used modulation schemes, and the following sections describe them in more detail.
| Scheme | Description |
|---|---|
| Single-Carrier | |
| 2B1Q | Used with ISDN, IDSL, and HDSL. |
| 64-QAM | Used with North American and European digital cable for forward (i.e., downstream) channels. |
| 256-QAM | Used with North American digital cable for forward (i.e., downstream) channels. |
| 16-QAM | Used with North American digital cable for reverse (i.e., upstream) channels. |
| QPSK | Used in North American digital cable for reverse (i.e., upstream) channels, as well as in direct broadcast satellite. |
| CAP | Used in older ADSL deployments. |
| Multicarrier | |
| OFDM | Used in European digital over-the-air broadcast as well as 802.11a, 802.11g, 802.11n, 802.16x, 802.20x, and Super 3G; it is the basis of 4G and 5G visions. |
| DMT | Used with xDSL; a preferred technique because it provides good quality. |
Single-Carrier Modulation Schemes
There are a number of single-carrier schemes:
- 2 Binary 1 Quaternary (2B1Q) 2B1Q is used in ISDN, HDSL, and IDSL. 2B1Q uses four levels of amplitude (voltage) to encode 2 bits per Hertz (bits/Hz). It is well understood, relatively inexpensive, and robust in the face of telephone plant interference.
- Quadrature Amplitude Modulation (QAM) QAM modulates both the amplitude and phase, yielding a higher spectral efficiency than 2B1Q and thus providing more bits per second over the same channel. The number of levels of amplitude and the number of phase angles are a function of line quality: cleaner lines translate into more spectral efficiency, or more bits per Hertz. Various levels of QAM exist, referred to as nn-QAM, where nn indicates the number of states per symbol. The number of bits per symbol time is k, where 2^k = nn. So, 4 bits/Hz is equivalent to 16-QAM, 6 bits/Hz is equivalent to 64-QAM, and 8 bits/Hz is equivalent to 256-QAM. As you can see, QAM has vastly improved throughput as compared to earlier techniques such as 2B1Q, which provided only 2 bits/Hz.
- Quadrature Phase-Shift Keying (QPSK) QPSK is equivalent to 4-QAM, with which you get 2 bits per symbol time. QPSK is designed to operate in harsh environments, such as over-the-air transmission and cable TV return paths. Because of its robustness and relatively low complexity, QPSK is widely used in cases such as direct broadcast satellite. Although QPSK does not provide as many bits per second as some other schemes, it ensures quality in implementations where interference could be a problem.
- Carrierless Amplitude Phase Modulation (CAP) CAP combines amplitude and phase modulation, and it is one of the early techniques used for ADSL. However, portions of the band over which ADSL operates conduct noise from exterior devices such as ham radios and CB radios, so if these devices are operating while you're on a call over an ADSL line, you experience static in the voice call or corrupted bits in a data session. Consequently, CAP is no longer the preferred technique with ADSL because it provides a rather low quality of service. (ADSL is discussed in Chapter 2, "Traditional Transmission Media," and in Chapter 12, "Broadband Access Alternatives.")
Multicarrier Modulation Schemes
There are two multicarrier schemes:
- Orthogonal Frequency Division Multiplexing (OFDM) OFDM, which is growing in importance, is used in European digital over-the-air broadcast and in many new and emerging wireless broadband solutions, including 802.11a, 802.11g, 802.16x, 802.20x, and Super 3G, and it is the basis of 4G and 5G visions. It is also used in xDSL, where it is known as DMT. OFDM is a combination of two key principles: multicarrier transmission and adaptive modulation. Multicarrier transmission is a technique that divides the available spectrum into many subcarriers, with the transmission rate reduced on each subcarrier. OFDM is similar to FDM in that multiple-user access is achieved by subdividing the available bandwidth into multiple channels that are then allocated to users. However, OFDM is a special case of FDM. An FDM channel can be likened to the water flow out of a faucet, where the water comes out as one big stream and can't be subdivided. The OFDM channel, on the other hand, can be compared to a shower, where the water flow is composed of a lot of little streams of water. This analogy also highlights one of the advantages of OFDM: If you put your thumb over the faucet, it stops all the water flow, but that is not the case with the shower, where some of the streams of water will still get through. In other words, FDM and OFDM respond differently to interference, which is minimized in the case of OFDM. (OFDM is discussed in more detail in Chapter 15, "WMANs, WLANs, and WPANs.")
- Discrete Multitone (DMT) DMT is a multicarrier scheme that allows variable spectral efficiency among the subbands it creates. Therefore, it is used in wireline media, where noise characteristics of each wire might differ, as in the wires used to carry xDSL facilities. Because spectral efficiency can be optimized for each individual wire with DMT, DMT has become the preferred choice for use with xDSL.
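The per-subband bit loading that makes DMT attractive can be sketched as follows. The SNR figures and the rough 3 dB-per-bit rule here are hypothetical, purely to illustrate the idea of loading clean subbands heavily and skipping noisy ones:

```python
def load_bits(subband_snr_db, snr_per_bit_db=3.0, max_bits=15):
    """Toy bit-loading rule: assign more bits to cleaner subbands and
    zero bits to subbands too noisy to carry data (hypothetical rule:
    roughly one bit per ~3 dB of measured SNR, capped per subband)."""
    return [min(max_bits, max(0, int(snr / snr_per_bit_db)))
            for snr in subband_snr_db]

# Hypothetical per-subband SNR measurements on a copper pair; the subband
# hit by interference (2 dB) ends up carrying no bits at all.
snr = [36.0, 30.0, 2.0, 24.0, 12.0]
print(load_bits(snr))  # [12, 10, 0, 8, 4]
```

Because the allocation is computed per subband, each wire in an xDSL bundle can be optimized individually, which is the property the text credits for DMT's adoption.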
Simplex, Half-Duplex, and Full-Duplex Data Transmission
Information flow takes three forms: simplex, half-duplex, and full-duplex (see Figure 5.5).
Figure 5.5. Simplex, half-duplex, and full-duplex data transmission
With simplex transmission, information can be transmitted in one direction only. Of course, simplex does not have great appeal for today's business communications, which involve two-way exchanges. Nonetheless, there are many applications for simplex circuits, such as a home doorbell: when someone presses the doorbell button, a signal goes to the chimes, and nothing returns over that pair of wires. Another example of a simplex application is an alarm circuit. If someone opens a door he or she is not authorized to open, a signal is sent over the wires to the security desk, but nothing comes back over the wires.
Half-duplex provides the capability to transmit information in two directions but in only one direction at a time (e.g., with a pair of walkie-talkies). Half-duplex is associated with two-wire circuits, which have one path to carry information and a second wire or path to complete the electrical loop. Because half-duplex circuits can't handle simultaneous bidirectional flow, there has to be a procedure for manipulating who's seen as the transmitter and who's seen as the receiver, and there has to be a way to reverse who acts as the receiver and who acts as the transmitter. Line turnarounds handle these reversals, but they add overhead to a session because the devices undertake a dialog to determine who is the transmitter and who is the receiver. For communication that involves much back-and-forth exchange of data, half-duplex is an inefficient way of communicating.
Full-duplex, also referred to simply as duplex, involves a four-wire circuit, and it provides the capability to communicate in two directions simultaneously. There's an individual transmit and receive path for each end of the conversation. Therefore, no line turnarounds are required, which means full-duplex offers the most efficient form of data communication. All digital services are provisioned on a four-wire circuit and hence provide full-duplex capabilities.
You may be wondering how you can conduct a two-way conversation on your telephone, which is connected to a two-wire local loop. The answer is that the telephone set itself is a full-duplex device: it contains a circuit called the network interface, or telephony hybrid, which connects the microphone and speaker to the telephone line and performs the conversion between the two-wire transmission link and the four-wire telephone set, separating the incoming audio from the outgoing signal.
Coding Schemes: ASCII, EBCDIC, Unicode, and Beyond
A coding scheme (or collating sequence) is a pattern of bits used to represent the characters in a character set, as well as carriage returns and other keyboard or control functions. Over time, different computer manufacturers and consortiums have introduced different coding schemes. The most commonly used coding schemes are ASCII, EBCDIC, and Unicode.
The American Standard Code for Information Interchange (ASCII) is probably the most familiar coding scheme. ASCII has seven information bits per character, plus one additional control bit, called a parity bit, used for error detection. In ASCII, seven ones or zeros are bundled together to represent each character. A total of 128 characters (i.e., 2^7, for the seven information bits per character, each with two possible values) can be represented in ASCII coding.
At about the same time that the whole world agreed on ASCII as a common coding scheme, IBM introduced its own proprietary scheme, called Extended Binary Coded Decimal Interchange Code (EBCDIC). EBCDIC involves eight bits of information per character and no control bits. Therefore, you can represent 256 possible characters (i.e., 2^8) with EBCDIC. This sounds like a lot of characters, but it's not enough to handle all the characters needed in the languages throughout the world. Complex Asian languages, for instance, can include up to 60,000 characters.
In Table 5.3, you can see that the uppercase letter A in ASCII coding looks quite different than it does in EBCDIC. This could be a source of incompatibility. If your workstation is coded in ASCII and you're trying to communicate with a host that's looking for EBCDIC, you will end up with garbage on your screen because your machine will not be able to understand the alphabet that the host is using.
| Character or Symbol | ASCII | EBCDIC |
|---|---|---|
| A | 1000001 | 11000001 |
| K | 1001011 | 11010010 |
| M | 1001101 | 11010100 |
| 2 | 0110010 | 11110010 |
| Carriage return | 0001101 | 00010101 |
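Python happens to ship an EBCDIC codec (cp500, one of the EBCDIC variants), so the table's values for the uppercase letter A can be checked directly:

```python
# 7-bit ASCII value of "A" versus its 8-bit EBCDIC (cp500) value.
ascii_bits = format(ord("A"), "07b")
ebcdic_bits = format("A".encode("cp500")[0], "08b")

print(ascii_bits)   # 1000001
print(ebcdic_bits)  # 11000001
```

The two bit patterns differ, which is exactly the incompatibility the text warns about: an ASCII terminal sending 1000001 to a host expecting 11000001 produces garbage.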
In the mid-1980s, a coding scheme called Unicode was developed. Unicode assigns 16 bits per character, which translates to more than 65,000 possible characters (i.e., 2^16). (Can you imagine a terminal with 60,000 keys to press?) Unicode has become the key encoding scheme for Chinese and Japanese, and new input techniques have made the keyboards easy to use.
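A quick illustration of Unicode code points (Python strings are Unicode natively):

```python
# Each character has a numeric code point; ASCII characters keep their
# old values, while CJK ideographs sit far beyond any 8-bit code.
for ch in ["A", "é", "中"]:
    print(ch, hex(ord(ch)))

# A 0x41    -- same value as 7-bit ASCII
# é 0xe9
# 中 0x4e2d -- a CJK ideograph, impossible to represent in EBCDIC's 256 slots
```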
Most people now believe that the best way to handle coding is to use natural language interfaces, such as voice recognition. Natural language interfaces are ultimately expected to be the most common form of data entry. But until we get there, the various coding schemes could be a potential source of incompatibility in a network, and you therefore might need to consider conversion between schemes. Conversion could be performed by a network element on the customer premises, or it could be a function that a network provider offers. In fact, the early packet-switched X.25 networks provided code conversion as a value-added feature.
Transmission Modes: Asynchronous and Synchronous Transmission
Another concept to be familiar with is the distinction between transmission modes. To appreciate the distinction, let's look at the historical time line again. The first terminals introduced were dumb terminals: they had no processing capabilities and no memory. They also had no clocking references, so the only way they could determine where to find the beginning or the end of a character was by framing the character with start and stop bits. These systems used asynchronous transmission, in which one character is transmitted at a time, at a variable speed (i.e., a speed depending on things such as how quickly you type or whether you stop to answer the phone). Asynchronous communication typically deals with ASCII-encoded information, which means a third control bit, the parity bit, needs to be accounted for alongside the start and stop bits. These extra control bits add up to fairly significant overhead. In essence, asynchronous transmission is at least 30% inefficient, because for every seven bits of information there are at least three bits of control, and the overhead can be higher still because 1, 1.5, or 2 stop bits may be used. Another disadvantage of asynchronous transmission is that it operates at comparatively low speeds; today, in general, it operates at no higher than 115Kbps.
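The overhead arithmetic above can be sketched as:

```python
def async_efficiency(data_bits=7, start_bits=1, stop_bits=1, parity_bits=1):
    """Fraction of transmitted bits that carry user data in one
    asynchronous character frame (start + data + parity + stop)."""
    total = data_bits + start_bits + stop_bits + parity_bits
    return data_bits / total

print(async_efficiency())             # 0.7 -> the 30% inefficiency in the text
print(async_efficiency(stop_bits=2))  # ~0.636 -> overhead grows with 2 stop bits
```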
Synchronous transmission emerged in the late 1960s, when IBM introduced its interactive processing line, which included smart terminals. These smart terminals could process information and use algorithms; for example, a terminal could use an algorithm on a message block to determine what it was composed of and in that way very succinctly check for errors. Smart terminals had buffers, so they could accumulate the characters being typed in until they had a big block that they could send all at one time. Smart terminals also had clocking devices, whereby on one pair of wires, a clocking pulse could be sent from the transmitter to the receiver. The receiver would lock in on that clocking pulse, and it could determine that with every clocking pulse it saw on one wire, it would have a bit of information present on the other wire. Therefore, the receiver could use the clocking pulse to simply count off the bits to determine where the beginning and the end of the character were, rather than actually having to frame each character with a start bit and a stop bit. Synchronous transmission in classic data communications implied sending information a block at a time at a fixed speed.
Another benefit of synchronous transmission is very tight error control. As mentioned earlier, smart terminals have processors and can apply mathematical algorithms to a block of data. By calculating the contents of that block, the terminal comes up with a 16- or 32-bit code that identifies the structure of the block's contents. The terminal adds this code to the end of the block and sends it to the receiver. The receiver performs the same mathematical algorithm on the block, and it comes up with its own 16- or 32-bit code. The receiver then compares its code with the one the terminal sent, and if they match, the receiver sends an ACK, a positive acknowledgment that everything's okay, and it moves on to sending the next block. If the two codes don't match, the receiver sends a NACK, a negative acknowledgment, which says there was an error in transmission and the previous block needs to be resent before anything else can happen. If that error is not corrected within some number of attempts that the user specifies, the receiver will disengage the session. This ensures that errors are not introduced. Yet another benefit of synchronous transmission is that it operates at higher speeds than asynchronous transmission, and today you commonly see it performing at 2Mbps.
These two types of transmission make sense in different applications. For machine-to-machine communications where you want to take advantage of high speeds and guarantee accuracy in the data flow, such as electronic funds transfer, synchronous communication is best. On the other hand, in a situation in which a human is accessing a database or reading today's horoscope, speed may not be of the essence, and error control may not be critical, so the lower-cost asynchronous method would be appropriate.
Keep in mind that things are never simple in telecom, and you rarely deal with simple alternatives; rather, you deal with layers and combinations of issues. For example, you can think of an escalator as being a synchronous network. The steps are presented at the same rate consistently, and they all travel up the ramp at the same speed. Passengers alight on steps, and all passengers are carried through that network at the same speed; therefore, the network is synchronous. However, each passenger alights on the escalator at a different rate, which makes the access to the network asynchronous. For example, an eight-year-old child might run up to the escalator at high speed and jump straight onto the third step. Behind that child might be an injured athlete with a cane, who cautiously waits while several stairs pass, until confident of stepping on the center of the stair. So people get on the escalator at varying rates and in different places; no consistent timing determines their presence.
The escalator scenario describes the modern broadband network. SDH/SONET is a synchronous network infrastructure. When bits get into an SDH/SONET frame, they all travel at OC-3 or OC-12 or one of the other line rates that SDH/SONET supports. But access onto that network might be asynchronous, through an ATM switch, where a movie might be coming in like a fire hose of information through one interface and next to it a dribble of text-based e-mail is slowly passing through. One stream of bits comes in quickly, and one comes in slowly, but when they get packaged together into a frame for transport over the fiber, they're transported at the same rate.
Error Control
Error control, which is a process of detecting and/or correcting errors, takes a number of forms, the two most common of which are parity checking and cyclic redundancy checking.
In ASCII-based terminals, which use asynchronous transmission, the error control is most often parity checking. Parity checking is a simple process of adding up the bit values to arrive at a common value, either even or odd. It doesn't matter which one, but once you've selected either even or odd, every terminal must be set to that value. Let's say we're using odd parity. If you add up the bits for character #1 in Figure 5.6, you see that they equal 2, which is an even number. We need odd parity, so the terminal inserts a 1 bit to make the total 3, which is an odd number. For character #2 the bits add up to 3, so the terminal inserts a 0 as the parity bit to maintain the odd value. The terminal follows this pattern with each of the six characters, and then it sends all the bits across the network to the receiver. The receiver then adds up the bits the same way the terminal did, and if they equal an odd number, the receiver assumes that everything has arrived correctly. If they don't equal an odd number, the receiver knows there is a problem but cannot correct it. This is the trouble with parity checking: to determine that an error has occurred, you have to look at the output report, so errors can easily go unnoticed. Thus, parity checking is not the best technique for ensuring the correctness of information.
| Bit Position | Character #1 | Character #2 | Character #3 | Character #4 | Character #5 | Character #6 |
|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 0 | 1 | 0 |
| 2 | 1 | 0 | 0 | 0 | 0 | 1 |
| 3 | 0 | 0 | 1 | 1 | 0 | 1 |
| 4 | 0 | 1 | 1 | 1 | 1 | 0 |
| 5 | 0 | 0 | 0 | 0 | 1 | 1 |
| 6 | 1 | 1 | 1 | 1 | 1 | 0 |
| 7 | 0 | 0 | 0 | 1 | 1 | 0 |
| Parity Bit | 1 | 0 | 0 | 1 | 0 | 0 |
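The odd-parity rule walked through above can be sketched and checked against the Figure 5.6 values:

```python
def odd_parity_bit(bits):
    """Return the parity bit that makes the total count of ones odd."""
    return 0 if sum(bits) % 2 == 1 else 1

# Information characters from Figure 5.6 (bit positions 1-7, top to bottom).
characters = [
    [0, 1, 0, 0, 0, 1, 0],  # #1: two ones  -> parity bit 1
    [1, 0, 0, 1, 0, 1, 0],  # #2: three ones -> parity bit 0
    [0, 0, 1, 1, 0, 1, 0],  # #3
    [0, 0, 1, 1, 0, 1, 1],  # #4
    [1, 0, 0, 1, 1, 1, 1],  # #5
    [0, 1, 1, 0, 1, 0, 0],  # #6
]
print([odd_parity_bit(c) for c in characters])  # [1, 0, 0, 1, 0, 0]
```

The computed parity bits match the figure's parity row; note that if two bits of a character flip in transit, the sum stays odd and the error slips through, which is one reason parity checking is a weak guarantee.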
Synchronous terminals and transmission use a type of error control called cyclic redundancy checking (see Figure 5.7). This is the method mentioned earlier in the chapter, whereby the entire message block is run through a mathematical algorithm. A cyclic redundancy check (CRC) code is appended to the message, and the message is sent to the receiver. The receiver recalculates the message block and compares the two CRCs. If they match, the communication continues, and if they don't match, the receiver either requests retransmissions until the problem is fixed or it disengages the session if it is not capable of being fixed within some predetermined time frame.
Figure 5.7. Cyclic redundancy checking
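The append-and-compare flow of Figure 5.7 might be sketched with Python's standard-library CRC-32 (standing in for the 16- or 32-bit code the text describes; the payload and the ACK/NACK strings here are illustrative):

```python
import binascii

def send_block(payload: bytes) -> bytes:
    """Transmitter: append a 32-bit CRC to the message block."""
    return payload + binascii.crc32(payload).to_bytes(4, "big")

def receive_block(frame: bytes):
    """Receiver: recompute the CRC over the payload and ACK/NACK the block."""
    payload, received_crc = frame[:-4], frame[-4:]
    if binascii.crc32(payload).to_bytes(4, "big") == received_crc:
        return "ACK", payload
    return "NACK", None

frame = send_block(b"transfer $100 to account 42")
print(receive_block(frame)[0])      # ACK -- CRCs match, move to next block

corrupted = frame[:9] + b"9" + frame[10:]  # one corrupted byte in transit
print(receive_block(corrupted)[0])  # NACK -- request retransmission
```

A real protocol would wrap this in the retransmission-and-disconnect logic described earlier; the point here is only the recompute-and-compare step at the receiver.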
The OSI Reference Model and the TCP/IP Reference Model