11.3. Queueing Disciplines

Queueing systems are widely described by Kendall's notation, A/B/m/K, where A denotes the distribution of the interarrival times; B, the distribution of the service times; m, the number of servers; and K, the total capacity of the system. If the system is already at its full capacity K, an arriving packet (the (K + 1)st) is blocked and lost. A and B are normally represented by the following symbols (a short sketch after the list illustrates a few of these distributions):

M: Exponential distribution
E_k: Erlang distribution with k phases
H_k: Hyperexponential distribution with k phases
D: Deterministic distribution
G: General distribution
GI: General distribution with independent interarrival times
MMPP: Markov-modulated Poisson process
BMAP: Batch Markovian arrival process
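
To make the A and B symbols concrete, the following Python sketch draws interarrival (or service) time samples for three of the distributions above, each with the same mean 1/rate; the function name sample_time and its parameters are illustrative choices, not notation from the text.

```python
import random

def sample_time(kind, rate, k=3):
    """Draw one interarrival (or service) time for a few of the
    Kendall-notation symbols; rate is the mean number of events
    per unit time, k the number of Erlang phases (sketch only)."""
    if kind == "M":          # exponential (memoryless) distribution
        return random.expovariate(rate)
    if kind == "D":          # deterministic: constant time 1/rate
        return 1.0 / rate
    if kind == "E_k":        # Erlang with k phases and the same mean 1/rate
        return sum(random.expovariate(k * rate) for _ in range(k))
    raise ValueError("unsupported symbol: " + kind)

# Example: the A and B of an M/D/1/K system
interarrivals = [sample_time("M", rate=2.0) for _ in range(5)]
service_times = [sample_time("D", rate=3.0) for _ in range(5)]
print(interarrivals, service_times)
```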

In a queueing system, packets arrive at the buffer and are stored there. If other packets are already waiting for service, the incoming packet joins the queue behind them. A server state can therefore be either idle or busy. When the packet currently in service departs from the system, one of the stored packets is selected for service according to the scheduling discipline. A commonly used arrival process assumes that the sequence of interarrival times is independent and identically distributed (IID).
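
The idle/busy server behavior and the departure-triggered selection described above can be sketched with a small event-driven simulation. The following Python code models an M/M/1/K queue under FCFS, including the blocking of an arrival that finds the system full; the function name simulate_mm1k and its default parameters (lam, mu, K) are assumptions made for this example rather than values from the text.

```python
import heapq
import random

def simulate_mm1k(lam=1.0, mu=1.5, K=5, n_arrivals=10_000, seed=1):
    """Event-driven sketch of an M/M/1/K FCFS queue (illustrative only).
    Arrivals are Poisson with rate lam, the single server has
    exponential(mu) service times, and at most K packets may be in
    the system; an arrival that finds the system full is blocked."""
    random.seed(seed)
    events = [(random.expovariate(lam), "arrival")]   # (time, kind) min-heap
    in_system = 0                                     # packets waiting + in service
    arrived = blocked = 0

    while arrived < n_arrivals:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            arrived += 1
            heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
            if in_system == K:                        # system full: packet is blocked
                blocked += 1
            else:
                in_system += 1
                if in_system == 1:                    # server was idle: serve immediately
                    heapq.heappush(events, (t + random.expovariate(mu), "departure"))
        else:                                         # departure
            in_system -= 1
            if in_system > 0:                         # FCFS: head-of-line packet enters service
                heapq.heappush(events, (t + random.expovariate(mu), "departure"))

    return blocked / arrived                          # observed blocking probability

print("estimated blocking probability:", simulate_mm1k())
```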

Queueing systems can be classified according to their scheduling discipline as follows (a short sketch after the list illustrates how several of these rules select the next packet):

  • First come, first served (FCFS). Packets are served in the order they arrive.

  • Last come, first served (LCFS). Packets are served in reverse order of their arrival.

  • Random service. Packets are selected at random for service.

  • Priority service. The selection of a packet in the queue depends on priorities that are temporarily or permanently assigned to the packet.

  • Preemption. The packet currently being serviced can be interrupted and preempted if a packet in the queue has a higher priority.

  • Round robin. If the time allotted for servicing a packet expires before the service is finished, the packet returns to the queue, and this action is repeated until its service is completed.

  • Service sharing. Servers can share their processing capacity with one another to serve packets that require more processing time.
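
As a rough illustration of how several of these disciplines choose the next packet, the sketch below selects one packet from a list of waiting packets; the Packet fields and the convention that a smaller priority value means higher priority are assumptions made here for the example.

```python
import random
from collections import namedtuple

# Illustrative packet record: when it arrived and an assumed priority
# (smaller value = higher priority).
Packet = namedtuple("Packet", ["arrival_time", "priority"])

def select_next(queue, discipline):
    """Pick which waiting packet is served next under a few of the
    disciplines listed above (sketch; names are not from the text)."""
    if discipline == "FCFS":        # earliest arrival first
        return min(queue, key=lambda p: p.arrival_time)
    if discipline == "LCFS":        # latest arrival first
        return max(queue, key=lambda p: p.arrival_time)
    if discipline == "random":      # any waiting packet, chosen uniformly
        return random.choice(queue)
    if discipline == "priority":    # highest priority; arrival time breaks ties
        return min(queue, key=lambda p: (p.priority, p.arrival_time))
    raise ValueError("unsupported discipline: " + discipline)

waiting = [Packet(0.3, 2), Packet(0.7, 1), Packet(1.1, 2)]
for d in ("FCFS", "LCFS", "random", "priority"):
    print(d, "->", select_next(waiting, d))
```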

This chapter focuses on FIFO (i.e., FCFS) models to show how fundamental queueing in computer networks works. Chapter 12 looks at the applications of more advanced queueing models, such as priority, preemption, and round-robin queues.
