
4.7 Interoperability and interdomain routing

We have discussed several different approaches to multicast distribution, each with its own strengths and weaknesses, which has led to a somewhat fragmented installed base. It is clearly desirable for different routing protocols to be able to interoperate with one another until there is a clear winner; the subsections that follow describe approaches to achieving this interoperability.

Another area we will briefly touch on here is the delivery of IP multicasts between domains (i.e., AS-to-AS multicast delivery). While an interim solution is available (through a combination of both new and existing technologies), a long-term solution requires a more radical examination of the problem.

4.7.1 Multicast border routers

There is a fundamental incompatibility between sparse- and dense-mode multicast protocols in the way they approach the construction of distribution trees. Dense-mode protocols are data driven, while sparse-mode protocols rely on explicit join requests. If a dense-mode group is to interoperate with a sparse-mode group (e.g., to form a group that is sparsely distributed over a wide area network but that is densely distributed within a single subnet), there must be a mechanism for allowing the dense group to reach out to the sparse group to request to join. The solution proposed by PIM designers is to have Multicast Border Routers (MBRs) send explicit joins to the sparse group. Note that the same approach would enable PIM-SM to interoperate with other dense-mode protocols, such as DVMRP. For further details on interoperability between different multicast routing protocols via MBRs refer to [24].
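
As a rough illustration of this idea, the sketch below shows how a border router might track dense-mode interest in a group and originate (or withdraw) an explicit join toward the sparse-mode rendezvous point on its behalf. The class and method names, and the rp_lookup callback, are hypothetical; this is a sketch of the mechanism described above, not any PIM implementation's API.

```python
# Hypothetical sketch of the MBR join-translation logic described above.
# Names (MulticastBorderRouter, send_pim_join, rp_lookup) are illustrative only.

class MulticastBorderRouter:
    """Bridges a dense-mode region (e.g., DVMRP/PIM-DM) and a PIM-SM region."""

    def __init__(self, rp_lookup):
        self.rp_lookup = rp_lookup      # maps group -> rendezvous point (RP) address
        self.joined_groups = set()      # groups already joined on the sparse side

    def on_dense_side_interest(self, group):
        """Called when the dense-mode side shows interest in a group
        (i.e., downstream members exist, so traffic is not pruned)."""
        if group not in self.joined_groups:
            rp = self.rp_lookup(group)
            self.send_pim_join(group, rp)   # explicit (*, G) join toward the RP
            self.joined_groups.add(group)

    def on_dense_side_prune(self, group):
        """Called when the last dense-mode member for the group leaves."""
        if group in self.joined_groups:
            rp = self.rp_lookup(group)
            self.send_pim_prune(group, rp)
            self.joined_groups.discard(group)

    def send_pim_join(self, group, rp):
        print(f"PIM-SM (*,{group}) join sent toward RP {rp}")

    def send_pim_prune(self, group, rp):
        print(f"PIM-SM (*,{group}) prune sent toward RP {rp}")
```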

4.7.2 Tunneling and the Multicast Backbone (MBone)

Tunneling is a transition strategy for IP multicast routing. In this context we refer to the encapsulation of multicast packets within IP unicast datagrams, which may then be routed through parts of an internetwork by conventional unicast routing protocols, such as RIP, OSPF, and EIGRP. The encapsulation is added on entry to a tunnel and stripped off on exit from the tunnel. Perhaps the best-known use of multicast tunneling is to create an Internet overlay network called the MBone.
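
As a minimal sketch of the encapsulation step, the following code wraps an existing multicast IP packet in a new unicast IPv4 header (using protocol number 4, IP-in-IP) and strips it off again at the far end. It assumes IPv4 with no options and omits details such as fragmentation, so it is illustrative rather than a faithful copy of any particular tunnel implementation.

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement checksum over an IPv4 header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate(multicast_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Prepend a unicast IPv4 header (protocol 4, IP-in-IP) at tunnel entry."""
    version_ihl = (4 << 4) | 5                   # IPv4, 20-byte header, no options
    total_length = 20 + len(multicast_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,            # version/IHL, TOS, total length
        0, 0,                                    # identification, flags/fragment offset
        64, 4, 0,                                # TTL, protocol 4 (IP-in-IP), checksum placeholder
        socket.inet_aton(tunnel_src),
        socket.inet_aton(tunnel_dst),
    )
    checksum = ipv4_checksum(header)
    header = header[:10] + struct.pack("!H", checksum) + header[12:]
    return header + multicast_packet

def decapsulate(tunneled_packet: bytes) -> bytes:
    """Strip the outer unicast header at tunnel exit, recovering the multicast packet."""
    outer_header_len = (tunneled_packet[0] & 0x0F) * 4
    return tunneled_packet[outer_header_len:]
```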

The MBone

Multicast packet forwarding is far from uniformly supported on the Internet at present. To gain experience with multicasting, the designers of the Internet decided to create a virtual overlay network on top of the physical infrastructure of the existing Internet. This overlay network is called the Multicast Backbone (MBone). The MBone carried its first worldwide event in March 1992, a real-time audioconference multicast over the Internet from an IETF meeting in San Diego. In the original experiment 20 sites were involved; by 1994 the IETF meeting in Seattle was multicasting to 567 hosts in 15 countries on two parallel channels (audio and video). The multicast routing function was provided by workstations running a daemon process (mrouted) capable of receiving encapsulated multicast packets and processing them as required. Connectivity between these devices was provided by point-to-point IP-encapsulated tunnels, with DVMRP used to create logical links between end points over one or more unicast Internet routers. In this early deployment multiple tunnels sometimes ran over the same physical link.

The MBone has grown substantially since 1992, and has subsequently been used for video- and audioconferencing, video broadcasts from international technical conferences, and NASA space shuttle missions. The MBone is probably one of the few places where DVMRP is currently implemented on a live network (although it is understood that the administrators of the MBone plan to adopt PIM in the future because of its greater efficiency). Figure 4.12 illustrates the MBone status as of May 1994.

Figure 4.12: Major MBone routers and links as of May 11, 1994. (Attributed to S. Casner)

MBone access

Multicasting can be supported in commercial multicast routers or in hosts running the multicast routing daemon (mrouted), which uses DVMRP as the routing protocol. Networks connected to the MBone must meet minimum bandwidth requirements: video transmissions require at least 128 Kbps, and audio transmissions require 9 to 16 Kbps. IETF multicast traffic averages 100 to 300 Kbps, with spikes of up to 500 Kbps. The interested reader should refer to [26, 27] for further details on the MBone and its architecture.

MBone routing

The basic idea in constructing an overlay network is to create virtual links by tunneling multicast packets inside regular IP unicast packets wherever the transmission path traverses routers that are not multicast enabled (see Figure 4.13). Since few routers in the Internet today support multicasting, the MBone is overlaid on top of the existing Internet protocols, with multicast routers (mrouters) connected by virtual point-to-point links. Unicast encapsulation hides the multicast data and its addressing information inside the payload of a packet with a new unicast IP header; the unicast destination address of this new header is the IP address of the mrouter at the far end of the tunnel. When the mrouter at the end of the tunnel receives the encapsulated packet, it strips off the outer IP header and forwards the original multicast packet. In Figure 4.13, the multicasts forwarded from Router-1 to Router-4 are sent as multicasts to Router-2 and then encapsulated in IP and tunneled (as unicasts) via Router-3 on to Router-4, where they are decapsulated. Both unicast and multicast routing tables are needed to support tunneling, since the shortest path for multicasting between R1 and R4 is not necessarily the shortest path for unicasting.
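
To make the role of the two tables concrete, here is a hypothetical forwarding decision at a tunnel entry point such as Router-2: the multicast table selects the tunnel for the group, and the unicast table then supplies the next hop toward the far end of that tunnel. The addresses and table contents are invented purely for illustration.

```python
# Hypothetical two-table lookup at a tunnel entry mrouter (Router-2 in
# Figure 4.13). Group and router addresses are invented for illustration.

MULTICAST_ROUTES = {
    # group address -> outgoing "interface": here, a tunnel to Router-4
    "224.5.6.7": {"type": "tunnel", "remote_end": "192.0.2.4"},
}

UNICAST_ROUTES = {
    # destination -> next unicast hop (how the encapsulated packet travels)
    "192.0.2.4": "192.0.2.3",   # reach Router-4's tunnel end point via Router-3
}

def forwarding_decision(group: str) -> str:
    """Describe how a multicast packet for `group` leaves this mrouter."""
    mroute = MULTICAST_ROUTES.get(group)
    if mroute is None:
        return "drop (no multicast route / no known receivers)"
    if mroute["type"] == "tunnel":
        next_hop = UNICAST_ROUTES[mroute["remote_end"]]
        return (f"encapsulate in unicast IP to {mroute['remote_end']}, "
                f"send via unicast next hop {next_hop}")
    return "forward natively as multicast"

print(forwarding_decision("224.5.6.7"))
```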

Figure 4.13: MBone tunnel. Shaded nodes are multicast-enabled routers forming an overlay network (shown in bold).

MBone topology is engineered via path metrics, which specify the routing cost of each tunnel and are used by DVMRP to select the cheapest path; the lower the metric, the lower the cost of forwarding packets through that tunnel. If, in Figure 4.13, we set up two tunnels between Router-2 and Router-4, one following R2-R3-R4 and the other R2-R6-R5-R4, with tunnel metrics 8 and 6, respectively, then the resulting MBone topology will be as illustrated in Figure 4.14.
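
The selection itself is just a lowest-metric comparison over the available virtual links. The sketch below models the two parallel tunnels between R2 and R4 with the metrics quoted above (8 and 6); the data structure and function are illustrative, not DVMRP code.

```python
# Overlay links as DVMRP sees them: each tunnel is one logical edge between
# its end-point mrouters, carrying its configured metric (values from the text).
TUNNELS = [
    {"name": "R2-R4 via R3",        "ends": ("R2", "R4"), "metric": 8},
    {"name": "R2-R4 via R6 and R5", "ends": ("R2", "R4"), "metric": 6},
]

def cheapest_tunnel(tunnels, a, b):
    """Pick the lowest-metric tunnel between two mrouters."""
    candidates = [t for t in tunnels if set(t["ends"]) == {a, b}]
    return min(candidates, key=lambda t: t["metric"]) if candidates else None

print(cheapest_tunnel(TUNNELS, "R2", "R4")["name"])   # -> R2-R4 via R6 and R5
```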

Figure 4.14: Modified MBone tunnel topology. By changing the tunnel metrics between R2 and R4 we can force a different path.

The MBone also uses a threshold to limit the distribution of multicasts. This parameter specifies the minimum TTL a multicast packet must have to be forwarded into an established tunnel. The TTL is decremented by one at every multicast router hop (i.e., it is unaffected by the number of unicast routers traversed). In the future it is envisaged that most Internet routers will be multicast enabled, which will remove the need for tunneling. The MBone may eventually become obsolete, but this could take some time given the current pace of multicast adoption on the Internet.
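
In code the threshold test is a simple comparison, sketched below; the threshold values in the example are illustrative scoping choices, not values mandated by the MBone.

```python
def forward_into_tunnel(packet_ttl: int, tunnel_threshold: int) -> bool:
    """Forward only if the packet's remaining TTL meets the tunnel's minimum;
    the TTL then drops by one, because the whole tunnel counts as a single
    multicast-router hop regardless of how many unicast routers it crosses."""
    return packet_ttl >= tunnel_threshold

# A packet sent with TTL 16 passes low-threshold tunnels but is blocked by
# tunnels configured with higher thresholds, limiting the session's scope.
for threshold in (1, 16, 32, 128):
    print(f"threshold {threshold:3}: forwarded = {forward_into_tunnel(16, threshold)}")
```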

Example MBone applications

The first multiparty video- and audioconferencing tools to be used over the MBone were developed by the Network Research Group at Lawrence Berkeley National Laboratory (LBNL). Today there are many MBone applications available, both commercial and noncommercial.

The session directory tool, SD, is particularly interesting. It can be used by MBone users to reserve and allocate media channels and to view advertised channels. SD advertises session schedules periodically (via a well-known multicast address, 224.2.2.2, and port 4000) and also assigns a unique multicast address and port number to each multicast application session (actually, to each message flow within a session).
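
For illustration, the snippet below joins the well-known SD group and port given above and prints a line per received announcement. It uses standard IP multicast socket options; parsing of the announcement payload, which a real session directory tool would do, is deliberately omitted.

```python
import socket
import struct

SD_GROUP = "224.2.2.2"   # well-known session directory address (from the text)
SD_PORT = 4000

# Bind to the SD port and ask the kernel to join the multicast group so that
# session announcements sent to 224.2.2.2:4000 are delivered to this socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SD_PORT))

membership = struct.pack("4s4s", socket.inet_aton(SD_GROUP),
                         socket.inet_aton("0.0.0.0"))    # any local interface
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    announcement, sender = sock.recvfrom(2048)
    # A real session directory would parse the announcement to extract the
    # multicast address and port assigned to each advertised session.
    print(f"{len(announcement)}-byte announcement from {sender[0]}")
```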

The MBone is used widely in the research community to transmit the proceedings of various conferences and to permit desktop conferencing. Most MBone applications run over UDP rather than TCP, since the reliability and flow-control mechanisms of TCP are not practical for real-time broadcasting of multimedia data: the occasional loss of an audio or video packet is more acceptable than the delays introduced by TCP retransmissions. Above UDP, most MBone applications use the Real-Time Transport Protocol (RTP), discussed in section 4.8.
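
The sending side is correspondingly simple: a UDP datagram addressed to the group, with the multicast TTL chosen to match the intended scope. The group address, port, and payload below are made up for illustration; real MBone tools would frame the payload with RTP.

```python
import socket
import struct

GROUP, PORT = "224.5.6.7", 5004          # illustrative group address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Choose how far the datagram may travel: the MBone's TTL thresholds bound
# its distribution, so a small TTL keeps the session relatively local.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 16))

# No connection setup, retransmission, or flow control: a lost datagram is
# simply skipped, which suits real-time audio and video.
sock.sendto(b"an RTP-framed media packet would go here", (GROUP, PORT))
```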

4.7.3 Interdomain IP multicasting

There is a perceived need to provide Internet-wide IP multicast, evidenced by the expansion of the MBone and the emergence of new multicast-aware applications. The short-term solution for interdomain multicast routing is functional but relies on an inelegant combination of new and existing technologies.

While this approach is accepted as a reasonable interim solution, it lacks scalability in the long term, and there is still a perceived need to develop a more integrated long-term strategy. Several approaches are being actively researched at this time, broadly divided into two camps.

Further discussion is beyond the scope of this book; the interested reader should keep a watchful eye on forthcoming Internet drafts in this area.
