4.5 Network Choice
Choosing the appropriate system area network for a cluster can be a complicated process. Two factors weigh heavily in this sort of decision. The first is cost. Realistically, most clusters are built with a fixed budget, so a higher-priced, higher-performance network will probably come at the cost of purchasing a smaller cluster. In many cases, specialized network interconnects cost upwards of $1,000–2,000 per node, which approximates the cost of a high-performance compute node itself. Building a high-performance network can therefore reduce the cluster size by a factor of two when working with a fixed budget. As we saw in Section 1.3.6, however, a high-performance network can be a very reasonable use of resources because of the greatly improved performance it can provide.
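To make the arithmetic concrete, the short program below sizes a cluster under a fixed budget with and without a specialized interconnect. The dollar figures are hypothetical placeholders chosen only to illustrate the factor-of-two effect, not vendor quotes.

/* Fixed-budget cluster sizing sketch.  All prices are hypothetical. */
#include <stdio.h>

int main(void)
{
    const double budget    = 100000.0;  /* total hardware budget ($)      */
    const double node_cost = 1500.0;    /* commodity compute node ($)     */
    const double fast_nic  = 1500.0;    /* specialized NIC, per node ($)  */

    int plain_nodes = (int)(budget / node_cost);
    int fast_nodes  = (int)(budget / (node_cost + fast_nic));

    printf("Ethernet-only cluster:   %d nodes\n", plain_nodes);
    printf("Fast-interconnect cluster: %d nodes\n", fast_nodes);
    return 0;
}

When the interconnect costs roughly as much as the node, the second figure comes out at about half the first, which is exactly the tradeoff described above.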
The second factor is the performance of the network and, accordingly, of the cluster itself. Many applications need particular performance properties to function effectively. Serviceability is a third concern: when the scale of a cluster grows beyond 32 or 64 nodes, many low-cost solutions become quite unwieldy and can result in a largely unusable cluster. Fundamentally, all of these factors are pieces of the same puzzle: how to get the best value out of a cluster for its intended uses.
If a cluster is being built for a small number of applications, thorough application benchmarking is in order. The spectrum of communication patterns exhibited by applications ranges from occasional communication between one node and another to constant communication among all nodes. At one extreme are applications that behave like SETI@home, wherein compute nodes infrequently query a master node for a work unit and then process it for hours or days. At the other extreme are many scientific applications, where nodes are in constant communication with one or more other nodes and the speed of the computation is limited by the performance of the slowest node. As these descriptions suggest, nearly any interconnect will perform admirably in the first case, while the fastest interconnect possible is desirable in the second.
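As a minimal sketch of the first extreme, the MPI program below implements a master/worker pattern in which each worker communicates only with rank 0, and only once per work unit; process_unit() is a hypothetical stand-in for hours of real computation. A program at the other extreme would instead exchange data among all ranks at every step, which is where interconnect performance dominates.

/* Master/worker sketch of the loosely coupled extreme (compile with mpicc). */
#include <mpi.h>
#include <stdio.h>

#define NUM_UNITS 100
#define TAG_WORK  1
#define TAG_STOP  2

/* Hypothetical stand-in for hours of real computation per unit. */
static double process_unit(int unit)
{
    return unit * 2.0;
}

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: hand one unit to each worker, then reissue on demand. */
        int next = 0, active = 0;
        for (int w = 1; w < size && next < NUM_UNITS; w++) {
            MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
            next++;
            active++;
        }
        for (int w = active + 1; w < size; w++)  /* surplus workers: stop */
            MPI_Send(&next, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
        while (active > 0) {
            double result;
            MPI_Status st;
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD, &st);
            if (next < NUM_UNITS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;
            }
        }
    } else {
        /* Worker: infrequently query the master, compute, repeat. */
        for (;;) {
            int unit;
            MPI_Status st;
            MPI_Recv(&unit, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            double result = process_unit(unit);
            MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}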
The network options available to clusters range from the integrated Ethernet included with nearly any computer sold today to higher-speed interconnects with substantially higher costs. Performance varies greatly among these options. Integrated Gigabit Ethernet will typically provide 100 MB/s of bandwidth, with latencies measured in the tens to hundreds of microseconds. Specialized cluster interconnects generally provide five to ten times that bandwidth, with latencies below ten microseconds. As with many of the technologies described here, the state of the art is a fast-moving target; precise high-end performance figures would be out of date within months, so check online sources for up-to-date figures.
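One way to obtain current figures for a particular cluster is a simple ping-pong test, sketched below with MPI. It bounces a fixed-size message between two ranks and reports the one-way time and effective bandwidth; with the payload reduced to a few bytes, the same loop approximates latency. This is only a sketch: established suites such as NetPIPE or the OSU micro-benchmarks add warm-up passes and sweeps over many message sizes.

/* Ping-pong sketch: run with exactly two ranks, ideally on two nodes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps  = 1000;
    const int bytes = 1 << 20;          /* 1 MiB payload; use ~8 B for latency */
    char *buf = malloc(bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        double per_msg = elapsed / (2.0 * reps);   /* one-way time per message */
        printf("one-way time: %.1f us, bandwidth: %.1f MB/s\n",
               per_msg * 1e6, bytes / per_msg / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}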