Linux Clustering With Csm and Gpfs

1.7 Other cluster hardware components

Aside from the physical nodes (management, compute, and storage nodes) that make up a cluster, there are several other key components that must also be considered. The following subsections discuss some of these components.

1.7.1 Networks

Nodes within a cluster have many reasons to communicate; after all, they are cooperating to provide an overall result. Key communications include management and monitoring traffic, inter-process communication between cooperating applications, and access to shared storage.

Depending on the application and its performance requirements, these various communications are often carried out over separate networks. Thus, you typically have more than one network, and more than one network type, linking the nodes in your cluster.

Common network types used in clusters range from Fast Ethernet and Gigabit Ethernet for management and general traffic to specialized high-speed, low-latency interconnects for inter-process communication.

1.7.2 Storage

Most clusters require that multiple nodes have concurrent access to storage devices, and often to the same files and databases. Especially in high performance computing clusters, there is a need for very fast and reliable data access. Depending on your environment and the applications that are running on the cluster, you may choose from a variety of storage options that provide both the performance and the reliability that you require.

There are two general storage configuration options: direct attached and network shared. Direct attached storage connects each node directly to its own set of storage devices, while network shared storage relies on one or more storage nodes that provide and control access to the storage media.

We discuss some of the storage options in more detail in 2.2, "Hardware" on page 24.

1.7.3 Terminal servers

Terminal servers provide the capability to access each node in the cluster as if using a locally attached serial display. The BIOS of the compute and storage nodes in xSeries clusters is capable of redirecting the machine POST output to the serial port; after POST, the boot loader and operating system can also use the serial port for display and keyboard input.

Terminal servers let cluster operators use common tools such as telnet, rsh, or ssh to access and communicate with each node in the cluster and, if necessary, with multiple nodes simultaneously. Cluster management packages can also log everything a node sends out of its serial port to a file, even when no operator is viewing the console. This gives the operator an out-of-band method of viewing and interacting with the entire boot process of a node, from POST through the operating system load, which is useful for debugging the boot process, performing other low-level diagnostics, and normal out-of-band console access. In effect, a terminal server allows a single display to be virtually attached to every node in the cluster. Terminal servers from Equinox and from iTouch Communications, a division of MRV Communications, are examples of such devices and are commonly used in clusters.
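As an illustration of the operating-system side of this redirection, a node can be configured so that its boot loader and kernel send their console to the first serial port, which is cabled to the terminal server. The fragment below is a hypothetical sketch using the GRUB (legacy) configuration format; the baud rate, kernel version, and root partition are assumptions and must match your hardware and the terminal server's port settings.

```
# /boot/grub/menu.lst -- hypothetical sketch; adjust speed, devices, versions
# Have GRUB itself use the first serial port at 9600 baud,
# falling back to the local console if a key is pressed there first
serial --unit=0 --speed=9600
terminal --timeout=5 serial console

title Linux (serial console)
    root (hd0,0)
    # console=ttyS0,... routes kernel messages to the serial port the
    # terminal server is attached to; console=tty0 keeps local output too
    kernel /vmlinuz ro root=/dev/sda2 console=tty0 console=ttyS0,9600n8
    initrd /initrd.img
```

After boot, running a getty on ttyS0 (via /etc/inittab on systems of this era) completes the picture by presenting a login prompt over the same serial line.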

More information on terminal servers from Equinox can be found at the following Web site:

http://www.equinox.com/

More information on terminal servers from MRV can be found at the following Web site:

http://www.mrv.com/product/MRV-IR-003/features/

1.7.4 Keyboard, video, and mouse switches

Just as terminal servers remove the need for a serially attached display on every node, it is also not practical to provide a keyboard, video display, and mouse for every node in a cluster. Therefore, a common Keyboard Video Mouse (KVM) switch is also indispensable when building a cluster. It allows an operator to attach to any individual node and perform operations directly on its console when required.

Keep in mind that this is for use by the cluster administrator or operator, not a general user of the cluster.

