Mac OS X Internals: A Systems Approach
2.15. Mac OS X Server
The Mac OS X Server operating system is architecturally identical to Mac OS X. In fact, for a given processor architecture, Apple uses the same kernel binary on every system, whether it is a Mac mini or the highest-end Xserve.[87] The key differences between the server and desktop versions of Mac OS X lie in bundled software and the underlying hardware.[88] Examples of server-specific features include the following:

[87] Xserve is Apple's line of 1U servers. A U, or unit, is a standard measure of the height of a rack-mountable piece of equipment.

[88] Although Mac OS X Server is primarily targeted to run on Xserve hardware, it is also supported on other Macintosh hardware.
Let us look at two Apple technologies, Xgrid and Xsan, that are typically used in the context of server computing.

2.15.1. Xgrid
The abundance of computing and networking resources, along with the fact that such resources are often not fully used, has led to the harnessing of these resources to solve a variety of problems. An early example of this concept is the Xerox worm experiments, wherein programs ran multimachine computations across several Ethernet-connected Alto computers.[89]
[89] The Alto worms were conceptually similar, in some aspects, to the controller and agent programs in a modern grid-computing environment such as Apple's Xgrid.

In general, multiple computers can be combined to perform a computation-intensive task,[90] provided the task can be broken into subtasks that each computer can handle independently. Such a group of computers is called a computational grid. One may distinguish between a grid and a cluster based on how tightly coupled the constituent computers are. A grid is usually a group of loosely coupled systems that are often not even geographically close.[91] Moreover, the systems may be of any platform and may run any operating system that supports the grid software. In contrast, a cluster typically contains tightly coupled systems that are centrally managed, collocated, and interconnected through a high-performance network, and they often run the same operating system on the same platform.

[90] There are other varieties of "big" computing, such as High-Performance Computing (HPC) and High-Throughput Computing (HTC). A discussion of these is beyond the scope of this book.

[91] An example of such a grid is the SETI@home project, a scientific experiment that uses participating computers on the Internet in the Search for Extraterrestrial Intelligence (SETI).

2.15.1.1. Xgrid Architecture
Apple's Xgrid technology provides a mechanism for deploying and managing Mac OS X-based computational grids.[92] Figure 2-40 shows a simplified view of the Xgrid architecture. Xgrid has the following key components and abstractions.

[92] It is possible for Linux systems to participate as agents in an Xgrid, although Apple officially supports only Mac OS X and Mac OS X Server agents.
Figure 2-40. Xgrid architecture
Thus, clients submit jobs to the controller, which maintains most of the Xgrid logic, and the agents perform tasks. The controller's specific responsibilities include the following.
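The client/controller/agent flow described above can be sketched in a few lines of Python. This is an illustrative model only, not Apple's Xgrid API: the function names and the chunked-summation job are invented for the example.

```python
# Illustrative sketch of the Xgrid division of labor (names are hypothetical,
# not Apple's API): a client submits a job, the controller splits it into
# independent tasks and hands them to agents, then assembles the results.
from concurrent.futures import ThreadPoolExecutor


def agent_run(task):
    """An agent performs one independent subtask (here, summing a chunk)."""
    return sum(task)


def controller_submit(job, n_agents=4):
    """The controller splits the job into tasks that agents can handle
    independently, farms them out, and combines the partial results."""
    chunk = max(1, len(job) // n_agents)
    tasks = [job[i:i + chunk] for i in range(0, len(job), chunk)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        partials = pool.map(agent_run, tasks)
    return sum(partials)


result = controller_submit(list(range(100)))
assert result == sum(range(100))
```

The essential property, as the text notes, is that the subtasks are independent: the agents never communicate with one another, only with the controller.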
An Xgrid may be classified into the following categories based on the type of participating systems:
2.15.1.2. Xgrid Software
Xgrid provides GUI-based tools for monitoring grids and submitting jobs. It can also be managed from the command line: the xgrid command submits and monitors jobs, whereas the xgridctl command queries, starts, stops, or restarts the Xgrid daemons. The Xgrid agent and controller daemons reside as /usr/libexec/xgrid/xgridagentd and /usr/libexec/xgrid/xgridcontrollerd, respectively. The /etc/xgrid/agent/ and /etc/xgrid/controller/ directories contain the daemons' configuration files. The Xgrid public API (XgridFoundation.framework) provides interfaces for connecting to and managing Xgrid instances.[95] Custom Cocoa applications can be written with Xgrid integration.

[95] Another Xgrid framework, XgridInterface.framework, is a private framework.

2.15.2. Xsan
Apple's Xsan product is a storage area network (SAN) file system along with a graphical management application, the Xsan Admin. Xsan is based on the StorNext multiplatform file system from Advanced Digital Information Corporation (ADIC). In fact, Macintosh clients can be added to an existing StorNext SAN. Conversely, Xserve and Xserve RAID systems can act as controllers and storage, respectively, for client computers running StorNext software on platforms such as AIX, HP-UX, IRIX, Linux, Solaris, UNICOS/mp, and Microsoft Windows. As is the case with a typical SAN, Xsan connects computer systems and storage devices through high-speed communication channels, providing fast access to users and on-demand, nondisruptive expandability to administrators. Figure 2-41 shows Xsan's high-level architecture.

Figure 2-41. Xsan architecture
An Xsan consists of the following constituents:
2.15.2.1. Storage in Xsan
The logical, user-facing view of storage in Xsan is a volume, which represents shared storage. Figure 2-41 shows how an Xsan volume is constructed. The smallest physical building block in Xsan is a disk, whereas the smallest logical building block is a logical unit number (LUN). A LUN can be an Xserve RAID array or slice; it can also be a JBOD.[96] LUNs are combined to form storage pools, which can have different characteristics for data-loss protection or performance. For example, Figure 2-41 shows two storage pools: one that contains RAID 1 arrays for high recoverability through redundancy, and another that contains RAID 5 arrays for high performance.[97] At the file system level, Xsan allows directories to be assigned to storage pools through affinities: a user can have one directory for storing files that must have high recoverability and another for files that must have fast access. Storage pools are combined to form user-visible volumes. Once a client mounts an Xsan volume, it can use the volume as if it were a local disk. The volume is more than a local disk, however: its capacity can be increased dynamically, and it can be shared in the SAN.

[96] JBOD stands for Just a Bunch of Disks. A JBOD LUN is a virtual disk drive created from the concatenation of multiple physical disks. There is no redundancy in a JBOD configuration.

[97] A RAID 1 configuration mirrors data on two or more disks. A RAID 5 configuration stripes data blocks across three or more disks, interspersing parity information across the drive array. Parity is used to recover lost data in the case of a drive failure.

Xsan volumes support permissions and quotas. Xsan also allows different allocation strategies to be specified for volumes. The balance strategy causes new data to be written to the storage pool that has the largest amount of free space. The fill strategy causes Xsan to fill available storage pools in sequence, starting with the first.
The round-robin strategy causes Xsan to iterate circularly over all available storage pools while writing new data. An Xsan's storage capacity can be increased by adding new volumes, by adding new storage pools to existing volumes, or by adding new LUNs to an existing storage pool.[98]

[98] The existing storage pool cannot be the one that holds the volume's metadata or journal data.

2.15.2.2. Metadata Controllers
An Xsan metadata controller's primary functions are the following:
There must be at least one metadata controller, usually an Xserve system, in an Xsan. Additional controllers may be added as standby controllers, which take over if the primary controller fails. Note that the metadata controller manages only the metadata and the journal; it does not store them on its local storage. By default, a volume's metadata and journal reside on the first storage pool added to the volume.

2.15.2.3. Client Systems
Xsan clients can range from single-user desktop computers to multiuser servers. A metadata controller can be a client as well. As we saw earlier, Xsan can support other client platforms that run the StorNext software.

2.15.2.4. Communication Infrastructure
Xsan clients use Fibre Channel for file data (i.e., while communicating with Xserve RAID systems) and Ethernet for metadata[100] (i.e., while communicating with the metadata controller). Xsan supports Fibre Channel multipathing: if multiple physical connections are available, Xsan can either use dedicated connections[101] to certain LUNs in a volume or use separate connections for read and write traffic.

[100] Xsan administration traffic also goes over Ethernet.

[101] Such dedicated connections are assigned at volume-mount time.
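The two multipathing policies just described can be sketched as simple path-selection functions. This is a conceptual model under assumed names, not Xsan's implementation: the function names, the "fc0"/"fc1" path labels, and the round-robin assignment of LUNs to dedicated paths are invented for illustration.

```python
# Hypothetical sketch of the two Fibre Channel multipathing policies the text
# describes (names and path labels are illustrative, not Apple's API).
from typing import List


def dedicated_path(paths: List[str], lun_index: int) -> str:
    """Dedicated-connection policy: each LUN is pinned to one physical
    connection (assigned here round-robin at volume-mount time)."""
    return paths[lun_index % len(paths)]


def directional_path(paths: List[str], is_write: bool) -> str:
    """Read/write-split policy: reads and writes travel over separate
    connections (assumes at least two paths are available)."""
    return paths[1] if is_write else paths[0]


paths = ["fc0", "fc1"]
assert dedicated_path(paths, 0) == "fc0"
assert dedicated_path(paths, 3) == "fc1"
assert directional_path(paths, is_write=True) == "fc1"
```

Either way, the point of multipathing is the same: with more than one physical connection, traffic can be spread across connections instead of contending for one.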
On a system with Xsan software installed, the Xsan command-line utilities reside in /Library/Filesystems/Xsan/bin/.