
15.3 Condor Architecture

A Condor pool comprises a single machine that serves as the central manager and an arbitrary number of other machines that have joined the pool. Conceptually, the pool is a collection of resources (machines) and resource requests (jobs). The role of Condor is to match waiting requests with available resources. Every part of Condor sends periodic updates to the central manager, the centralized repository of information about the state of the pool. The central manager periodically assesses the current state of the pool and tries to match pending requests with the appropriate resources.
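Both the periodic updates and the resource requests are expressed as ClassAds. As an illustrative sketch (the host name and attribute values are invented for this example), the update a machine sends to the central manager might carry attributes such as the following:

    MyType       = "Machine"
    Name         = "node042.example.edu"
    OpSys        = "LINUX"
    Arch         = "INTEL"
    Memory       = 512
    State        = "Unclaimed"
    Activity     = "Idle"
    LoadAvg      = 0.02
    Requirements = (LoadAvg <= 0.3) || (KeyboardIdle > 15 * 60)
    Rank         = 0

The Requirements and Rank attributes express the machine owner's policy: which jobs the machine is willing to run and which jobs it prefers.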

15.3.1 The Condor Daemons

In this subsection we describe all the daemons (background server processes) in Condor and the role each plays in the system.

15.3.2 The Condor Daemons in Action

Within a given Condor installation, one machine will serve as the pool's central manager. In addition to the condor_master daemon that runs on every machine in a Condor pool, the central manager runs the condor_collector and the condor_negotiator daemons. Any machine in the installation that is to be capable of running jobs must run the condor_startd, and any machine that is to maintain a job queue, and thereby allow its users to submit jobs, must run a condor_schedd.
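Which of these daemons a given machine runs is determined by its Condor configuration. As a rough sketch (the DAEMON_LIST macro is the standard configuration setting for this; the division of roles simply mirrors the description above), a dedicated central manager and a machine that both submits and executes jobs might be configured as follows:

    # On the central manager
    DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR

    # On a machine that both submits and executes jobs
    DAEMON_LIST = MASTER, SCHEDD, STARTD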

Condor allows any machine to serve simultaneously as an execute machine and a submission point by running both a condor_startd and a condor_schedd. Figure 15.6 displays a Condor pool in which every machine in the pool, including the central manager, can both submit and run jobs.

Figure 15.6: Daemon layout of an idle Condor pool.

The interface for adding a job to the Condor system is condor_submit, which reads a job description file, creates a job ClassAd, and hands that ClassAd to the condor_schedd managing the local job queue. Periodically, the condor_negotiator begins a negotiation cycle, during which it queries the condor_collector to discover all machines that are willing to perform work and all users with idle jobs. The condor_negotiator then contacts, in user priority order, each condor_schedd that has idle jobs in its queue and performs matchmaking: jobs are matched with machines such that the requirements expressed in both the job and machine ClassAds are satisfied and their preferences (rank) are honored.
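For concreteness, a minimal job description file handed to condor_submit might look like the following sketch (the file names and the requirements and rank expressions are purely illustrative):

    universe     = vanilla
    executable   = analysis
    arguments    = dataset.in
    output       = analysis.out
    error        = analysis.err
    log          = analysis.log
    requirements = (OpSys == "LINUX") && (Memory >= 256)
    rank         = Mips
    queue

condor_submit translates these commands into attributes of the job ClassAd; during matchmaking the requirements expression must evaluate to true against a machine's ClassAd, while rank orders the acceptable machines by preference.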

Once the condor_negotiator makes a match, the condor_schedd claims the corresponding machine and is allowed to make subsequent scheduling decisions about the order in which jobs run. This hierarchical, distributed scheduling architecture enhances Condor's scalability and flexibility.

When the condor_schedd starts a job, it spawns a condor_shadow process on the submit machine, and the condor_startd spawns a condor_starter process on the corresponding execute machine (see Figure 15.7). The shadow transfers the job ClassAd and any required data files to the starter, which spawns the user's application.

Figure 15.7: Daemon layout when a job submitted from Machine 2 is running.
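In a pool without a shared file system, the data files that the shadow sends to the starter are listed in the submit description file. A hedged sketch of the relevant submit commands (the exact command names vary somewhat with the Condor version, and the file names here are hypothetical):

    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    transfer_input_files    = dataset.in, params.cfg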

If the job is a Standard Universe job, the shadow will begin to service remote system calls originating from the user job, allowing the job to transparently access data files on the submitting host.
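A job enters the Standard Universe by being relinked against the Condor libraries with condor_compile and submitted with the standard universe. A brief sketch (the program name is hypothetical):

    % condor_compile gcc -o analysis analysis.c

and, in the submit description file:

    universe   = standard
    executable = analysis
    queue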

When the job completes or is aborted, the condor_starter removes every process spawned by the user job and frees any temporary scratch disk space the job used. This ensures that the execute machine is left in a clean state and that resources such as processes and disk space are not leaked.
