Mac OS X Internals: A Systems Approach
7.4. Scheduling
A timesharing system provides the illusion of multiple processes running concurrently by interleaving their execution, context switching from one to another based on various conditions. The set of rules that determines the order in which threads execute is called the scheduling policy. A system component called the scheduler implements the policy through data structures and algorithms, applying it when selecting threads to run from among those that are runnable. Execution concurrency and parallelism are important goals of schedulers, especially as multiprocessor systems become commonplace. It is also common for a modern operating system to support multiple scheduling policies, allowing different types of workloads to be treated differently. In its typical operation, the Mac OS X scheduler gives the processor to each thread for a brief period of time, after which it considers switching to another thread. The amount of time a scheduled thread can run before being preempted is called the thread's timeslicing quantum, or simply quantum. Once a thread's quantum expires, it can be preempted because another thread of equal or higher priority wants to run. Moreover, a running thread can be preempted regardless of its quantum if a higher-priority thread becomes runnable. We will first look at how the Mac OS X scheduling infrastructure is initialized. Then we will discuss the scheduler's operation.

7.4.1. Scheduling Infrastructure Initialization
We saw several aspects of processor initialization during our discussion of kernel startup in Chapter 5. Figure 7-38 shows selected initializations related to scheduling. When ppc_init() starts executing on the master processor, none of the processor set structures, processor structures, and other scheduler structures have been initialized. The master processor's processor structure is initialized by processor_init() [osfmk/kern/processor.c], which sets up the processor's local run queue, sets the processor's state to PROCESSOR_OFF_LINE, marks it as belonging to no processor set, and sets other fields of the structure to their initial values.

Figure 7-38. Scheduling-related initializations during system startup
7.4.1.1. Timeslicing Quantum
As shown in Figure 7-38, processor_init() calls timer_call_setup() [osfmk/kern/timer_call.c] to arrange for the quantum expiration function, thread_quantum_expire() [osfmk/kern/priority.c], to be called. thread_quantum_expire() recalculates the quantum and priority for a thread. Note that timer_call_setup() only initializes a call entry structure specifying which function is to be called and with what parameters. This call entry will be placed on each processor's timer call queue. (The kernel maintains per-processor timer call queues.) Until the real-time clock subsystem is configured, these queues are not serviced.

// osfmk/kern/processor.c

void
processor_init(register processor_t p, int slot_num)
{
    ...
    timer_call_setup(&p->quantum_timer, thread_quantum_expire, p);
    ...
}

ppc_init() finally calls kernel_bootstrap() [osfmk/kern/startup.c] to start the higher-level boot process. One of the latter's first operations is scheduler initialization by calling sched_init() [osfmk/kern/sched_prim.c], which first calculates the standard timeslicing quantum. The built-in default preemption rate, that is, the frequency at which the kernel will preempt threads, is 100 Hz. A preemption rate of 100 Hz yields a timeslicing quantum of 0.01 s (10 ms). The preempt boot argument can be used to specify a custom value of the default preemption rate to the kernel. The kern.clockrate sysctl variable contains the values of the preemption rate and the timeslicing quantum (in microseconds).

$ sysctl kern.clockrate
kern.clockrate: hz = 100, tick = 10000, profhz = 100, stathz = 100
The tick value represents the number of microseconds in a scheduler tick. The hz value can be seen as the frequency of a hardware-independent system clock.
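The same information can be retrieved programmatically. The following is a minimal sketch, using the standard sysctlbyname() interface and the BSD struct clockinfo from <sys/time.h>:

// clockrate.c

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/time.h> // struct clockinfo

int
main(void)
{
    struct clockinfo ci;
    size_t len = sizeof(ci);

    if (sysctlbyname("kern.clockrate", &ci, &len, NULL, 0) == -1) {
        perror("sysctlbyname");
        exit(1);
    }

    // ci.tick is in microseconds; the quantum follows directly from it
    printf("hz = %d, tick = %d us, quantum = %d ms\n",
           ci.hz, ci.tick, ci.tick / 1000);

    return 0;
}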
sched_init() then initializes the global wait queues used by threads for waiting on events, initializes the default processor set by calling pset_init() [osfmk/kern/processor.c], and sets the sched_tick global variable to 0.

7.4.1.2. Timing and Clocks
Scheduling is a clock-based activity in that several of its critical functions are driven by a periodic clock or timer interrupts. Therefore, the clock subsystem must be configured for scheduling to be started. clock_config() [osfmk/kern/clock.c] configures the clock subsystem. Timer facilities on the PowerPC include the Timebase (TB) Register and the Decrementer Register (DEC). As we saw in Chapter 3, the TB Register is a 64-bit counter driven by an implementation-dependent frequency. On certain processor models, the frequency may be a function of the processor's clock frequency, whereas on some other models the TB Register is updated in accordance with an independent clock. In fact, the frequency is not even required to be constant, although a frequency change must be explicitly managed by the operating system. In any case, each increment of the TB Register adds 1 to its low-order bit. The TB Register is a volatile resource and must be initialized by the kernel during boot. The DEC is a 32-bit counter that is updated at the same frequency as the TB Register but is decremented by 1 on every update.
For a typical Timebase frequency, it would take thousands of years for the TB Register to attain its maximum value, but the DEC will pass zero in a few hundred seconds at the same frequency. When the DEC's value becomes negative, that is, when the sign bit of the 32-bit signed integer represented by the DEC's contents changes from 0 to 1, a decrementer interrupt is caused. As we saw in Chapter 5, the PowerPC exception vector entry for this interrupt resides at address 0x900. The low-level handler in osfmk/ppc/lowmem_vectors.s sets the trap code to T_DECREMENTER and passes the exception's processing up to ihandler() [osfmk/ppc/hw_exception.s], the higher-level interrupt handler. ihandler() in turn calls interrupt() [osfmk/ppc/interrupt.c].

// osfmk/ppc/interrupt.c

struct savearea *
interrupt(int type, struct savearea *ssp, ...)
{
    ...
    switch (type) {
    case T_DECREMENTER:
        ...
        rtclock_intr(0, ssp, 0);
        break;
    }
    ...
}
rtclock_intr() [osfmk/ppc/rtclock.c] is the real-time clock device interrupt handler routine. The real-time clock subsystem maintains several per-processor data structures.
Figure 7-39 shows an overview of real-time clock interrupt processing.

Figure 7-39. Real-time clock interrupt processing
So far, we have seen the core functionality provided by the real-time clock subsystem.
A global list of clock devices is maintained by the kernel, with each entry being a clock object structure containing that particular clock's control port, service port, and a machine-dependent operations list. clock_config() calls the "config" function of each clock device on the list. Subsequently, clock_init() [osfmk/kern/clock.c] is called to initialize the clock devices; it calls the "init" function of each clock device. Note that unlike clock_config(), which is called only once during bootstrapping, clock_init() is called on a processor each time the processor is started. Consider the configuration and initialization of the system clock (Figure 7-40), whose "config" and "init" functions are sysclk_config() and sysclk_init(), respectively.

Figure 7-40. System clock configuration
clock_config() also calls timer_call_initialize() [osfmk/kern/timer_call.c] to initialize the timer interrupt callout mechanism, which is used by the thread-based callout mechanism.

// osfmk/kern/timer_call.c

void
timer_call_initialize(void)
{
    ...
    clock_set_timer_func((clock_timer_func_t)timer_call_interrupt);
    ...
}

As shown in Figure 7-39, clock_set_timer_func() [osfmk/ppc/rtclock.c] merely sets its parameter (the timer_call_interrupt function pointer in this case) as the value of the rtclock_timer_expire global function pointer. Every time timer_call_interrupt() is called, it will service the timer call queue for the current processor. This way, the scheduler can arrange for thread_quantum_expire() to be invoked on a processor. clock_timebase_init() [osfmk/kern/clock.c] is a machine-independent function that calls sched_timebase_init() [osfmk/kern/sched_prim.c] to set up various time-related values used by the scheduler, such as the standard timeslicing quantum (std_quantum) and the scheduler tick interval (sched_tick_interval).
sched_timebase_init() uses clock_interval_to_absolutetime_interval() [osfmk/ppc/rtclock.c] to convert conventional (clock) intervals to machine-specific absolute-time intervals. SCHED_TICK_SHIFT is defined to be 3 in osfmk/kern/sched.h, yielding a value of 125 ms for sched_tick_interval.

7.4.1.3. Converting between Absolute- and Clock-Time Intervals
The kernel often needs to convert between absolute- and clock-time intervals. Absolute time is based on the machine-dependent TB Register. The Mach trap mach_absolute_time(), which is available in the commpage, retrieves the current value of the TB Register. It is the highest-resolution time-related function on Mac OS X. To convert an absolute-time interval to a conventional clock interval (such as a value expressed in seconds), you need the implementation-dependent conversion factor, which can be retrieved by mach_timebase_info(). The conversion factor consists of a numerator and a denominator. The resultant ratio can be multiplied with an absolute-time interval to yield an equivalent clock interval in nanoseconds. Figure 7-41 shows an example of converting between the two time intervals.

Figure 7-41. Converting between absolute- and clock-time intervals
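As a concrete illustration, the following sketch measures an interval with mach_absolute_time() and converts it to nanoseconds using the mach_timebase_info() conversion factor:

// abs2ns.c

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <mach/mach_time.h>

int
main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb); // numer/denom converts absolute units to ns

    uint64_t t0 = mach_absolute_time();
    sleep(1); // sleep for roughly one second
    uint64_t t1 = mach_absolute_time();

    uint64_t ns = (t1 - t0) * tb.numer / tb.denom;
    printf("elapsed = %llu ns (~%.3f s)\n",
           (unsigned long long)ns, ns / 1e9);

    return 0;
}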
7.4.1.4. Starting the Scheduler
The first thread to execute on the boot processor, kernel_bootstrap_thread() [osfmk/kern/startup.c], is started via load_context() [osfmk/kern/startup.c]. Besides setting up the machine-specific context of the thread, load_context() initializes certain aspects of the processor. In particular, it calls processor_up() [osfmk/kern/machine.c] to add the processor to the default processor set. kernel_bootstrap_thread() creates an idle thread for the processor, calls sched_startup() [osfmk/kern/sched_prim.c] to initiate the scheduler's periodic activities, and calls thread_bind() [osfmk/kern/sched_prim.c] to bind the current thread to the boot processor. The latter step is required so that execution remains bound to the boot processor and does not move to any other processors as they come online. Figure 7-42 shows an overview of scheduler startup.

Figure 7-42. Scheduler startup
sched_startup() also initializes the thread-based callout mechanism that allows functions to be recorded by the kernel for invocation later. For example, setitimer(2), which allows real, virtual, and profiling timers to be set for a process, is implemented using a thread callout. At this point, the following primary scheduling-related periodic activities are occurring in the kernel:

- rtclock_intr() [osfmk/ppc/rtclock.c] handles real-time clock (decrementer) interrupts.
- hertz_tick() [osfmk/kern/mach_clock.c] runs at each tick of the hardware-independent system clock.
- timer_call_interrupt() [osfmk/kern/timer_call.c] services the current processor's timer call queue.
- sched_tick_continue() [osfmk/kern/sched_prim.c] performs scheduler bookkeeping every 125 ms.
7.4.1.5. Retrieving the Value of the Scheduler Tick
Let us read the value of the sched_tick variable from kernel memory to examine the rate at which it is incremented. We can determine the address of the variable in kernel memory by running the nm command on the kernel executable. Thereafter, we will use the dd command to read its value from /dev/kmem, sleep for an integral number of seconds, and read its value again. Figure 7-43 shows a shell script that performs these steps. As seen in the output, the variable's value is incremented by 80 in 10 seconds, which is as we expected, since it should increment by 1 every 125 ms (or by 8 every second).

Figure 7-43. Sampling the value of the scheduler tick
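The same measurement can also be made from C. The following is a hedged sketch of the approach, assuming the kernel executable is /mach_kernel and exports the symbol _sched_tick; it must be run as root, and it uses nlist(3) in place of parsing nm output:

// schedtick.c

#include <fcntl.h>
#include <nlist.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct nlist nl[2];
    unsigned int before, after;

    memset(nl, 0, sizeof(nl));
    nl[0].n_un.n_name = "_sched_tick";

    if (nlist("/mach_kernel", nl) != 0 || nl[0].n_value == 0) {
        fprintf(stderr, "symbol _sched_tick not found\n");
        exit(1);
    }

    int fd = open("/dev/kmem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/kmem");
        exit(1);
    }

    pread(fd, &before, sizeof(before), (off_t)nl[0].n_value);
    sleep(10); // expect ~80 ticks: 8 per second at a 125-ms tick interval
    pread(fd, &after, sizeof(after), (off_t)nl[0].n_value);

    printf("sched_tick advanced by %u in 10 seconds\n", after - before);

    close(fd);
    return 0;
}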
7.4.1.6. Some Periodic Kernel Activities
We have already seen what rtclock_intr() does. Let us briefly look at the operations of hertz_tick(), timer_call_interrupt(), and sched_tick_continue(). hertz_tick() [osfmk/kern/mach_clock.c] performs certain operations on all processors, such as gathering statistics, tracking thread states, and incrementing user-mode and kernel-mode thread timers. Examples of statistics gathered include the total number of clock ticks and profiling information (if profiling is enabled). On the master processor, hertz_tick() additionally calls bsd_hardclock(). bsd_hardclock() [bsd/kern/kern_clock.c] performs several operations if there is a valid, current BSD process and the process is not exiting. If the processor was in user mode, bsd_hardclock() checks whether the process has a virtual interval timer, that is, an interval timer of type ITIMER_VIRTUAL that decrements in process-virtual time (only when the process is executing). Such a timer can be set by setitimer(2). If such a timer exists and has expired, bsd_hardclock() arranges for a SIGVTALRM signal to be delivered to the process.
As we saw in Chapter 6, the USER_MODE() macro, defined in osfmk/ppc/proc_reg.h, is used to examine the saved SRR1, which holds the old contents of the MSR. The PR (privilege) bit of the MSR distinguishes between kernel and user mode.
bsd_hardclock() performs other operations regardless of whether the processor was in user mode, as long as the processor was not idle. It charges the currently scheduled process with resource utilization for a tick. It then checks whether the process has exceeded its CPU time limit (as specified by the RLIMIT_CPU resource limit), sending it a SIGXCPU signal if it has. Next, it checks whether the process has a profiling timer, that is, an interval timer of type ITIMER_PROF. Such a timer decrements both in process-virtual time and when the kernel is running on behalf of the process. It can also be set by setitimer(2). If such a timer exists and has expired, bsd_hardclock() arranges for a SIGPROF signal to be delivered to the process. timer_call_interrupt() [osfmk/kern/timer_call.c] traverses the timer call queue for the current processor and calls handlers for those timers whose deadlines have expired (Figure 7-44).

Figure 7-44. Timer call processing
sched_tick_continue() [osfmk/kern/sched_prim.c] performs periodic bookkeeping functions for the scheduler. As Figure 7-45 shows, it increments the sched_tick global variable by 1, calls compute_averages() [osfmk/kern/sched_average.c] to compute the load average and the Mach factor, and calls thread_update_scan() [osfmk/kern/sched_prim.c] to scan the run queues of all processor sets and processors to possibly update thread priorities.

Figure 7-45. The scheduler's bookkeeping function
7.4.2. Scheduler Operation
Mac OS X is primarily a timesharing system in that threads are subject to timesharing scheduling unless explicitly designated otherwise. Typical timesharing scheduling aims to provide each competing thread a fair share of processor time, without guarantees, where fairness implies that the threads receive roughly equal amounts of processor resources over a reasonably long time.
Figure 7-46. A nonexhaustive call graph of functions involved in thread execution and scheduling
Several general points about scheduling on Mac OS X are noteworthy.
7.4.2.1. Priority Ranges
The Mac OS X scheduler is priority-based. The selection of threads for running takes into account the priorities of runnable threads. Table 7-2 shows the various priority ranges in the scheduling subsystem; numerically higher values represent higher priorities. The HOST_PRIORITY_INFO flavor of the host_info() Mach routine can be used to retrieve the values of several specific priorities.
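For example, the following sketch retrieves these priority values for the current host; the structure and its fields come from <mach/host_info.h>:

// hostpri.c

#include <stdio.h>
#include <mach/mach.h>
#include <mach/mach_host.h>
#include <mach/mach_error.h>

int
main(void)
{
    host_priority_info_data_t pinfo;
    mach_msg_type_number_t count = HOST_PRIORITY_INFO_COUNT;

    kern_return_t kr = host_info(mach_host_self(), HOST_PRIORITY_INFO,
                                 (host_info_t)&pinfo, &count);
    if (kr != KERN_SUCCESS) {
        mach_error("host_info:", kr);
        return 1;
    }

    printf("kernel  = %d\n", pinfo.kernel_priority);
    printf("system  = %d\n", pinfo.system_priority);
    printf("server  = %d\n", pinfo.server_priority);
    printf("user    = %d\n", pinfo.user_priority);
    printf("depress = %d\n", pinfo.depress_priority);
    printf("idle    = %d\n", pinfo.idle_priority);
    printf("minimum = %d\n", pinfo.minimum_priority);
    printf("maximum = %d\n", pinfo.maximum_priority);

    return 0;
}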
7.4.2.2. Run Queues
A fundamental data structure maintained by the Mach scheduler is a run queue. Each run queue structure (Figure 7-47) represents a priority queue of runnable threads and contains an array of NRQS doubly linked lists, one corresponding to each priority level. The structure's highq member is a hint that indicates the likely location of the highest-priority thread, which may be at a priority lower than the one specified by highq but will not be at a higher priority. Recall that each processor set has a run queue and each processor has a local run queue.

Figure 7-47. The run queue structure
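In outline, the structure looks roughly like the following sketch. The highq, count, and per-priority queues correspond to the description above; the occupancy bitmap is an assumption from my reading of the xnu sources of this vintage, and the stand-in queue_head_t merely makes the sketch self-contained:

// Stand-in for the Mach kernel's doubly linked queue head.
typedef struct queue_entry {
    struct queue_entry *next;
    struct queue_entry *prev;
} queue_head_t;

#define NRQS 128 // number of priority levels

// Illustrative sketch of a run queue; not the verbatim kernel definition.
struct run_queue {
    int          highq;             // hint: likely highest-priority queue
    int          bitmap[NRQS / 32]; // assumption: one bit per nonempty level
    int          count;             // total runnable threads on this queue
    queue_head_t queues[NRQS];      // one doubly linked list per priority
};

Selection starts at queues[highq] and walks downward until a nonempty list is found; because the hint may only overestimate, no higher-priority thread can be missed.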
7.4.2.3. Scheduling Information in Tasks and Threads
To balance processor usage among threads, the scheduler adjusts thread priorities to account for each thread's usage. Associated with each thread and task are several priority-related limits and measurements. Let us revisit the task and thread structures to examine some of the scheduling-related information contained within them. The relevant portions of the structures are annotated in Figure 7-48.

Figure 7-48. Important scheduling-related fields of the task and thread structures
As shown in Figure 7-48, each thread has a base priority. However, the thread's scheduled priority is the one that the scheduler examines while selecting threads to run.[18] The scheduled priority is computed from the base priority along with an offset derived from the thread's recent processor usage. The default base priority for timesharing user threads is 31, whereas the minimum kernel priority is 80. Consequently, kernel threads are substantially favored over standard user threads.

[18] This discussion applies only to timesharing threads. Real-time threads are treated specially by the scheduler.

7.4.2.4. Processor Usage Accounting
As a thread accumulates processor usage, its priority decreases. Since the scheduler favors higher priorities, this could lead to a situation where a thread has used so much processor time that the scheduler will assign it no further processor time owing to its greatly lowered priority. The Mach scheduler addresses this issue by aging processor usage: it exponentially "forgets" a thread's past processor usage, gradually increasing that thread's priority. However, this creates another problem: If the system is under such heavy load that most (or all) threads receive little processor time, the priorities of all such threads will increase. The resultant contention will deteriorate system response under heavy load. To counter this problem, the scheduler multiplies a thread's processor usage by a conversion factor related to system load, thereby ensuring that thread priorities do not rise because of increased system load alone. Figure 7-49 shows the calculation of a thread's timesharing priority based on its processor usage and the system's load.

Figure 7-49. Computation of the timesharing priority of a thread
We see in Figure 7-49 that the thread's processor usage (thread->sched_usage), after being lowered by a conversion factor (thread->pri_shift), is subtracted from its base priority (thread->priority) to yield the scheduled priority. Let us now see how the conversion factor is calculated and how the thread's processor usage decays over time.
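Expressed as code, the computation amounts to the following sketch; the clamping bounds and the pri_shift value used in the example are assumptions for illustration:

// schedpri.c

#include <stdio.h>

#define MINPRI 0   // assumption: lowest scheduler priority
#define MAXPRI 127 // assumption: highest scheduler priority

// thread->priority minus the scaled-down usage, clamped to the valid range
static int
compute_sched_pri(int priority, unsigned int sched_usage, int pri_shift)
{
    int pri = priority - (int)(sched_usage >> pri_shift);

    if (pri < MINPRI)
        pri = MINPRI;
    if (pri > MAXPRI)
        pri = MAXPRI;

    return pri;
}

int
main(void)
{
    // hypothetical timesharing thread: base priority 31, some usage
    printf("sched_pri = %d\n", compute_sched_pri(31, 1U << 22, 18));
    return 0;
}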
update_priority() [osfmk/kern/priority.c], which is called frequently as part of the scheduler's operation, under certain conditions updates the thread's conversion factor value by setting it to that of the processor set containing the thread.
The conversion factor consists of two components: a fixed part based on the machine-dependent absolute-time unit and a dynamic part based on system load. The global variable sched_pri_shift contains the fixed part, which is computed during scheduler initialization. The dynamic part is an entry in a constant array, with the array index based on the system load. Figure 7-50 shows a user-space implementation of a function to convert clock intervals to absolute-time intervals. Using this function, we can reconstruct the computation of sched_pri_shift in user space. The program also computes the value of sched_tick_interval, which corresponds to an interval of 125 ms.

Figure 7-50. User-space computation of sched_pri_shift and sched_tick_interval
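The inverse conversion is straightforward given the mach_timebase_info() factor: a nanosecond interval multiplied by denom/numer yields absolute-time units. The following sketch computes the value corresponding to sched_tick_interval:

// tickinterval.c

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

#define SCHED_TICK_SHIFT 3

// ns -> absolute-time units: abs = ns * denom / numer
static uint64_t
nanoseconds_to_abs(uint64_t ns)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return ns * tb.denom / tb.numer;
}

int
main(void)
{
    // (1000 >> SCHED_TICK_SHIFT) ms = 125 ms
    uint64_t abs = nanoseconds_to_abs((1000ULL >> SCHED_TICK_SHIFT) * 1000000ULL);
    printf("sched_tick_interval = %llu absolute-time units\n",
           (unsigned long long)abs);
    return 0;
}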
Figure 7-51 shows a code excerpt from the computation of the conversion factor's dynamic part.

Figure 7-51. Computation of the usage-to-priority conversion factor for timeshared priorities
The scheduler ages processor usage of threads in a distributed manner: update_priority() [osfmk/kern/priority.c], which performs the relevant calculations, is called from several places. For example, it is called when a thread's quantum expires. The function call graph in Figure 7-46 shows several invocations of update_priority(). It begins by calculating the difference (ticks) between the current scheduler tick (sched_tick), which is incremented periodically, and the thread's recorded scheduler tick (thread->sched_stamp). The latter is brought up to date by adding ticks to it. If ticks is equal to or more than SCHED_DECAY_TICKS (32), the thread's processor usage is reset to zero. Otherwise, the usage is multiplied by 5/8 for each unit of difference, that is, it is multiplied by (5/8)^ticks. There were two primary reasons behind the choice of 5/8 as the exponential decay factor: It provided scheduling behavior similar to other timesharing systems, and multiplication by it can be approximated using only shift, addition, and subtraction operations. Consider multiplying a number by 5/8, which can be written as (4 + 1)/8, that is, (4/8 + 1/8), or (1/2 + 1/8). Multiplication by (1/2 + 1/8) can be performed with a right shift by 1, a right shift by 3, and an addition. To facilitate decay calculations, the kernel maintains a static array with SCHED_DECAY_TICKS pairs of integers; the pair at index i contains shift values that approximate (5/8)^i. If the value of ticks falls between 0 and 31, both inclusive, the pair at index ticks is used according to the following formula:

if (/* the pair's second value is positive */)
    usage = (usage >> (first value)) + (usage >> abs(second value));
else
    usage = (usage >> (first value)) - (usage >> abs(second value));

The program in Figure 7-52 computes (5/8)^n, where 0 <= n < 32, using the shift values in the kernel's decay shift array and, for comparison, using functions from the math library. It also calculates the percentage difference, that is, the approximation error, which is less than 15% in the worst case.

Figure 7-52. Approximating multiplication by 5/8 as implemented in the scheduler
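The identity behind the shift pairs can be checked with a few lines of C. The kernel applies one precomputed pair for a whole (5/8)^i decay (two shifts and one add or subtract in total); the sketch below instead iterates the single-step form for clarity:

// decay.c

#include <stdio.h>

// One multiplication by 5/8: since 5/8 = 1/2 + 1/8,
// x * 5/8 is approximately (x >> 1) + (x >> 3).
static unsigned int
mul_5_8(unsigned int x)
{
    return (x >> 1) + (x >> 3);
}

int
main(void)
{
    unsigned int usage = 1U << 20; // hypothetical processor usage

    for (int ticks = 1; ticks <= 5; ticks++) {
        usage = mul_5_8(usage);
        printf("usage after %d tick(s): %u\n", ticks, usage);
    }

    return 0;
}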
Note that it is not sufficient to make a thread responsible for decaying its own processor usage. Threads with low priorities may remain on the run queue for long periods without getting a chance to run because of higher-priority threads. In particular, these low-priority threads will be unable to raise their priorities by decaying their own usage; somebody else must do so on their behalf. The scheduler periodically runs a dedicated function, thread_update_scan(), for this purpose.

// Pass #1 of thread run queue scanner
//
// Likely threads are referenced in thread_update_array[].
// This pass locks the run queues, but not the threads.
static boolean_t
runq_scan(run_queue_t runq)
{
    ...
}

// Pass #2 of thread run queue scanner (invokes pass #1)
//
// A candidate thread may have its priority updated through update_priority().
// This pass locks the thread, but not the run queue.
static void
thread_update_scan(void)
{
    ...
}
thread_update_scan() is called from the scheduler tick function sched_tick_continue(), which periodically runs to perform scheduler-related bookkeeping functions. It consists of two logical passes. In the first pass, it iterates over the run queues, comparing the sched_stamp values of timesharing threads with sched_tick. This pass collects up to THREAD_UPDATE_SIZE (128) candidate threads in an array. The second pass iterates over this array's elements, calling update_priority() on timesharing threads that satisfy certain criteria.
7.4.3. Scheduling Policies
Mac OS X supports multiple scheduling policies, namely, THREAD_STANDARD_POLICY (timesharing), THREAD_EXTENDED_POLICY, THREAD_PRECEDENCE_POLICY, and THREAD_TIME_CONSTRAINT_POLICY (real time). The Mach routines thread_policy_get() and thread_policy_set() can be used to retrieve and modify, respectively, the scheduling policy of a thread. The Pthreads API supports retrieving and setting pthread scheduling policies and scheduling parameters through pthread_getschedparam() and pthread_setschedparam(), respectively. Scheduling policy information can also be specified at pthread creation time as pthread attributes. Note that the Pthreads API uses different policies, namely, SCHED_FIFO (first in, first out), SCHED_RR (round robin), and SCHED_OTHER (a system-specific policy, which maps to the default timesharing policy on Mac OS X). In particular, the Pthreads API does not support specifying a real-time policy. Let us now look at each of the scheduling policies.
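For instance, the following sketch queries the Pthreads-level policy and priority of the calling thread; an unmodified thread should report SCHED_OTHER:

// getsched.c

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int
main(void)
{
    int policy;
    struct sched_param param;

    pthread_getschedparam(pthread_self(), &policy, &param);

    printf("policy = %s, priority = %d\n",
           (policy == SCHED_FIFO) ? "SCHED_FIFO" :
           (policy == SCHED_RR)   ? "SCHED_RR"   : "SCHED_OTHER",
           param.sched_priority);

    return 0;
}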
7.4.3.1. THREAD_STANDARD_POLICY

This is the standard scheduling policy and is the default for timesharing threads. Under this policy, threads performing long-running computations are fairly assigned approximately equal processor resources. A count of timesharing threads is maintained for each processor set.

7.4.3.2. THREAD_EXTENDED_POLICY
This is an extended version of the standard policy. In this policy, a Boolean hint designates a thread as either non-long-running (nontimesharing) or long-running (timesharing). In the latter case, this policy is identical to THREAD_STANDARD_POLICY. In the former case, the thread runs at a fixed priority, provided its processor usage does not exceed an unsafe limit; if it does, the scheduler temporarily demotes the thread to timesharing through a fail-safe mechanism (see Section 7.4.3.4).

7.4.3.3. THREAD_PRECEDENCE_POLICY
This policy allows an importance value (a signed integer) to be associated with a thread, thus allowing threads within a task to be designated as more or less important relative to one another. Other aspects being equal (the same time constraint attributes, say), the more important thread in a task will be favored over a less important one. Note that this policy can be used in conjunction with the other policies. Let us look at an example of using THREAD_PRECEDENCE_POLICY. The program in Figure 7-53 creates two pthreads within a task. Both threads run a function that continuously prints a thread label: the first thread prints the character 1, whereas the second prints 2. We set the scheduling policies of both threads to THREAD_PRECEDENCE_POLICY, with the respective importance values specified on the command line. The program runs for a few seconds, with both threads printing their labels on the standard output. We can pipe the output through the awk command-line tool to count how many times 1 and 2 were printed, which indicates the respective amounts of processing time the two threads received.

Figure 7-53. Experimenting with the THREAD_PRECEDENCE_POLICY scheduling policy
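The figure's program is not reproduced here, but its core operation, setting an importance value on a thread, reduces to a call such as the following sketch:

// precedence.c

#include <stdio.h>
#include <pthread.h>
#include <mach/mach.h>
#include <mach/thread_policy.h>

// Assign an importance value to the calling thread.
static kern_return_t
set_importance(integer_t importance)
{
    thread_precedence_policy_data_t policy;
    policy.importance = importance;

    return thread_policy_set(pthread_mach_thread_np(pthread_self()),
                             THREAD_PRECEDENCE_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_PRECEDENCE_POLICY_COUNT);
}

int
main(void)
{
    kern_return_t kr = set_importance(10); // a sample importance value
    printf("thread_policy_set returned %d\n", kr);
    return (kr == KERN_SUCCESS) ? 0 : 1;
}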
7.4.3.4. THREAD_TIME_CONSTRAINT_POLICY
This is a real-time scheduling policy intended for threads with real-time constraints on their execution. Using this policy, a thread can specify to the scheduler that it needs a certain fraction of processor time, perhaps periodically. The scheduler will favor a real-time thread over all other threads, except perhaps other real-time threads. The policy can be applied to a thread using thread_policy_set() with the following policy-specific parameters: three integers (period, computation, and constraint) and a Boolean (preemptible). Each of the three integer parameters is specified in absolute-time units. A nonzero period value specifies the nominal periodicity of the computation, that is, the time between two consecutive processing arrivals. The computation value specifies the nominal time needed during a processing span. The constraint value specifies the maximum amount of real time that may elapse from the start of a processing span to the end of the computation. (The constraint value cannot be less than the computation value.) The difference between the constraint and computation values is the real-time latency. Finally, the preemptible parameter specifies whether the computation may be interrupted. Note that the real-time policy does not require special privileges to be used. Therefore, it must be used with care, given that it raises a thread's priority above that of several kernel threads. For example, using a real-time thread may be beneficial if the thread has a time-critical deadline to meet and latency is an issue. However, if the thread consumes too much processor time, using the real-time policy can be counterproductive. The scheduler includes a fail-safe mechanism for nontimesharing threads whose processor usage exceeds an unsafe threshold. When such a thread's quantum expires, it is demoted to being a timesharing thread, and its priority is set to DEPRESSPRI. However, in the case of a real-time thread, the scheduler remembers the thread's erstwhile real-time desires. After a safe release duration, the thread is promoted to being a real-time thread again, and its priority is set to BASEPRI_RTQUEUES.
The maximum unsafe computation is defined as the product of the standard quantum and the max_unsafe_quanta constant. The default value of max_unsafe_quanta is MAX_UNSAFE_QUANTA, defined to be 800 in osfmk/kern/sched_prim.c. An alternate value can be provided through the unsafe boot-time argument.
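Applying the policy from user space looks roughly like the following sketch; the millisecond values are illustrative only (about 3 ms of computation every 10 ms, finished within 5 ms of each period's start):

// timeconstraint.c

#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>

static kern_return_t
become_realtime(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    // number of absolute-time units in one millisecond
    uint64_t ms = 1000000ULL * tb.denom / tb.numer;

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(10 * ms);
    policy.computation = (uint32_t)(3 * ms);
    policy.constraint  = (uint32_t)(5 * ms);
    policy.preemptible = TRUE;

    return thread_policy_set(pthread_mach_thread_np(pthread_self()),
                             THREAD_TIME_CONSTRAINT_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_TIME_CONSTRAINT_POLICY_COUNT);
}

int
main(void)
{
    kern_return_t kr = become_realtime();
    printf("thread_policy_set returned %d\n", kr);
    return (kr == KERN_SUCCESS) ? 0 : 1;
}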
Multimedia applications such as QuickTime Player are typical users of THREAD_TIME_CONSTRAINT_POLICY.
You can use the lstasks program from Figure 7-21 to display the scheduling policy of a task's threads.

$ sudo ./lstasks -v
...
Task #70
  BSD process id (pid) = 605 (QuickTime Player)
  ...
  thread 2/4 (0x16803) in task 70 (0x5803)
    ...
    scheduling policy = TIME_CONSTRAINT
      period      = 0
      computation = 166650
      constraint  = 333301
      preemptible = TRUE
...
The program in Figure 7-54 is a crude example of time-constrained processing. It creates a thread that performs a periodic computation involving sleeping for a fixed duration followed by processing for a fixed duration. We use mach_absolute_time() to measure the approximate difference between the time the thread wished to sleep for and the actual sleeping time. If the difference is more than a predefined threshold, we increment an error count. If the program is run with no command-line arguments, it will not modify the thread's scheduling policy. If one or more command-line arguments are provided, the program will set the policy to THREAD_TIME_CONSTRAINT_POLICY using predefined parameters. Thus, we can compare the number of errors in the two cases. Moreover, we can run other programs to load the system, for example, an infinite loop through a command such as perl -e 'while (1) {}'.

Figure 7-54. Experimenting with the THREAD_TIME_CONSTRAINT_POLICY scheduling policy
7.4.3.5. Priority Recomputation on Policy Change
When thread_policy_set() is used to change a thread's scheduling policy, or to modify the parameters of an existing policy in effect, the kernel recomputes the thread's priority and importance values, subject to the thread's maximum and minimum priority limits. Figure 7-55 shows the relevant calculations.

Figure 7-55. Recomputing a thread's priority on a scheduling-policy change
7.4.3.6. Task Roles
As we saw earlier in this chapter, the task_policy_set() routine can be used to set the scheduling policy associated with a task. TASK_CATEGORY_POLICY is an example of a task policy flavor. It informs the kernel about the role of the task in the operating system. With this flavor, task_policy_set() can be used to designate a task's role, for example, one of the following:

- TASK_UNSPECIFIED, the default role
- TASK_FOREGROUND_APPLICATION, a UI-based application running in the foreground
- TASK_BACKGROUND_APPLICATION, a UI-based application running in the background
- TASK_CONTROL_APPLICATION, a system process with UI control, such as the login window
- TASK_GRAPHICS_SERVER, the window management server
Note that roles are not inherited across tasks. Therefore, every task begins life with TASK_UNSPECIFIED as its role. We can use our lstasks program to examine the roles of various tasks in the system.

$ sudo ./lstasks -v
...
Task #21
  BSD process id (pid) = 74 (loginwindow)
  ...
  role = CONTROL_APPLICATION
...
Task #29
  BSD process id (pid) = 153 (Dock)
  ...
  role = BACKGROUND_APPLICATION
...
Task #31
  BSD process id (pid) = 156 (Finder)
  ...
  role = BACKGROUND_APPLICATION
...
Task #45
  BSD process id (pid) = 237 (Terminal)
  ...
  role = FOREGROUND_APPLICATION
...
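To designate a role programmatically, task_policy_set() is called with the TASK_CATEGORY_POLICY flavor; the following is a minimal sketch that marks the calling task as a foreground application:

// taskrole.c

#include <stdio.h>
#include <mach/mach.h>
#include <mach/task_policy.h>

int
main(void)
{
    task_category_policy_data_t policy;
    policy.role = TASK_FOREGROUND_APPLICATION;

    kern_return_t kr = task_policy_set(mach_task_self(),
                                       TASK_CATEGORY_POLICY,
                                       (task_policy_t)&policy,
                                       TASK_CATEGORY_POLICY_COUNT);
    printf("task_policy_set returned %d\n", kr);
    return (kr == KERN_SUCCESS) ? 0 : 1;
}

Designating the role of another task requires a send right to that task's port, which generally requires appropriate privileges.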