HP-UX 11i Internals

   

Various levels of abstraction have been introduced to facilitate the use of relocatable code, shared memory objects, and private data areas. To support these abstraction layers, memory must be managed from the logical, virtual, and physical page levels. In addition, memory page images may be stored on a disk device as part of a "front store" or in a "back store" when the kernel needs to temporarily move them out of the way.

Logical Memory

As we discussed in Chapter 5, "Process and Thread Management from the Process's Viewpoint," a process takes a "logical" view of memory. As far as the process is concerned, it is only interested in its own memory requirements: Where is my code? Where is my data? Where are the shared library routines I requested?

The process doesn't really know whether memory objects are shared or private. That is not to say that the programmer doesn't need to know the difference, but from the individual process's point of view, as long as the next instruction is available when it is fetched, its data is present and accounted for, and its system calls are fulfilled, then all is right with the world. The kernel must be much more pragmatic and play the part of the "man behind the curtain," pulling the levers and turning the cranks to keep up the appearance that the process is the be-all and end-all whenever its threads are selected to take their turn at running.

The proc table's vas and its pregion list provide the mapping for the process's perspective, but the hardware has no understanding of this point of view. The processor only understands virtual and physical memory addressing. As we learned in our discussion of the PA-RISC processor, the hardware is specifically designed to allow virtual addressing (remember the TLB and the PDIR). The kernel must support this hardware view of virtual memory if it is to be allowed the privilege of scheduling its various threads for execution. Let's take a look at the kernel's view of virtual memory.

Virtual Memory

Of all the memory perspectives, this one encompasses the largest scope. When the thing you are managing is "make believe," size is not a limiting factor. This is not to say that virtual memory is unlimited; after all, we need to establish an addressing scheme and it must fit within the capabilities of the underlying architecture.

During our discussion of the HP PA-RISC architecture, we learned that the range of the virtual address space (VAS) depends upon the specific processor we are using. In the case of narrow 32-bit processors, the theoretical virtual address range is 2^32 spaces, each containing 2^32 bytes. In actual practice, most narrow PA-RISC processors implement 2^16 spaces (65,536), each containing 2^32 bytes (4 GB).

For wide 64-bit processors, the scale is much larger. Each space consists of 2^64 bytes (16 EB), and there could be as many as 2^64 spaces. As is the case with the narrow mode, the current processors do not implement the entire range of virtual addresses available. Each space is currently limited to 2^44 bytes (16 TB), and there are 2^22 (roughly 4 million) spaces; this still provides a very large virtual memory perspective.
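
To put these implemented limits in perspective, the following back-of-the-envelope calculation, written as a small C program, simply totals the figures quoted above (2^16 spaces of 4 GB in narrow mode, 2^22 spaces of 16 TB in wide mode); it is a sketch based on those numbers, not values obtained from the hardware.

    /* Back-of-the-envelope totals for the implemented virtual address
     * space, using the limits quoted in the text (not hardware-derived). */
    #include <stdio.h>

    int main(void)
    {
        /* Narrow (32-bit): 2^16 spaces x 2^32 bytes = 2^48 bytes */
        unsigned long long narrow_bytes = (1ULL << 16) * (1ULL << 32);

        /* Wide (64-bit): 2^22 spaces x 2^44 bytes = 2^66 bytes, which
         * overflows a 64-bit counter, so we total it in terabytes.    */
        unsigned long long wide_tb = (1ULL << 22) * (1ULL << (44 - 40));

        printf("narrow: %llu TB of implemented virtual address space\n",
               narrow_bytes >> 40);
        printf("wide:   %llu TB of implemented virtual address space\n",
               wide_tb);
        return 0;
    }

Compiled and run, this reports 256 TB for a narrow system and 67,108,864 TB (64 EB) for a wide one.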

Remember that each "space" contains four quadrants of equal size. On narrow systems, each quadrant is 1 GB in size, while on wide systems each quadrant is 4 TB in size. This configuration dictates that the two most significant bits of an offset address be used to determine which quadrant we are referencing. These are sometimes called the space register selection bits.
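
The following minimal C sketch, illustrative rather than actual kernel code, shows how the two high-order bits of a narrow 32-bit offset select the quadrant and, by extension, which space register (sr4 through sr7) supplies the space ID:

    /* Illustrative only: derive the quadrant (and hence the space
     * register, sr4-sr7) from the two most significant bits of a
     * 32-bit narrow-mode offset.                                  */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t offsets[] = { 0x00001000u,    /* quadrant 0 */
                               0x40002000u,    /* quadrant 1 */
                               0x80003000u,    /* quadrant 2 */
                               0xC0004000u };  /* quadrant 3 */

        for (int i = 0; i < 4; i++) {
            unsigned quad = offsets[i] >> 30;  /* the space register selection bits */
            printf("offset 0x%08X -> quadrant %u -> sr%u\n",
                   (unsigned)offsets[i], quad, 4 + quad);
        }
        return 0;
    }

On a wide system the same selection is made from the two most significant bits of the 64-bit offset, so each of the four quadrants covers one quarter of the implemented 16-TB space.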

It is the kernel's job to manage and map this VAS, providing both private and shared regions of virtual memory. Pointers in a process pregion connect the process to kernel regions; this is where the two perspectives meet.
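
As a rough mental model of that linkage, consider the following hypothetical, simplified C declarations. These are not the actual HP-UX structure definitions; every field name here is invented purely for illustration.

    /* Hypothetical, simplified sketch of the proc -> vas -> pregion ->
     * region linkage described above.  These are NOT the real HP-UX
     * definitions; all names are illustrative only.                   */
    struct region;                  /* kernel object: shared or private memory */

    struct pregion {                /* one process's view of one region        */
        struct pregion *next;       /* next pregion on this process's list     */
        struct region  *region;     /* the kernel region backing this mapping  */
        unsigned long   space;      /* virtual space ID used for the mapping   */
        unsigned long   vaddr;      /* virtual offset where the region begins  */
        int             type;       /* text, data, stack, shared memory, ...   */
    };

    struct vas {                    /* a process's virtual address space       */
        struct pregion *pregions;   /* head of the pregion list                */
    };

    struct proc {                   /* one proc table entry (abridged)         */
        struct vas *vas;            /* the process's logical view of memory    */
        /* ... many other fields ... */
    };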

Physical Memory

Physical memory is one of the system's most precious commodities, and the kernel devotes much of its time and effort to the efficient management of this resource.

Prior to the introduction of the V-Class family of computers (and the Superdome, which closely followed), all the physical memory on an HP-UX system was managed by a single memory controller and was mapped to a contiguous range of physical addresses. This early model allowed for a fairly simple data structure to track the utilization of the physical pages.

With the release of HP-UX 11.0, this simple model became more complex, as the newer system configurations demanded ways to track noncontiguous blocks of physical memory while also allowing a great deal more physical memory to be configured for the system. On a narrow system, the maximum physical memory was limited to 4 GB, while the newer wide systems could map 512 GB (most models cannot physically hold this amount of RAM, at least not yet).

NOTE

The current wide-system limit is due to constraints imposed by some of the kernel data structures and could be raised greatly by resizing those structures. Try to spot them as we continue our discussion in this chapter.

To facilitate these growing requirements, the physical memory management structures have been partitioned to increase their scope and flexibility.

Front Store and Back Store

These two terms refer to on-disk copies of a physical memory page frame: an image stored as part of a program file is the front store, while one temporarily placed on a swap device is the back store.

The front store is commonly known as an executable file, the product of a compiler and a linking loader (generically, a file named a.out by default). An executable program file also contains references in its header to the other types of memory it may require to run, such as initialized data, uninitialized data, and shared library routines. Initialized data pages are loaded from a copy in the front store, while uninitialized pages must be produced out of thin air by the kernel as they are needed. In reality, the kernel isn't a magician; it simply finds an unused physical page to fill with zeros when one is requested.
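
A short C example makes the distinction concrete. It is not taken from any HP-UX source, and the variable names are arbitrary; the point is simply that one array has an image in the program file while the other does not.

    /* Initialized versus uninitialized data.  Compile this file and run
     * size(1) on the resulting a.out to see the text, data, and bss
     * segment sizes reported separately.                               */
    #include <stdio.h>

    int initialized_table[1024] = { 42 };  /* initialized data: its image lives
                                              in the front store (the program
                                              file) and is paged in from there  */

    int uninitialized_table[1024];         /* uninitialized data (bss): no image
                                              on disk; the kernel zero-fills a
                                              page when it is first touched     */

    int main(void)
    {
        printf("%d %d\n", initialized_table[0], uninitialized_table[0]);
        return 0;
    }

The program prints "42 0": the first value comes from a page loaded from the front-store copy, and the second from a page the kernel zero-filled on demand.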

When system memory pressure is high (the number of unallocated or free memory pages is relatively low), the kernel employs a paging system to free up space. Allocated pages that are not in frequent use are identified and copied to a back-store image on available swap space. The kernel's paging system is responsible for the reservation and allocation of the back-store pages. We discuss paging in detail later, but next we will look at the specific kernel structures created to manage each memory perspective.
