Solaris Internals: Solaris 10 and OpenSolaris Kernel Architecture (2nd Edition)

10.1. Physical Memory Allocation

Solaris uses the system's RAM as a central pool of physical memory for many different consumers within the system. Physical memory is handed out from the central pool at allocation time and returned to the pool when it is no longer needed. A system daemon (the page scanner) manages memory when there is a systemwide shortage, stealing pages back from consumers and returning them to the pool. The flow of memory allocations is shown in Figure 10.1.

Figure 10.1. Life Cycle of Physical Memory

10.1.1. The Allocation Cycle of Physical Memory

The most significant part of the central pool of physical memory is the freelist. Physical memory is placed on the freelist in page-size chunks when the system is first booted and then consumed as required. Three major types of allocations occur from the freelist, as shown in Figure 10.1.

  • Anonymous/process allocations. Anonymous memory, the most common form of allocation from the freelist, is used for most of a process's memory, including heap and stack. Anonymous memory also backs shared memory mappings. A small amount of anonymous memory is used in the kernel as well, for items such as thread stacks. Anonymous memory is pageable and is returned to the freelist when it is unmapped or when it is stolen by the page scanner daemon.

  • File system "page cache." The page cache is used for caching of file data for file systems other than the ZFS file system. The file system page cache grows on demand to consume available physical memory as a file cache and caches file data in page-size chunks. Pages are consumed from the freelist as files are read into memory. The pages then reside in one of three places: the segmap cache, a process's address space to which they are mapped, or on the cachelist.

    The cachelist is the heart of the page cache: all unmapped file pages reside on the cachelist. Mapped files and the segmap cache work in conjunction with the cachelist.

    Think of the segmap cache as the fast, first-level file system read/write cache. segmap holds file data read and written through the read and write system calls. Memory is allocated from the freelist to satisfy a read of a new file page, which then resides in the segmap cache. File pages are eventually moved from the segmap cache to the cachelist to make room for more pages in the segmap cache.

    The segmap cache is typically sized at 12% of physical memory on SPARC systems and works in conjunction with the cachelist to cache file data. When files are accessed through the read and write system calls, up to 12% of physical memory holds file data in the segmap cache, and the remainder resides on the cachelist.

    Memory-mapped files also allocate memory from the freelist and remain allocated in memory for the duration of the mapping, unless a global memory shortage occurs. When a file is unmapped (explicitly or with madvise), its pages are returned to the cachelist.

    The cachelist operates as part of the freelist. When the freelist is depleted, allocations are made from the oldest pages in the cachelist. This allows the file system page cache to grow to consume all available memory and to dynamically shrink as memory is required for other purposes.

  • Kernel allocations. The kernel uses memory to manage information about internal system state; for example, memory used to hold the list of processes in the system. The kernel allocates memory from the freelist for these purposes with its own allocators: vmem and slab. However, unlike process and file allocations, the kernel seldom returns memory to the freelist; memory is allocated and freed between kernel subsystems and the kernel allocators. Memory is consumed from the freelist only when the total kernel allocation grows.

    Memory allocated to the kernel is mostly nonpageable and so cannot be managed by the system page scanner daemon. Memory is returned to the system freelist proactively by the kernel's allocators when a global memory shortage occurs.
