HP-UX Virtual Partitions

By Marty Poniatowski

Chapter 4.  Building an HP-UX Kernel

The following is a description of kernel parameters in HP-UX 11i at the time of this writing. The first section is a list and description of each kernel parameter; the second section is an overview of kernel parameters. The full and most recent descriptions can be found at the following URL:

http://docs.hp.com/hpux/onlinedocs/os/KCparams.OverviewAll.html

I encourage you to use the online help of SAM and the URL above to access the most recent and complete information on kernel parameters. Because of limits on the amount of material I can include in the book, I eliminated much of the background information related to kernel parameters in order to save space. At the time of this writing, that information is available in the SAM online help and at the URL above.

I performed minimal formatting on this material. It is close to the form in which I received it from my HP associates in the lab. I think it is most effective in this somewhat "raw" form because, after all, we're dealing with kernel-related information.

Kernel Parameters

aio_listio_max specifies the maximum number of POSIX asynchronous I/O operations that can be specified in a single ''lio_listio()'' call.

Acceptable Values:

Minimum 2

Maximum 0x10000

Default 256

This parameter places a limit on the system resources that can be consumed if a large number of POSIX asynchronous I/O operations are requested in a single ''lio_listio()'' call. The value should be set large enough to meet system programming needs while protecting the system against excessive asynchronous I/O operations initiated by a malfunctioning process.

The value specified must not exceed the value of aio_max_ops.

aio_max_ops specifies the system-wide maximum number of POSIX asynchronous I/O operations that can be queued at any given time.

Acceptable Values:

Minimum 1

Maximum 0x100000

Default 2048

Specify integer value.

Description

This parameter places a limit on the system resources that can be consumed if a large number of POSIX asynchronous I/O operations are queued on the system at the same time. This parameter limits the ability of competing processes to overwhelm the system with large numbers of asynchronous I/O operations and the memory they require.

Each enqueued asynchronous operation requires allocation of system memory for its internal control structure, thus making this limit necessary. In addition to the system-wide limit, there is a per-process limit that is controlled using the argument ''RLIMIT_AIO_OPS'' to ''getrlimit()'' and ''setrlimit()'' calls.
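The per-process limit mentioned above uses the standard resource-limit interface. ''RLIMIT_AIO_OPS'' is HP-UX-specific, so this sketch falls back to the portable ''RLIMIT_NOFILE'' on other platforms purely to show the same ''getrlimit()''/''setrlimit()'' call shape; the fallback is an assumption for illustration, not part of the HP-UX behavior.

```python
import resource

# RLIMIT_AIO_OPS is HP-UX-specific; Python's resource module exposes it
# only where the platform defines it, so fall back to RLIMIT_NOFILE
# elsewhere just to demonstrate the identical call pattern.
limit = getattr(resource, "RLIMIT_AIO_OPS", resource.RLIMIT_NOFILE)

soft, hard = resource.getrlimit(limit)   # (soft limit, hard limit)

# An application would pass a lower soft value here to cap its own
# asynchronous-I/O queue depth; re-applying the current pair is a no-op.
resource.setrlimit(limit, (soft, hard))

print("soft:", soft, "hard:", hard)
```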

aio_physmem_pct specifies the maximum percentage of the total physical memory in the system that can be locked for use in POSIX asynchronous I/O operations.

Acceptable Values:

Minimum 5

Maximum 50

Default 10

Specify integer value.

Description

This parameter places a limit on how much system memory can be locked by the combined total number of POSIX asynchronous I/O operations that are in progress at any given time. It is also important to be aware that an operation remains on the active queue, and its memory is not released, even after the operation completes, until it is properly terminated by an ''aio_return()'' call for that operation.

Asynchronous I/O operations that use a request-and-callback mechanism for I/O must be able to lock the memory they are using. The request-and-callback mechanism is used only if the device drivers involved support it. Memory is locked only while the I/O transfer is in progress. On a large server it is better to increase ''aio_physmem_pct'' to higher values (up to 50).

''aio_physmem_pct'' imposes a system-wide limit on lockable physical memory. A per-process lockable-memory limit can also be self-imposed by using the ''setrlimit()'' system call within the application program.

Remember too that the total amount of memory that can be locked at any given time for any reason, not just for asynchronous I/O, is controlled by the system-wide limit ''lockable_mem''. Other system activity, including explicit memory locking with plock() and/or mlock() interfaces can also affect the amount of lockable memory at any given time.

There is no kernel parameter named ''lockable_mem'', but there is a parameter named ''unlockable_mem'' which affects it. The value of ''lockable_mem'' is determined by subtracting the value of ''unlockable_mem'' from the amount of system memory available after system startup. During startup, the system displays on the system console the amount of its lockable memory (along with available memory and physical memory). These values can be retrieved while the system is running by using the ''/sbin/dmesg'' command.

aio_prio_delta_max specifies the maximum slow-down factor (priority offset) for POSIX asynchronous I/O operations. This is the maximum priority-offset value allowed in the ''aio_reqprio'' field in the asynchronous I/O control block (''aiocb'' structure).

Acceptable values:

Minimum 0

Maximum 20

Default 20

Specify integer value.

Description

This parameter places a limit on how much the priority of a POSIX asynchronous I/O operation can be reduced in order to slow it down. This limits the value allowed for ''int aio_reqprio'' in the asynchronous-I/O control block structure ''aiocb''.
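The bounds check described above can be sketched as follows. The function name is hypothetical, and how the kernel actually reports an out-of-range ''aio_reqprio'' is not stated in the text.

```python
# The tunable's default and maximum are both 20.
AIO_PRIO_DELTA_MAX = 20

def reqprio_valid(aio_reqprio):
    """Mimic the bounds check on aiocb.aio_reqprio: the priority offset
    must lie between 0 and aio_prio_delta_max inclusive."""
    return 0 <= aio_reqprio <= AIO_PRIO_DELTA_MAX

print(reqprio_valid(0), reqprio_valid(20), reqprio_valid(21))
```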

acctresume

Resume accounting when sufficient free file-system space becomes available.

Acceptable Values:

Minimum -100

Maximum 101

Default 4

Specify integer value.

Description

This parameter is of interest only if process accounting is being used on the system.

''acctresume'' specifies the minimum amount of free space that must be available in the file system before the system can resume process accounting if it is suspended due to insufficient free space. The threshold at which accounting is suspended is defined by ''acctsuspend''.

Related Parameters

''acctsuspend'' and ''acctresume'' are interrelated. To prevent suspend-resume conflicts, the signed, integer value of ''acctresume'' must always be greater than the signed, integer value of ''acctsuspend''.
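The two thresholds form a hysteresis band, which is why ''acctresume'' must exceed ''acctsuspend''. A minimal sketch of that decision logic, assuming free space is expressed in the same units as both tunables (the function name is illustrative, not kernel source):

```python
ACCTSUSPEND = 2   # default suspend threshold
ACCTRESUME = 4    # default resume threshold; must exceed ACCTSUSPEND

def accounting_state(free_space, currently_on):
    """Suspend accounting when free space drops below acctsuspend,
    resume only once it rises above acctresume; in between, keep the
    current state (the hysteresis prevents suspend-resume flapping)."""
    if free_space < ACCTSUSPEND:
        return False          # suspend accounting
    if free_space > ACCTRESUME:
        return True           # (re)enable accounting
    return currently_on       # within the band: no change

print(accounting_state(1, True), accounting_state(3, False), accounting_state(5, False))
```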

acctsuspend

Suspend accounting when available free file-system space drops below specified amount.

Acceptable Values:

Minimum -100

Maximum 100

Default 2

Specify integer value.

Description

This parameter is of interest only if process accounting is being used on the system.

''acctsuspend'' prevents accounting files from invading file system free space by specifying the minimum amount of available file system space that must be kept available for other uses while process accounting is running. If the available space drops below that value, accounting is suspended until sufficient file space becomes available again so accounting can resume.

Selecting a Value for ''acctsuspend''

Related Parameters

''acctsuspend'' and ''acctresume'' are interrelated. To prevent suspend-resume conflicts, the signed, integer value of ''acctsuspend'' must always be less than the signed, integer value of ''acctresume''.

allocate_fs_swapmap

Preallocate sufficient kernel data structures for file-system swap use.

Acceptable Values:

Minimum 0 (allocate swap data structures as needed)

Maximum 1 (preallocate necessary kernel data structures)

Default 0

Specify integer value of ''0'' or ''1''.

Description

''allocate_fs_swapmap'' determines whether kernel data structures for file-system swap are allocated only when needed or reserved in advance (does not apply to device swap). Only two values are recognized as valid:

''allocate_fs_swapmap'' = 0

(default value) System allocates data structures as they are needed in order to conserve system memory. Under certain conditions, the system could deny swap requests because it lacks available data structures, even though the file system has space available for swapping.

''allocate_fs_swapmap'' = 1

System allocates sufficient data structures to accommodate the maximum file system swap limit specified by the ''swapon()'' system call or ''swapon'' command. This ensures that space in memory will support swap requests as long as the file systems have swap space available. This mode is most commonly used on high-availability systems, where preventing process failures due to unavailable resources is more important than the reduced system efficiency caused by reserving resources before they are needed.

alwaysdump

''alwaysdump'' is a bit-map value that defines which classes of kernel memory pages are to be dumped if a kernel panic occurs.

Acceptable Values:

Minimum 0

Maximum none

Default 0

Specify integer value.

Description

On large systems, the time required to dump system memory when a kernel panic occurs can be excessive or even prohibitive, depending on how much physical memory is installed in the system. Fast-dump capabilities controlled by the ''dontdump'' and ''alwaysdump'' parameters provide a means for restricting kernel dumps to specific types of information:

  • Unused Physical Memory

  • Kernel Static Data

  • Kernel Dynamic Data

  • File-System Metadata

  • Kernel Code

  • Buffer Cache

  • Process Stack

  • User Process

The bit-map value stored in ''alwaysdump'' specifies which of these memory classes are to be included in the memory dumps associated with a kernel panic.

Related Parameters

''alwaysdump'' and ''dontdump'' have opposite effects. If the bit corresponding to a particular memory-page class is set in one parameter, it should not be set in the other parameter; otherwise a conflict occurs and the actual kernel behavior based on parameter values is undefined. These conflicts do not occur when SAM is used to set the values ([[Modify Page-Class Configuration]] in the [[Actions]] menu, SAM "Dump Devices" subarea in the kernel-configuration area).
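The conflict rule amounts to a bitwise AND of the two maps. The bit positions below are illustrative assumptions only; the kernel defines the real bit assignments for each page class.

```python
# Illustrative bit positions only -- NOT the kernel's actual assignments.
UNUSED_PHYS_MEM = 1 << 0
KERNEL_STATIC   = 1 << 1
KERNEL_DYNAMIC  = 1 << 2
FS_METADATA     = 1 << 3
KERNEL_CODE     = 1 << 4
BUFFER_CACHE    = 1 << 5
PROCESS_STACK   = 1 << 6
USER_PROCESS    = 1 << 7

def dump_maps_conflict(alwaysdump, dontdump):
    """Any page class whose bit is set in both parameters is a conflict,
    leaving the kernel's dump behavior undefined."""
    return (alwaysdump & dontdump) != 0

# Disjoint maps: no conflict.
print(dump_maps_conflict(KERNEL_STATIC | KERNEL_DYNAMIC, BUFFER_CACHE))
# KERNEL_STATIC set in both maps: conflict.
print(dump_maps_conflict(KERNEL_STATIC, KERNEL_STATIC | USER_PROCESS))
```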

bufpages

Define number of 4096-byte memory pages in the file system buffer cache.

Acceptable Values:

Minimum: ''0 or 6'' (''Nbuf*2'' or 64 pages)

Maximum: Memory limited

Default: ''0''

Specify integer value or use integer formula expression. Use non-zero value !!only!! if dynamic buffer cache is !!not!! being used.

Description

''bufpages'' specifies how many 4096-byte memory pages are allocated for the file system buffer cache. These buffers are used for all file system I/O operations, as well as all other block I/O operations in the system (''exec'', ''mount'', inode reading, and some device drivers.)

Specifying a Value for ''bufpages''

To enable dynamic buffer cache allocation, set ''bufpages'' to zero. Otherwise, set ''bufpages'' to the desired number of 4-Kbyte pages to be allocated for buffer cache. If the value specified for ''bufpages'' is non-zero but less than 64, the number is increased at boot time and a message is printed, announcing the change. If ''bufpages'' is larger than the maximum supported by the system, the number is decreased at boot time and a message is printed.

Related Parameters and System Values

''bufpages'' controls how much actual memory is allocated to the buffer pool.

If ''bufpages'' is zero at system boot time, the system allocates two pages for every buffer header defined by ''nbuf''. If ''bufpages'' and ''nbuf'' are both zero, the system enables dynamic buffer cache allocation and allocates a percentage of available memory.

The maximum amount of memory that can be allocated to the buffer pool is also affected by the amount of memory allocated to the system for other purposes. Thus, modifying parameters that affect system memory may also affect the maximum amount of memory that can be made available to the buffer pool.
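The boot-time sizing rules above can be summarized in a short sketch. This paraphrases the text, not kernel source, and the function name is hypothetical.

```python
def buffer_cache_pages(bufpages, nbuf):
    """Boot-time buffer cache sizing per the rules above.  Returns the
    number of 4096-byte pages allocated, or "dynamic" when both tunables
    are 0 and dynamic buffer cache allocation takes over."""
    if bufpages == 0 and nbuf == 0:
        return "dynamic"
    if bufpages == 0:
        return nbuf * 2            # two pages per configured buffer header
    return max(bufpages, 64)       # non-zero values below 64 are raised at boot

print(buffer_cache_pages(0, 0))      # dynamic buffer cache
print(buffer_cache_pages(0, 500))    # 1000 pages
print(buffer_cache_pages(32, 0))     # raised to 64 at boot
```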

clicreservedmem

''clicreservedmem'' specifies how many bytes of system memory are to be reserved for I/O-mapping use by user processes in high-speed, distributed-server environments such as those used for running large database-processing programs.

Acceptable Values:

Minimum: ''0''

Maximum: none

Default: ''0''

Specify integer value.

Description

Normal HP-UX systems reserve a relatively small amount of system memory for I/O mapping. However, some specialized applications (such as large database-processing programs) often run on clusters of high-speed servers that are interconnected by specialized high-speed, wideband communication networks. Because of the intense demands on system resources, these applications often communicate by means of memory-mapped I/O where large blocks of memory are shared by the application and the corresponding I/O or networking software.

The configurable parameter ''clicreservedmem'' provides a means for setting aside as much as 15/16 (approximately 93%) of total system memory for use by applications that perform large-volume, memory-mapped, network I/O. While this value could be as much as 512 Gbytes or even 1 Tbyte, it more commonly ranges from about 1 Gbyte to perhaps 64 Gbytes. Regardless of the value chosen within SAM, the actual memory reserved by the system cannot exceed 15/16 of total system memory.

create_fastlinks

Use fast symbolic links.

Acceptable Values:

Minimum: ''0''

Maximum: ''1''

Default: ''0''

Specify integer value.

Description

When ''create_fastlinks'' is non-zero, it causes the system to create HFS symbolic links in a manner that reduces the number of disk-block accesses by one for each symbolic link in a pathname lookup. This involves a slight change in the HFS disk format, which makes any disk formatted for fast symbolic links unusable on Series 700 systems prior to HP-UX Release 9.0 and Series 800 systems prior to HP-UX Release 10.0 (this configurable parameter was present on Series 700 Release 9.0 systems, but not on Series 800 HP-UX 9.0 systems).

To provide backward compatibility, the default setting for ''create_fastlinks'' is zero, which does not create the newer, faster format. However, all HP-UX 10.0 kernels (and all Series 700 HP-UX 9.0 kernels) understand both disk formats, whether "create_fastlinks" is set to zero or non-zero.

dbc_max_pct

Define maximum percentage of memory to be used by dynamic buffer cache.

Acceptable Values:

Minimum: '' 2''

Maximum: ''90''

Default: ''50''

Specify integer value.

Description

When the parameters ''bufpages'' and ''nbuf'' are both set to their default value of 0, the size of the buffer cache grows or shrinks dynamically, depending on competing requests for system memory.

The value of ''dbc_max_pct'' sets the maximum percentage of physical memory that can be allocated to the dynamic buffer cache.

dbc_min_pct

Define minimum percentage of memory to be used by dynamic buffer cache.

Acceptable Values:

Minimum: '' 2''

Maximum: ''90''

Default: '' 5''

Specify integer value.

Description

During file-system I/O operations, data is stored in a buffer cache, the size of which can be fixed or dynamically allocated. When the parameters ''bufpages'' and ''nbuf'' are both set to their default value of 0, the size of the buffer cache grows or shrinks dynamically, depending on competing requests for system memory.

The value of ''dbc_min_pct'' specifies the minimum percentage of physical memory that is reserved for use by the dynamic buffer cache.

Selecting an Appropriate Value

If ''dbc_min_pct'' is set to too low a value, very high demand on the buffer cache can effectively hang the system. This is also true when using a fixed buffer cache. To determine a reasonable (and conservative) value for the minimum cache size in Mbytes, use the following formula:

(number of system processes) × (largest file-system block size) / 1024

To determine the value for ''dbc_min_pct'', divide the result by the number of Mbytes of physical memory installed in the computer and multiply that value by 100 to obtain the correct value in percent.

Only those processes that actively use disk I/O should be included in the calculation. All others can be excluded. Here are some examples of what processes should be included in or excluded from the calculation:

Include:

NFS daemons, text formatters such as ''nroff'', database management applications, text editors, compilers, etc. that access or use source and/or output files stored in one or more file systems mounted on the system.

Exclude:

X-display applications, ''hpterm'', ''rlogin'', login shells, system daemons, ''telnet'' or ''uucp'' connections, etc. These processes use very little, if any, disk I/O.
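The two-step calculation above can be worked through numerically. The interpretation of units here is an assumption (block size converted to Kbytes, so the formula yields Mbytes), and the figures are made-up examples.

```python
def dbc_min_pct(io_active_procs, fs_block_bytes, phys_mem_mb):
    """Apply the formula above: minimum cache in Mbytes is
    processes * (largest FS block size in KB) / 1024, then express that
    as a percentage of installed physical memory."""
    min_cache_mb = io_active_procs * (fs_block_bytes / 1024) / 1024
    return min_cache_mb / phys_mem_mb * 100

# e.g. 500 I/O-active processes, an 8-Kbyte largest block size, 4 GB RAM:
pct = dbc_min_pct(500, 8192, 4096)
print(pct)
```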

dst

Enable or disable daylight-savings-time conversion and specify conversion schedule.

Acceptable Values:

Specify one of the following integer values:

''0''

Disable daylight-saving time

''1''

Set daylight-saving time to USA style (this is the default)

''2''

Set daylight-saving time to Australian style

''3''

Set daylight-saving time to Western-Europe style

''4''

Set daylight-saving time to Middle-Europe style

''5''

Set daylight-saving time to Eastern-Europe style

Description

''dst'' specifies whether to convert to daylight savings time, and which schedule to use when converting between daylight-savings and standard time.

A zero value disables conversion to daylight-savings time. Non-zero values enable conversion and select a conversion schedule according to the following definitions in the file ''/usr/include/sys/time.h'':

#define DST_NONE 0 /* not on dst */
#define DST_USA  1 /* USA style dst */
#define DST_AUST 2 /* Australian style dst */
#define DST_WET  3 /* Western European dst */
#define DST_MET  4 /* Middle European dst */
#define DST_EET  5 /* Eastern European dst */

default_disk_ir

Enable Immediate Reporting on disk I/O.

Acceptable Values:

Minimum: ''0'' (off)

Maximum: ''1'' (on)

Default: ''0'' (off)

Set to ''0'' (disable immediate reporting) or ''1'' (enable immediate reporting).

Description

''default_disk_ir'' enables or disables immediate reporting.

With Immediate Reporting ON, disk drives that have data caches return from a ''write()'' system call when the data is cached, rather than returning after the data is written on the media. This sometimes enhances write performance, especially for sequential transfers, but cached data can be lost if a device power failure or reset occurs before the device writes the cached data to media. The recommended value for this parameter on Series 800 systems is zero (OFF).

Although not an option to the ''mount'' command, the configurable parameter ''default_disk_ir'' has a profound effect on filesystem (and raw) disk performance and, conversely, on data integrity through resets. It can be turned either ON (set to 1) or OFF (set to 0).

If this configurable parameter is omitted from the kernel configuration file used to create the kernel (''/stand/system''), it is assumed to be OFF (0). Thus, the default behavior for Immediate Reporting (also known as Write Cache Enable, WCE) is OFF (disabled).

''default_disk_ir'' also affects delayed-write versus write-through-filesystem behavior.

dontdump

''dontdump'' is a bit-map value that defines which classes of kernel memory pages are to be dumped if a kernel panic occurs.

Acceptable Values:

Minimum: ''0''

Maximum: none

Default: ''0''

Specify integer value.

Description

On large systems, the time required to dump system memory when a kernel panic occurs can be excessive or even prohibitive, depending on how much physical memory is installed in the system. Fast-dump capabilities controlled by the ''dontdump'' and ''alwaysdump'' parameters provide a means for restricting kernel dumps to specific types of information:

  • Unused Physical Memory

  • Kernel Static Data

  • Kernel Dynamic Data

  • File-System Metadata

  • Kernel Code

  • Buffer Cache

  • Process Stack

  • User Process

The bit-map value stored in ''dontdump'' specifies which of these memory classes are to be excluded from the memory dumps associated with a kernel panic.

Related Parameters

''alwaysdump'' and ''dontdump'' have opposite effects. If the bit corresponding to a particular memory-page class is set in one parameter, it should not be set in the other parameter; otherwise a conflict occurs and the actual kernel behavior based on parameter values is undefined. These conflicts do not occur when SAM is used to set the values ([[Modify Page-Class Configuration]] in the [[Actions]] menu, SAM "Dump Devices" subarea in the kernel-configuration area).

enable_idds

enable_idds is reserved for future use.

Acceptable Values:

Default: ''0 (off)''

Specify boolean value.

Description

enable_idds is reserved for future use in an optional product.

Customers should not attempt to change this parameter. Enabling this parameter without the optional product provides no benefit and will lower system performance by a few percentage points.

IDDS = Intrusion Detection Data Source

eqmemsize

Specify size, in pages, of the equivalently mapped memory reserve pool.

Acceptable Values:

Minimum: '' 0''

Maximum: Memory limited

Default: ''15'' pages

Specify integer value or use integer formula expression.

Description

''eqmemsize'' specifies the minimum amount of memory space, in pages, that is to be reserved as a pool of space for use by drivers and subsystems that require allocated memory where addressing is the same in real and virtual mode. At boot time, the system may increase the actual space reserved, based upon how much physical memory is installed.

Drivers use these pages to transfer information between hardware interfaces. The driver places data on the page, using virtual mode. The hardware then transfers the data, using DMA (Direct Memory Access) in real mode, bypassing the addressing translation. Since the virtual and real addresses are the same, no special address processing is needed. The space is also used to support address aliasing requests issued on behalf of ''EXEC_MAGIC'' processes.

Normally, the system handles requests for equivalently mapped memory dynamically by trying to obtain a free page with a matching virtual address from the system-wide memory pool. If the system cannot dynamically obtain an equivalently mapped page, it resorts to its reserve pool. Normally this reserve pool should never be exhausted because the system can usually dynamically allocate an equivalently mapped page. However, systems with a relatively high load and/or a physical memory configuration that exceeds 1 Gbyte could potentially deplete this reserve pool.

Depending on the exact nature of applications running on the system, system load, memory and I/O configurations, and other factors, the reserve pool could still become exhausted. If this happens, the system prints a message to the console indicating that the reserve pool has been exhausted and that ''eqmemsize'' should be increased.

executable_stack

Allows or denies program execution on the stack (security feature).

Acceptable Values:

Minimum: '' 0''

Maximum: '' 2''

Default: '' 1''

Specify integer value. ''0'' is disable, ''1'' is enable, ''2'' is enable with warning.

Description

executable_stack provides protection against commonly attempted security breaches. It sets the system-wide default for whether to use system memory-mapping hardware to help protect against one of the most common classes of security breaches, commonly known as 'stack buffer overflow attacks.'

Unless you have a proven need to do otherwise, HP strongly recommends that you set this parameter to a value of '0' (zero). This is the most secure of the settings, incurs no performance penalty, and will very rarely interfere with legitimate applications.

Note that, for compatibility reasons, the default setting of this parameter in this release is '1' (one), which is the most compatible but least secure setting. It is equivalent to system behavior on HP-UX 11.00 and earlier, and does not provide protection against this type of attack.

Refer to the description of the '+es' option in the manual page for chatr(1) for a detailed description of the effects of this parameter, the meanings of the possible settings, how to recognize if a different setting may be needed on your system, and how to combine system-wide and per-application settings for the best tradeoffs between security and compatibility for your system.

fs_async

Select synchronous or asynchronous writes of file-system data structures to disk.

Acceptable Values:

Minimum: ''0'' (Use synchronous disk writes only)

Maximum: ''1'' (Allow asynchronous disk writes)

Default: ''0''

Specify integer value of ''0'' or ''1''.

Description

''fs_async'' specifies whether or not asynchronous writing of file-system data structures to disk is allowed. If no value for ''fs_async'' is specified, synchronous writes are used.

Synchronous writes to disk make it easier to restore file system integrity if a system crash occurs while file system data structures are being updated on the file system.

If asynchronous writes are selected, HP-UX file system semantics for NFS cluster environments are preserved. In addition, files opened using ''open()'' with the ''O_SYNC'' flag (synchronous writing) will continue to be written synchronously when the asynchronous-writes feature has been configured into the kernel.

Asynchronous writes to disk can improve file system performance significantly. However, asynchronous writes can leave file system data structures in an inconsistent state in the event of a system crash. For more information about when to select synchronous or asynchronous writing, refer to the explanatory text later in this help page.
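The per-file escape hatch mentioned above, the ''O_SYNC'' open flag, can be demonstrated from any POSIX system; the same flag is available through Python's ''os.open()''. The scratch file below is illustrative only.

```python
import os
import tempfile

# A file opened with O_SYNC is written synchronously regardless of the
# fs_async setting; write() returns only after the data reaches the device.
fd, path = tempfile.mkstemp()
os.close(fd)

fd = os.open(path, os.O_WRONLY | os.O_SYNC)
try:
    written = os.write(fd, b"critical record\n")
finally:
    os.close(fd)
    os.unlink(path)

print(written)   # number of bytes written
```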

What are Synchronous and Asynchronous Writes?

If a file is open for writing and data is being written to a file, the data is accumulated in buffers and periodically written to disk. When an end-of-file condition occurs and the file is to be closed, any remaining buffer contents are written to the disk, the inode is updated with file size and block pointer information, and the file system's list of free disk blocks is updated. To ensure maximum protection of file system integrity, these operations are handled in a specific sequence that minimizes the risk of file system corruption on the disk if a system crash or power failure occurs while writing to the disk. This sequential update process is called !!synchronous writing!!.

HP-UX file systems store free space lists, blocks, inodes, and other file components at random and widely separate locations on disk devices. This means that writing file information blocks in a particular sequence requires additional time to move to the desired location on the disk before performing the write operation. If a power failure or system crash occurs during this sequence, one or more blocks may not be properly updated, leaving a potentially inconsistent file system. The ''fsck'' command is used to repair such inconsistencies.

Asynchronous writing as it relates to the ''fs_async'' kernel parameter allows the system to update file system information on the disk in a more convenient (hence faster) sequence rather than in a more secure (safer but slower) sequence, thus reducing search and move delays between writes. However, if a system crash occurs while these operations are being performed, the risk of an inconsistent file system that cannot be automatically repaired by fsck is significantly greater than with synchronous writes.

Consequences of a Crash

If only synchronous writing is used, all updates to directories, file inodes, free space lists, etc. are handled in a sequence that is known to ''fsck''. If a crash occurs while updating any disk block in the sequence, ''fsck'' can readily determine where the crash occurred and repair the missing update information, probably without assistance from the system administrator.

If ''fs_async'' is set to allow asynchronous writes and a crash occurs, ''fsck'' does not know what sequence was used, and thus will probably require interactive assistance from the administrator while fixing inconsistent file system information, repairing directory and inode entries, etc.

Why Allow Asynchronous Writes?

Waiting for synchronous writing and updating of disk blocks when closing files after writing to them degrades the performance of programs and applications that require frequent file and directory write and close operations. Allowing asynchronous writing significantly reduces those delays, producing a corresponding improvement in performance. However, when applications are CPU intensive with relatively little disk I/O, performance improvements are much lower.

When Should I Use Asynchronous Writes?

Asynchronous writing is advisable for improving system performance if:

  • * Risk of power failure is low (very dependable power source and/or uninterruptible power sources).

  • * Precautions have been taken to enhance data security (sophisticated file system backup or redundancy strategies), or potential loss of data due to a system crash is less important than system performance.

  • * User applications require frequent opening, writing, and closing of disk files and directories.

  • * Elimination of synchronous writing would improve system performance sufficiently to offset any associated risks.

To enable asynchronous writing, set the ''fs_async'' kernel parameter to ''1'' instead of the default value of ''0''.

hfs_max_ra_blocks

Set the maximum number of read-ahead blocks that the kernel may have outstanding for a single HFS filesystem.

Acceptable Values:

Minimum: '' 0''

Maximum: ''128''

Default: '' 8''

Specify integer value or use integer formula expression.

Description

When data is read from a disk drive, the system may read additional data beyond that requested by the operation. This "read-ahead" speeds up sequential disk accesses, by anticipating that additional data will be read, and having it available in system buffers before it is requested. This parameter limits the number of read-ahead blocks that the kernel is allowed to have outstanding for any given HFS filesystem. The limit applies to each individual HFS filesystem, !!not!! to the system-wide total.

''hfs_max_ra_blocks'' and ''hfs_ra_per_disk'' should be adjusted according to the characteristics of the workload on the system.

Note

To determine the block size of the filesystem containing the current directory, use the command:

df -g

EXAMPLE ONE

A software development environment typically consists of small or medium sized I/Os with a fair number of disk seeks. Therefore, ''hfs_max_ra_blocks'' should be set to 8-to-16 blocks and ''hfs_ra_per_disk'' should be set to 32-to-64 kilobytes.

EXAMPLE TWO

An out-of-core solver for an MCAE application has a significant sequential I/O component, so ''hfs_max_ra_blocks'' should be set to 64-to-128 blocks and ''hfs_ra_per_disk'' to 128-to-256 kilobytes.

hfs_ra_per_disk

Set the amount of HFS filesystem read-ahead per disk drive, in Kbytes.

Acceptable Values:

Minimum: '' 0''

Maximum: ''8192''

Default: '' 64''

Specify an integer value or use an integer formula expression.

Description

When data is read from a disk drive, the system may read additional data beyond that requested by the operation. This "read-ahead" speeds up sequential disk accesses, by anticipating that additional data will be read, and having it available in system buffers before it is requested. This parameter specifies the amount of read-ahead permitted per disk drive.

The total amount of read-ahead is determined by multiplying ''hfs_ra_per_disk'' by the number of drives in the logical volume. If the filesystem does not reside in a logical volume, then the number of drives is effectively one.
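The multiplication described above is simple enough to state directly (the function name is illustrative):

```python
def total_readahead_kb(hfs_ra_per_disk_kb, drives_in_lv):
    """Total HFS read-ahead: the per-disk amount times the number of
    drives in the logical volume; a filesystem that does not reside in
    a logical volume effectively counts as one drive."""
    return hfs_ra_per_disk_kb * max(drives_in_lv, 1)

print(total_readahead_kb(64, 4))   # 4-disk logical volume -> 256 KB
print(total_readahead_kb(64, 1))   # plain disk partition  -> 64 KB
```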

hfs_revra_blocks

This parameter sets the maximum blocks read with each HFS reverse read-ahead operation.

Acceptable Values:

Minimum: ''0''

Maximum: ''128''

Default: ''8''

Specify integer value.

Description

This tunable defines the maximum number of Kbytes to be read in a read-ahead operation when sequentially reading backwards.

Only HP Field Engineers should modify this kernel parameter. Customers should not change it from its default value.

Purpose

This value should be raised when the workload is known to include frequent reverse-order sequential reading of files. It should be lowered back to its default if raising it does not provide a noteworthy performance improvement. Increasing this value can cause additional disk contention and a performance penalty due to excess read-ahead.

Interactions

The following additional tunable parameters may also need to be modified when changing the value of ''hfs_max_revra_blocks'':

hfs_revra_per_disk; hfs_ra_per_disk; hfs_max_ra_blocks

hfs_revra_per_disk

This parameter sets the amount of HFS file system reverse read-ahead per disk drive, in Kbytes.

Acceptable Values:

Minimum: ''0''

Maximum: ''8192''

Default: ''64''

Specify integer value.

Description

This tunable defines the maximum number of Kbytes of reverse read-ahead per disk drive when sequentially reading backwards.

Only HP Field Engineers should modify the hfs_revra_per_disk kernel parameter. Customers should not change this parameter from its default value.

Purpose

An increase in the value of this parameter is indicated when there are a large number of reverse sequential file I/Os on file systems with small file system block sizes. Raising the value of this parameter will mean that more memory is used in the buffer cache.

A decrease in the value of this parameter is indicated when there are a small number of reverse sequential file I/Os on files systems with large file system block sizes. Decreasing the value of this parameter can cause a decreased file throughput rate.

Interactions

The following additional tunable parameters may also need to be modified when changing the value of ''hfs_revra_per_disk'':

hfs_max_revra_blocks; hfs_ra_per_disk; hfs_max_ra_blocks

initmodmax

''initmodmax'' specifies the maximum number of kernel modules that the ''savecrash'' command will handle when a kernel panic causes a system-memory dump.

Acceptable Values:

Minimum: ''0''

Maximum: none

Default: ''50''

Specify integer value.

Description

When a kernel panic (system crash) occurs, specified areas of system memory are copied to the dump devices before the system shuts down. The ''savecrash'' command can then be used to copy the dump area into a directory in a file system (requires !!lots!! of space, depending on system size and what memory classes were dumped).

When the kernel includes dynamically loadable kernel modules (drivers that are not statically installed in the kernel), ''savecrash'' must allocate space in one of its structures to hold information about those modules. Since there is no means to predict how many modules might be loaded at any given time, this parameter provides an upper limit that ''savecrash'' is prepared to deal with.

If ''initmodmax'' is set to less than the number of loaded kernel modules, only the first modules encountered up to the limit are processed by ''savecrash''. It is therefore important that system administrators keep track of how many kernel modules are being loaded during system operation to ensure that ''initmodmax'' has a value sufficient to properly handle them in case of a kernel panic and dump.

Note that this parameter only affects the operation of the ''savecrash'' command. It does not limit the number of modules that can be loaded into the kernel during normal system operation.

ksi_alloc_max

''ksi_alloc_max'' specifies the system-wide maximum number of queued signals that can be allocated.

Acceptable Values:

Minimum: ''32''

Maximum: (memory limited)

Default: ''nproc * 8''

Specify integer value.

Description

The kernel allocates storage space for the data structures required to support queued signals that are sent by processes using the ''sigqueue()'' system call. This parameter is used to determine how much space should be allocated. At any given time during normal system operation, if the combined total number of queued signals sent by existing processes and still pending at receivers are enough to fill the available data-structure space, no new queued signals can be sent.

Note that queued signals are different from traditional HP-UX/UNIX signals. A traditional signal (such as a kill or hangup signal) carries no count: if multiple identical signals are sent to a single process, the process has no way to determine that more than one signal was sent. Queued signals eliminate that ambiguity because a process can handle a queued signal, then examine the queue again to discover another signal on the queue.

''ksi_alloc_max'' specifies the maximum number of queued signals that can be queued at any given time, system wide, by controlling how much data-structure space is allocated in the kernel for handling queued signals. The limit values of ''SIGQUEUE_MAX'' and ''_POSIX_SIGQUEUE_MAX'' defined in ''/usr/include/limits.h'' are affected by the value of this parameter.

The default value of this parameter is set to ''nproc * 8'' which allows a total large enough to accommodate eight signals pending for every process running on the system, assuming that the system is running at full capacity. This should be adequate for nearly all systems unless system software requirements dictate that more are needed.

ksi_send_max

''ksi_send_max'' specifies the maximum number of queued signals that a single process can send and have pending at one or more receivers.

Acceptable Values:

Minimum: ''32''

Maximum: (memory limited)

Default: ''32''

Specify integer value.

Description

The kernel allocates storage space for the data structures required to support queued signals that are sent by processes using the ''sigqueue()'' system call. This parameter is used to determine how much space should be allocated. At any given time during normal system operation, if the combined total number of queued signals sent by existing processes and still pending at receivers are enough to fill the available data-structure space, no new queued signals can be sent.

Note that queued signals are different from traditional HP-UX/UNIX signals. A traditional signal (such as a kill or hangup signal) carries no count: if multiple identical signals are sent to a single process, the process has no way to determine that more than one signal was sent. Queued signals eliminate that ambiguity because a process can handle a queued signal, then examine the queue again to discover another signal on the queue.

''ksi_send_max'' places a limit on the number of queued signals that a single process can send and/or have pending at one or more receivers. It provides a mechanism for preventing a single process from monopolizing the signals data-structure space by issuing too many signals and thereby preventing other processes from being able to send and receive signals due to insufficient kernel resources.

The default value of ''32'' is adequate for most common HP-UX applications. If you have specialized applications that require more than that number (''sigqueue()'' returns ''EAGAIN''), the number should be increased sufficiently to prevent the error unless the ''EAGAIN'' error returned by ''sigqueue()'' is due to a run-away process generating signals when it should not.

maxvgs

Maximum number of volume groups configured by the Logical Volume Manager on the system.

no_lvm_disks

Flag that notifies the kernel that no logical volumes exist on the system. If set, all file systems coincide with physical disks and physical disk boundaries. The only exceptions are disks configured with partitions, and disks on which part of the space is reserved for swap or other non-file-system uses.

max_async_ports

Specify the system-wide maximum number of ports to the asynchronous disk I/O driver that processes can have open at any given time.

Acceptable Values:

Minimum: ''1''

Maximum: '' ''

Default: ''50''

Specify integer value.

Description

''max_async_ports'' limits the total number of open ports to the asynchronous disk-I/O driver that processes on the system can have at any given time (this has nothing to do with any RS-232 asynchronous data-communications interfaces). When a port is opened, the system allocates a port structure that is used for all communication between the process and the asynchronous disk driver. The number of asynchronous ports required by a given application is usually specified in the documentation for that application (such as database application software, video management software, etc.).

To determine a suitable value for ''max_async_ports'':

  • Determine how many ports are required for each application and/or process that uses asynchronous disk I/O.

  • Determine which of these applications will be running simultaneously as separate processes. Also determine whether multiple copies of an application will be running at the same time as separate processes.

  • Based on these numbers, determine the maximum number of open ports to the asynchronous disk driver that will be needed by all processes at any given time to obtain a reasonable total.

  • Set ''max_async_ports'' to a value that is not less than this number.

maxdsiz and maxdsiz_64bit

Specify the maximum data segment size, in bytes, for an executing process.

Acceptable Values:

''maxdsiz'' for 32-bit processes:

Minimum: ''0x400000'' (4 Mbytes)

Maximum: ''0x7B03A000'' (approx 2 Gbytes)

Default: ''0x4000000'' (64 Mbytes)

''maxdsiz_64bit'' for 64-bit processes:

Minimum: ''0x400000'' (4 Mbytes)

Maximum: ''4396972769279''

Default: ''0x4000000'' (64 Mbytes)

Specify integer value.

Description

Enter the value in bytes.

''maxdsiz'' and ''maxdsiz_64bit'' define the maximum size of the data storage segment of an executing process for 32-bit and 64-bit processes, respectively.

The data storage segment contains fixed data storage such as statics and strings, as well as dynamic data space allocated using ''sbrk()'' and ''malloc()''.

Increase the value of ''maxdsiz'' or ''maxdsiz_64bit'' only if you have one or more processes that use large amounts of data storage space.

Whenever the system loads a process, or an executing process attempts to expand its data storage segment, the system checks the size of the process's data storage segment.

If the process' requirements exceed ''maxdsiz'' or ''maxdsiz_64bit'', the system returns an error to the calling process, possibly causing the process to terminate.

max_fcp_reqs

Define the maximum number of concurrent Fibre Channel FCP requests that are to be allowed on any FCP adapter installed in the machine.

Acceptable Values:

Minimum: '' 0''

Maximum: ''1024''

Default: '' 512''

Specify integer value or use integer formula expression.

Description

''max_fcp_reqs'' specifies the maximum number of concurrent FCP requests that are allowed on an FCP adapter. The default value specified when the system is shipped is 512 requests. To raise or lower the limit, specify the desired value for this parameter. The optimal limit on concurrent requests depends on several different factors such as configuration, device characteristics, I/O load, host memory, and other values that FCP software cannot easily determine.

Related Parameters and System Values

The system allocates memory for use by Tachyon FCP adapters based on the combination of values specified for ''num_tachyon_adapters'' and ''max_fcp_reqs''.

maxssiz and maxssiz_64bit

Set the maximum dynamic storage segment (DSS) size in bytes.

Acceptable Values:

''maxssiz'' for 32-bit processes:

Minimum: ''0x4000'' (16 Kbytes)

Maximum: ''0x17F00000'' (approx 200 Mbytes)

Default: ''0x800000'' (8 Mbytes)

''maxssiz_64bit'' for 64-bit processes:

Minimum: ''0x4000'' (16 Kbytes)

Maximum: ''1073741824''

Default: ''0x800000'' (8 Mbytes)

Specify integer value.

Description

Enter the value in bytes.

''maxssiz'' and ''maxssiz_64bit'' define, for 32-bit and 64-bit processes respectively, the maximum size of the dynamic storage segment (DSS), also called the user-stack segment: an executing process's run-time stack. This segment contains stack and register storage space, generally used for local variables.

The default DSS size meets the needs of most processes. Increase the value of ''maxssiz'' or ''maxssiz_64bit'' only if you have one or more processes that need large amounts of dynamic storage.

The stack grows dynamically. As it grows, the system checks the size of the process' stack segment. If the stack size requirement exceeds ''maxssiz'' or ''maxssiz_64bit'', the system terminates the process.

maxswapchunks

Set the maximum amount of swap space configurable on the system.

Acceptable Values:

Minimum: '' 1''

Maximum: ''16384''

Default: '' 256''

Specify integer value.

Description

''maxswapchunks'' specifies the maximum amount of configurable swap space on the system. The maximum swap space limit is calculated as follows:

  • Disk blocks contain ''DEV_BSIZE'' (1024) bytes each. ''DEV_BSIZE'' is the system-wide mass storage block size and is not configurable.

  • Swap space is allocated from device to device in chunks, each chunk containing ''swchunk'' blocks. Selecting an appropriate value for ''swchunk'' requires extensive knowledge of system internals. Without such knowledge, the value of ''swchunk'' should not be changed from the standard default value.

  • The maximum number of chunks of swap space allowed system-wide is ''maxswapchunks'' chunks.

  • The maximum swap space in bytes is:

    ''maxswapchunks'' × ''swchunk'' × ''DEV_BSIZE''

    For example, using default values for ''swchunk'' (2048) and ''maxswapchunks'' (256), and assuming ''DEV_BSIZE'' is 1024 bytes, the total configurable swap space equals 536,870,912 bytes (512 Mbytes).

Selecting Values

The amount of swap space available on system disk devices is determined by the contents of file ''/etc/fstab'', and is not affected by kernel configuration.

On a stand-alone system or on a cluster client with local swap space, ''maxswapchunks'' should be set to support sufficient swap space to accommodate all swap anticipated. Set the parameter large enough to avoid having to reconfigure the kernel.

For a server node, set the parameter to include not only the server's local swap needs, but also sufficient swap for each client node that will use the swap. At a minimum, allot swap space equal to the amount of memory used by each client.

max_thread_proc

Specify the maximum number of threads a single process is allowed to have.

Acceptable Values:

Minimum: ''64''

Maximum: ''30000''

Default: ''64''

Specify integer value.

Description

''max_thread_proc'' limits the number of threads a single process is allowed to create. This protects the system from excessive use of system resources if a run-away process creates more threads than it should in normal operation. The value assigned to this parameter is the limit value assigned to the limit variables ''PTHREAD_THREADS_MAX'' and ''_SC_THREAD_THREADS_MAX'' defined in ''/usr/include/limits.h''.

When a process is broken into multiple threads, certain portions of the process space are replicated for each thread, requiring additional memory and other system resources. If a run-away process creates too many threads, or if a user is attacking the system by intentionally creating a large number of threads, system performance can be seriously degraded or other malfunctions can be introduced.

Select a value for ''max_thread_proc'' by evaluating the most complex threaded applications the system will be running and determining how many threads such applications will require or create under worst-case normal use. The value should be at least that large, but not so much larger that it could compromise other system needs if something goes wrong.

maxtsiz

Set maximum shared-text segment size in bytes.

Acceptable Values:

''maxtsiz'' for 32-bit processes:

Minimum: '' 262144'' (256 Kbytes)

Maximum: ''1073741824'' (1 Gbyte)

Default: '' 0x4000000'' (64 Mbytes)

''maxtsiz_64bit'' for 64-bit processes:

Minimum: '' 262144'' (256 Kbytes)

Maximum: ''4398046507008'' (approx 4 Tbytes)

Default: '' 0x4000000'' (64 Mbytes)

Specify integer value.

Description

''maxtsiz'' and ''maxtsiz_64bit'' define, for 32-bit and 64-bit processes respectively, the maximum size of the shared text segment (program storage space) of an executing process. Program executable object code is stored as read-only, and thus can be shared by multiple processes if two or more processes are executing the same program simultaneously, for example.

The normal default value accommodates the text segments of most processes. Unless you plan to execute a process with a text segment larger than 64 Mbytes, do not modify ''maxtsiz'' or ''maxtsiz_64bit''.

Each time the system loads a process with shared text, the system checks the size of its shared text segment. The system issues an error message and aborts the process if the process' text segment exceeds ''maxtsiz'' or ''maxtsiz_64bit''.

''maxtsiz'' and ''maxtsiz_64bit'' can be set by rebuilding the kernel or be set in the running kernel with ''settune()''. ''SAM'' and ''kmtune'' use ''settune()''. Dynamic changes to ''maxtsiz'' and ''maxtsiz_64bit'' only affect future calls to ''exec()''. Dynamically lowering these parameters will not affect any running processes, until they call ''exec()''.

maxuprc

Set maximum number of simultaneous user processes.

Acceptable Values:

Minimum: '' 3''

Maximum: ''Nproc-5''

Default: ''50''

Specify integer value.

Description

''maxuprc'' establishes the maximum number of simultaneous processes available to each user on the system; the default is usually adequate. A user is identified by the user ID number, not by a login instance. Each user requires at least one process for the login shell, plus additional processes for everything else spawned in that process group.

The super-user is exempt from this limit.

Pipelines need at least one simultaneous process for each side of a ''|''. Some commands, such as cc, fc, and pc, use more than one process per invocation.

If a user attempts to start a new process that would cause the total number of processes for that user to exceed ''maxuprc'', the system issues an error message to the user:

no more processes

If a user process executes a ''fork()'' system call to create a new process, causing the total number of processes for the user to exceed ''maxuprc'', ''fork()'' returns −1 and sets ''errno'' to ''EAGAIN''.

''maxuprc'' can be set by rebuilding the kernel or be set in the running kernel with ''settune()''. ''SAM'' and ''kmtune'' use ''settune()''. Dynamic changes to ''maxuprc'' only affect future calls to ''fork()''. Lowering ''maxuprc'' below a user's current number of processes will not affect any running processes. The user's processes will not be able to ''fork()'' until enough of the current processes exit that the user is below the new limit.

maxusers

Allocate system resources according to the expected number of simultaneous users on the system.

Acceptable Values:

Minimum: ''0''

Maximum: Memory limited

Default: ''32''

Specify integer value.

Description

''maxusers'' limits system resource allocation, not the actual number of users on the system. ''maxusers'' does not itself determine the size of any structures in the system; instead, the default values of other global system parameters depend on the value of ''maxusers''. When other configurable parameter values are defined in terms of ''maxusers'', the kernel is made smaller and more efficient by minimizing wasted space due to improperly balanced resource allocations.

''maxusers'' defines the C-language macro MaxUsers (for example, ''#define MaxUsers 8''). It determines the size of system tables. The actual limit on the number of users depends on the version of the HP-UX license that was purchased. To determine the actual limit, use the ''uname -a'' command.

Rather than varying each configurable parameter individually, it is easier to specify certain parameters using a formula based on the maximum number of expected users (for example, ''nproc = (20 + 8 * MaxUsers)''). Thus, if you increase the maximum number of users on your system, you only need to change the ''maxusers'' parameter.

''maxvgs''

Specify maximum number of volume groups on the system.

Acceptable Values:

Minimum: '' 1''

Maximum: ''256''

Default: '' 10''

Specify integer value.

Description

''maxvgs'' specifies the maximum number of volume groups on the system. A set of data structures is created in the kernel for each logical volume group on the system. Setting this parameter to match the number of volume groups on the system conserves kernel storage space by creating only enough data structures to meet actual system needs.

''maxvgs'' is set to ten volume groups by default. To allow more or fewer, change ''maxvgs'' to reflect a new maximum number.

Related Parameters

None.

maxfiles

Set soft limit for the number of files a process is allowed to have open simultaneously.

Acceptable Values:

Minimum: '' 30''

Maximum: ''60000''

Default: '' 60''

Specify integer value.

Description

''maxfiles'' specifies the system default soft limit for the number of files a process is allowed to have open at any given time. It is possible for a process to increase its soft limit and therefore open more than ''maxfiles'' files.

Non-superuser processes can increase their soft limit until they reach the hard limit, ''maxfiles_lim''.

maxfiles_lim

Set hard limit for number of files a process is allowed to have open simultaneously.

Acceptable Values:

Minimum: '' 30''

Maximum: ''nfile''

Default: ''1024''

Specify integer value.

Description

''maxfiles_lim'' specifies the system default hard limit for the number of open files a process may have. It is possible for a non-superuser process to increase its soft limit up to this hard limit.

''maxfiles_lim'' can be set by rebuilding the kernel or be set in the running kernel with ''settune()''. ''SAM'' and ''kmtune'' use ''settune()''. Dynamic changes affect all existing processes in the system, with two classes of exceptions: processes that are already over the new limit are unaffected, and processes that have specifically set their limits through a call to ''setrlimit()'' (or ''ulimit'') are unaffected.

mesg

Enable or disable System V IPC message support in kernel at system boot time (Series 700 only).

Acceptable Values:

Minimum: ''0'' (Exclude System V IPC message parameters from kernel)

Maximum: ''1'' (Include System V IPC message parameters in kernel)

Default: ''1''

Specify integer value of ''0'' or ''1''.

Description

''mesg'' specifies whether the code for System V IPC message parameters is to be included in the kernel at system boot time (Series 700 systems only).

''mesg'' = 1 Code is included in the kernel (enable IPC messages).

''mesg'' = 0 Code is not included in the kernel (disable IPC messages).

Series 800 systems: IPC messages are always enabled in the kernel.

Series 700 systems: If ''mesg'' is set to zero, all other IPC message parameters are ignored.

modstrmax

''modstrmax'' specifies the maximum size of the ''savecrash'' kernel-module table that contains module names and their location in the file system.

Acceptable Values:

Minimum: ''500''

Maximum: none

Default: ''500''

Specify integer value.

Description

When a kernel panic (system crash) occurs, specified areas of system memory are copied to the dump devices before the system shuts down. The ''savecrash'' command can then be used to copy the dump area into a directory in a file system (requires !!lots!! of space, depending on system size and what memory classes were dumped).

When the kernel includes dynamically loadable kernel modules (drivers that are not statically installed in the kernel), ''savecrash'' allocates space to keep track of module names and their locations on the file system. The space stores full path names to directories containing modules, and also module names. Space usage has been optimized by keeping only one copy of a directory path, even if more than one module is found there.

As more modules are added to the system, and if module names tend to be long, or if modules are scattered around the file system, ''modstrmax'' will need to be increased to accommodate the extra data.

Note that this parameter only affects the operation of the ''savecrash'' command. It does not limit the number of modules that can be loaded into the kernel during normal system operation.

msgmap

Specify size of the free-space resource map used for assigning locations for new messages in shared memory.

Acceptable Values:

Minimum: ''3''

Maximum: Memory limited

Default: ''msgtql+2''

Specify integer value or use integer formula expression.

Description

Message queues are implemented as linked lists in shared memory, each message consisting of one or more contiguous slots in the message queue. As messages are allocated and deallocated, the shared memory area reserved for messages may become fragmented.

''msgmap'' specifies the size of a resource map used for allocating space for new messages. This map tracks the free holes in the shared-memory message space used by all message queues. Each entry in the map describes a corresponding set of contiguous unallocated slots: a pointer to the set, plus the size of (number of segments in) the set.

Free-space fragmentation increases as message size variation increases. Since the resource map requires an entry for each fragment of free space, excessive fragmentation can cause the free-space map array to fill up and overflow. If an overflow occurs when the kernel requests space for a new message or releases space used by a received message, the system issues the message:

DANGER: mfree map overflow

If this error message occurs, regenerate the kernel using a larger value for ''msgmap''.

msgmax

Specify the maximum individual message size allowed, in bytes.

Acceptable Values:

Minimum: '' 0''

Maximum: ''min(msgmnb, msgseg * msgssz, 65535) bytes''

Default: '' 8192 bytes''

Specify integer value.

Description

''msgmax'' defines the maximum allowable size, in bytes, of individual messages in a queue.

Increase the value of ''msgmax'' only if applications being used on the system require larger messages. This parameter prevents malicious or poorly written programs from consuming excessive message buffer space.

msgmnb

Specify maximum total size, in bytes, of all messages that can be queued simultaneously on a message queue.

Acceptable Values:

Minimum: '' 0''

Maximum: ''min(msgseg * msgssz, 65535) bytes''

Default: ''16384 bytes''

Specify integer value.

Description

''msgmnb'' specifies the maximum total combined size, in bytes, of all messages queued in a given message queue at any one time.

Any ''msgsnd()'' system call that attempts to exceed this limit returns the error:

''EAGAIN'' If ''IPC_NOWAIT'' is set.

''EINTR'' If ''IPC_NOWAIT'' is not set.

''msgmnb'' can be set by rebuilding the kernel or be set in the running kernel with ''settune()''. ''SAM'' and ''kmtune'' use ''settune()''. Dynamically changing this parameter will affect only new message queues as they are created. Existing message queues will be unaffected.

msgmni

Specify maximum number of message queues that can exist simultaneously on the system.

Acceptable Values:

Minimum: '' 1''

Maximum: Memory limited

Default: ''50''

Specify integer value.

Description

''msgmni'' defines the maximum number of message queue identifiers allowed on the system at any given time.

One message queue identifier is needed for each message queue created on the system.

msgseg

''msgseg'' specifies the system-wide maximum total number of message segments that can exist in all message queues at any given time.

Acceptable Values:

Minimum: '' 1''

Maximum: ''32767''

Default: ''2048''

Specify integer value.

Description

''msgseg'', multiplied by ''msgssz'', defines the total amount of shared-memory message space that can exist for all message queues, system-wide (not including message header space).

The related parameter, ''msgssz'' (message segment size in bytes), defines the number of bytes that are reserved for each message segment in any queue. When a message is placed in the queue, the length of the message determines how many ''msgssz'' segments are used for that message. Space consumed by each message in the queue is always an integer multiple of ''msgssz''.

''msgseg'' (message segments) defines the number of these units that are available for all queues, system-wide.

''msgssz''

Specify message segment size to be used when allocating message space in message queues.

Acceptable Values:

Minimum: ''1''

Maximum: Memory limited

Default: ''8 bytes''

Specify integer value.

Description

''msgssz'', multiplied by ''msgseg'', defines the total amount of shared-memory message space that can exist for all message queues, system-wide (not including message header space).

''msgssz'' specifies the size, in bytes, of the segments of memory space to be allocated for storing IPC messages. Space for new messages is created by allocating one or more message segments containing ''msgssz'' bytes each as required to hold the entire message.

msgtql

Specify maximum number of messages allowed to exist on the system at any given time.

Acceptable Values:

Minimum: ''1''

Maximum: Memory limited

Default: ''40''

Specify integer value.

Description

''msgtql'' dimensions an area for message header storage. One message header is created for each message queued in the system. Thus, the size of the message header space defines the maximum total number of messages that can be queued system-wide at any given time. Message headers are stored in shared (swappable) memory.

If a ''msgsnd()'' system call attempts to exceed the limit imposed by ''msgtql'', it:

  • Blocks waiting for a free header if the ''IPC_NOWAIT'' flag is !!not!! set, or it

  • returns ''EAGAIN'' if ''IPC_NOWAIT'' is set.

ndilbuffers

Set maximum number of Device I/O Library device files that can be open simultaneously at any given time.

Acceptable Values:

Minimum: '' 1''

Maximum: Memory limited

Default: ''30''

Specify integer value.

Description

''ndilbuffers'' defines the maximum number of Device I/O Library (DIL) device files that can be open, system-wide, at any given time.

''ndilbuffers'' is used exclusively by the Device I/O Library. If DIL is not used, no DIL buffers are necessary.

nbuf

Set system-wide number of file-system buffer and cache buffer headers (determines maximum total number of buffers on system). See note below.

Acceptable Values:

Minimum: ''0 or 16''

Maximum: Memory limited

Default: ''0''

Specify integer value of zero (see below).

Description

This parameter is for backwards compatibility and should be set to zero because dynamic buffer cache is preferred.

If set to a non-zero value, ''nbuf'' specifies the number of buffer headers to be allocated for the file system buffer-cache. Each buffer is allocated 4096 bytes of memory unless overridden by a conflicting value for ''bufpages''.

If ''nbuf'' is set to a non-zero value that is less than 16 or greater than the maximum supported by the system, or to a value that is inconsistent with the value of ''bufpages'', the number will be increased or decreased as appropriate, and a message printed at boot time.

Related Parameters

''nbuf'' interacts with ''bufpages'' as follows:

  • ''bufpages'' = 0, ''nbuf'' = 0: Enables dynamic buffer cache.

  • ''bufpages'' not zero, ''nbuf'' = zero: Creates ''BufPages/2'' buffer headers and allocates ''bufpages'' times 4 Kbytes of buffer pool space at system boot time.

  • ''bufpages'' = 0, ''nbuf'' not zero: Allocates ''Nbuf*2'' pages of buffer pool space and creates ''Nbuf'' headers at boot time.

  • ''bufpages'' not zero, ''nbuf'' not zero: Allocates ''BufPages'' pages of buffer pool space and creates ''Nbuf'' buffer headers at boot time. If the two values conflict such that it is impossible to configure a system using both of them, ''bufpages'' takes precedence.

''ncallout''

Specify the maximum number of timeouts that can be scheduled by the kernel at any given time.

Acceptable Values:

Minimum: ''6''

Maximum: Memory limited

Default: ''16+nproc''

Specify integer value or use integer formula expression.

Description

''ncallout'' specifies the maximum number of timeouts that can be scheduled by the kernel at any given time. Timeouts are used by:

  • * ''alarm()'' system call,

  • * ''setitimer()'' system call,

  • * ''select()'' system call,

  • * drivers,

  • * ''uucp'' processes,

  • * process scheduling.

When the system exceeds the timeout limit, it prints the following fatal error to the system console:

panic: timeout table overflow

Related Parameters

If the value of ''nproc'' is increased, ''ncallout'' should be increased proportionately. A general rule is that one callout per process should be allowed unless you have processes that use multiple callouts.

ncdnode

Maximum number of open CD-ROM file-system nodes that can be in memory.

Acceptable Values:

Minimum: ''14''

Maximum: Memory limited

Default: ''150''

Specify integer value or use integer formula expression.

Description

''ncdnode'' specifies the maximum number of CD-ROM file-system nodes that can be in memory (in the vnode table) at any given time. It is functionally similar to ''ninode'' but applies only to CD-ROM file systems. Behavior is identical on Series 700 and Series 800 systems.

Each node consumes 288 bytes, which means, for example, that if ''ncdnode'' is set to 10,000, nearly 3 Mbytes of memory is reserved exclusively for CD-ROM file-system node tables.

nclist

Specify number of cblocks for pty and tty data transfers.

Acceptable Values:

Minimum: ''132''

Maximum: Limited by available memory

Default: ''(100 + 16 * MAXUSERS)''

Specify integer value or use integer formula expression.

Description

''nclist'' specifies how many cblocks are allocated in the system. Data traffic is stored in cblocks as it passes through tty and pty devices.

The default value for ''nclist'', ''(100 + 16 * MAXUSERS)'', is based on a formula of 100 cblocks for system use in handling traffic to the console, etc., plus an average of 16 cblocks per user session. Note that cblocks are also used for serial connections other than login sessions, such as SLIP connections, UUCP transfers, and terminal emulators. If your system uses these other kinds of connections, ''nclist'' should be increased accordingly.

If the cblock pool is exhausted, data being passed through a tty or pty device might be lost because no cblock was available when it was needed.

nfile

Set maximum number of files that can be open simultaneously on the system at any given time.

Acceptable Values:

Minimum: ``14''

Maximum: Memory limited

Default:

``((16*(NPROC+16+MAXUSERS)/10)+32+2*(NPTY+NSTRPTY+NSTRTEL))''

Specify integer value or use integer formula expression.

Description

``nfile'' defines the maximum number of files that can be open at any one time, system-wide.

It is the number of slots in the file descriptor table. Be generous with this number because the required memory is minimal, and not having enough slots restricts system processing capacity.
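A process that opens files can distinguish the system-wide table filling up from its own per-process descriptor limit by checking ``errno''. A minimal POSIX sketch, where ``open_checked'' is a hypothetical helper:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Open a file, distinguishing the system-wide file table being full
 * (ENFILE, governed by a limit such as nfile) from the per-process
 * descriptor limit (EMFILE). */
int open_checked(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        if (errno == ENFILE)
            fprintf(stderr, "system file table full (raise nfile)\n");
        else if (errno == EMFILE)
            fprintf(stderr, "per-process descriptor limit reached\n");
        return -1;
    }
    return fd;
}
```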

nflocks

Specify the maximum combined total number of file locks that are available system-wide to all processes at any given time.

Acceptable Values:

Minimum: ``2''

Maximum: Memory limited

Default: ``200''

Specify integer value or use integer formula expression.

Description

``nflocks'' gives the maximum number of file/record locks that are available system-wide. When choosing this number, note that one file may have several locks and databases that use ``lockf()'' may need an exceptionally large number of locks.

Open and locked files consume memory and other system resources. These resources must be balanced against other system needs to maintain optimum overall system performance. Achieving an optimum balance can be quite complex, especially on large systems, because of wide variation in the kinds of applications being used on each system and the number and types of applications that might be running simultaneously, the number of local and/or remote users on the system, and many other factors.

ninode

Specify the maximum number of open inodes that can be in memory.

Acceptable Values:

Minimum: ``14''

Maximum: Memory limited

Default: ``nproc+48+maxusers+(2*npty)''

Specify integer value or use integer formula expression.

Description

``ninode'' defines the number of slots in the inode table, and thus the maximum number of open inodes that can be in memory. The inode table is used as a cache memory. For efficiency reasons, the most recent ``ninode'' (number of) open inodes are kept in main memory. The table is hashed.

Each unique open file has an open inode associated with it. Therefore, the larger the number of unique open files, the larger ``ninode'' should be.

nkthread

Specify the maximum number of threads that all processes combined can run, system-wide, at any given time.

Acceptable Values:

Minimum `` 50''

Maximum ``30000''

Default ``(nproc*2)+16''

Specify integer or formula value.

Description

Processes that use threads for improved performance create multiple copies of certain portions of their process space, which requires memory space for thread storage as well as processor and system overhead related to managing the threads. On systems running large threaded applications, a large number of threads may be required. The kernel parameter ``max_thread_proc'' limits the number of threads that a single process can create, but other threaded applications on the system may also use a large number of threads, or they may have more modest requirements.

``nkthread'' limits the combined total number of threads that can be running on the system at any given time from all processes on the system. This value protects the system against being overwhelmed by a large number of threads that exceeds normal, reasonable operation. It protects the system against overload if multiple large applications are running, and also protects the system from users who might maliciously attempt to sabotage system operation by launching a large number of threaded programs, causing resources to become unavailable for normal system needs.

The default value allows an average of two threads per process plus an additional system allowance. If you need to use a larger value:

  • * Determine the total number of threads required by each threaded application on the system; especially any large applications.

  • * Determine how many and which of these will be running simultaneously at any given time.

  • * Add these together and combine with a reasonable allowance for other users or processes that might run occasionally using threads (``nproc*2'' might be a useful number).

  • * Select a value for ``nkthread'' that is large enough to accommodate the total, but not so large that it compromises system integrity.
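A threaded program can detect that it has hit a thread limit by checking the ``EAGAIN'' return from ``pthread_create()''. A minimal POSIX sketch, where ``run_worker'' is a hypothetical helper:

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    *(int *)arg = 1;      /* mark that the thread actually ran */
    return 0;
}

/* Create one thread and join it. pthread_create() fails with EAGAIN
 * when a new thread would exceed a limit such as nkthread or
 * max_thread_proc. Returns 1 when the worker ran, -1 on failure. */
int run_worker(void)
{
    pthread_t t;
    int ran = 0;
    int err = pthread_create(&t, 0, worker, &ran);
    if (err == EAGAIN) {
        fprintf(stderr, "thread limit reached (check nkthread)\n");
        return -1;
    }
    if (err != 0)
        return -1;
    pthread_join(t, 0);
    return ran;
}
```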

no_lvm_disks

Tell the kernel that no logical volume groups exist on the system (Series 700 only).

Acceptable Values:

Minimum: ``0'' (check for LVM disks)

Maximum: ``1'' (system has no LVM disks)

Default: ``0''

Specify integer value of ``0'' or ``1''.

Description

By default at boot time, the system checks for LVM data structures on the configured root, swap, and dump disks. If no LVM disks exist on the system, setting ``no_lvm_disks'' to 1 speeds up the boot process by omitting the check for LVM data structures.

Setting this parameter to a non-zero value on systems where LVM is being used causes kernel panics because the kernel does not obtain the necessary information about logical volumes on the system during the normal boot process.

``nproc''

Set the maximum number of processes that can exist simultaneously on the system.

Acceptable Values:

Minimum: ``10''

Maximum: Memory limited

Default: ``20+(8 * maxusers)''

Specify integer value or use integer formula expression.

Description

``nproc'' specifies the maximum total number of processes that can exist simultaneously in the system.

There are at least four system overhead processes at all times, and one entry is always reserved for the super-user.

When the total number of processes in the system is larger than ``nproc'', the system issues these messages:

At the system console:

proc: table is full

Also, if a user tries to start a new process from a shell, the following message prints on the user's terminal:

no more processes

If a user is executing ``fork()'' to create a new process, ``fork()'' returns −1 and sets ``errno'' to ``EAGAIN''.
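A program can detect that the process table is full by checking ``errno'' after ``fork()''. A minimal POSIX sketch, where ``spawn_child'' is a hypothetical helper:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child and reap it. fork() returns -1 with errno set to
 * EAGAIN when creating the child would exceed the process limit
 * (nproc). Returns the child's exit code, or -1 on failure. */
int spawn_child(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        if (errno == EAGAIN)
            fprintf(stderr, "process table full (raise nproc)\n");
        return -1;
    }
    if (pid == 0)
        _exit(42);                 /* child: exit with a marker value */
    int status = 0;
    waitpid(pid, &status, 0);      /* parent: reap the child */
    if (WIFEXITED(status))
        return WEXITSTATUS(status);
    return -1;
}
```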

npty

Specifies the maximum number of pseudo-tty data structures available on the system.

Acceptable Values:

Minimum: `` 1''

Maximum: Memory limited

Default: ``60''

Specify integer value.

Description

``npty'' limits the number of the following structures that can be used by the pseudo-teletype driver:

struct tty pt_tty[npty];

struct tty *pt_line[npty];

struct pty_info pty_info[npty];

NSTREVENT

Set the maximum number of outstanding streams bufcalls that are allowed to exist on the system at any given time.

Acceptable Values:

Minimum: none

Maximum: none

Default: ``50''

Specify integer value.

Description

This parameter limits the maximum number of outstanding bufcalls that are allowed to exist in a stream at any given time. The number of bufcalls that exist in a given stream is determined by the number and nature of the streams modules that have been pushed into that stream.

This parameter is intended to protect the system against resource overload caused if the combination of modules running in all streams issue an excessive number of bufcalls. The value selected should be equal to or greater than the combined maximum number of bufcalls that can be reasonably expected during normal operation from all streams on the system. This value depends on the behavior and structure of each available streams module as well as the number and combinations of modules that can be pushed onto all streams in the system at any given time.

nstrpty

Set the system-wide maximum number of streams-based PTYs that are allowed on the system.

Acceptable Values:

Minimum: ``0''

Maximum: Memory limited

Default: ``0''

Specify integer value.

Description

This parameter limits the number of streams-based PTYs that are allowed system-wide. When sending data to PTY devices (such as windows), a PTY device must exist for every window that is open at any given time.

This parameter should be set to a value that is equal to or greater than the number of PTY devices on the system that will be using streams-based I/O pipes. Using a parameter value significantly larger than the number of PTYs is not recommended. ``nstrpty'' is used when creating data structures in the kernel to support those streams-based PTYs, and an excessively large value wastes kernel memory space.

NSTRPUSH

Set the maximum number of streams modules that are allowed to exist in any single stream at any given time on the system.

Acceptable Values:

Minimum: none

Maximum: none

Default: ``16''

Specify integer value.

Description

This parameter defines the maximum number of streams modules that can be pushed onto any given stream. This provides some protection against run-away processes that might automatically select modules to push onto a stream, but it is not intended as a defense against malicious use of streams modules by system users.

Most systems do not require more than about three or four modules in any given stream. However, there may be some unusual cases where more modules are needed. The default value for this parameter allows as many as 16 modules in a stream, which should be sufficient for even the most demanding installations.

If your system needs more than 16 modules in a stream, the need should be carefully evaluated, and the demands on other system resources such as outstanding bufcalls and other factors should also be carefully evaluated.

NSTRSCHED

Set the maximum number of streams scheduler daemons (``smpsched'') that are allowed to run at any given time on the system.

Acceptable Values:

Minimum: ``0''

Maximum: ``32''

Default: ``0''

Specify integer value.

Description

This parameter defines the maximum number of multi-processor (MP) streams-scheduler daemons to run on systems containing more than one processor. Note that uni-processor (UP) systems do not use an MP scheduler daemon, but both MP and UP systems always have one UP streams scheduler (``supsched'').

If the parameter value is set to zero, the system determines how many daemons to run, based on the number of processors in the system. The value selected is ``1'' for 2-4 processors, ``2'' for 5-8 processors, ``3'' for 9-16 processors, and ``4'' for more than 16 processors.

If the parameter value is set to a positive, non-zero value, that is the number of ``smpsched'' daemons that will be created on an MP system.

nstrtel

Specifies the number of telnet device files that the kernel can support for incoming ``telnet'' sessions.

Acceptable Values:

Minimum: ``60''

Maximum: `` ''

Default: ``60''

Specify integer value.

Description

``nstrtel'' specifies the number of kernel data structures that are created at system boot time that are required to support the device files used by incoming telnet sessions on a server. This number should match the number of device files that exist on the system. If the ``insf'' command or SAM is used to create more telnet device files, the value of ``nstrtel'' must be increased accordingly or the device files cannot be used because there are no kernel data structures available for communicating with the system.

Select a value for ``nstrtel'' that is equal to or greater than the number of telnet device files on the system. Selecting a value that exceeds the number of device files actually existing on the system wastes the memory consumed by extra data structures, but it may be justified if you are planning to add more device files.

nswapdev

Specify number of disk devices that can be enabled for device swap.

Acceptable Values:

Minimum: `` 1''

Maximum: ``25''

Default: ``10''

Specify an integer value equal to the number of physical disk devices that have been configured for device swap up to the maximum limit of 25. Only an integer value is allowed (formula values do not work for this parameter).

Description

``nswapdev'' defines the maximum number of devices that can be used for device swap.

At system boot time, the kernel creates enough internal data structures to support device swap to the specified number of physical devices that have reserved system swap areas. If the specified value is greater than the number of available devices, the extra data structure space is never used, thus wasting a little bit of memory (<50 bytes per structure). If the value is less than the number of available devices, some devices cannot be used for swap due to lack of supporting data structures in the kernel.

Related Parameters

None.

nswapfs

Specify number of file systems that can be enabled for file-system swap.

Acceptable Values:

Minimum: `` 1''

Maximum: ``25''

Default: ``10''

Specify an integer value equal to the number of file systems that are available for file-system swap up to the maximum limit of 25.

Description

``nswapfs'' defines the maximum number of file systems that can be used for file system swap.

At system boot time, the kernel creates enough internal data structures (about 300 bytes per structure) to support file system swap to the specified number of file systems. If the specified value is greater than the number of available file systems, the extra data structure space is never used, thus wasting that much memory. If the value is less than the number of available file systems, some file systems cannot be used for swap due to lack of supporting data structures.

Related Parameters

nsysmap

Set the number of entries in the kernel dynamic memory virtual address space resource map.

Acceptable Values:

Minimum: `` 800''

Maximum: Memory Limited

Default: `` 2 * nproc''

Specify integer value.

Description

nsysmap and its 64-bit equivalent, nsysmap64, set the size of the kernel dynamic memory resource map, an array of address/length pairs that describe the free virtual space in the kernel's dynamic address space. There are different tunables for the 32-bit and 64-bit kernels because the 64-bit kernel has more virtual address space.

Previously, the kernel dynamic memory resource map was set by the system solely, and not easily changed. Certain workloads, which fragmented the kernel address space significantly, resulted in too many entries in the resource map. When this happened, the last entry in the resource map was thrown away, resulting in "leaked" kernel virtual address space. If this overflow happened often enough, virtual space was exhausted.

The system uses an algorithm to automatically scale the map size, at boot time, according to the system workload. If the value is still not set high enough to avoid the problem of overflowing the memory resource map array, you can tune this parameter to fit a particular workload.

Note that even when you override the default value, the kernel may increase the value beyond that value depending on the system size.

Purpose

This tunable was added to address the problem of ``kalloc: out of virtual space'' system panics. Only systems that experience the resource map overflow will need to modify this tunable parameter.

The following message will appear on the console when the resource map overflow occurs:

sysmap32: rmap ovflo, lost [X,Y]

or

sysmap64: rmap ovflo, lost [X,Y]

where X and Y are hexadecimal numbers.

If this happens rarely, no action is necessary. If it happens frequently (for example, several times a day on a system that the user does not intend to reboot for a long time, such as a year or more), the tunable should be increased. If the tunable is not increased, the following panic may occur:

kalloc: out of kernel virtual space

When increasing nsysmap{32|64}, doubling the tunable value is a reasonable rule of thumb. If the problem persists after doubling the tunable several times from the default, there is likely another kernel problem, and the customer should go through their normal HP support channels to investigate.

Side-Effects

If the value of this parameter is increased, kernel memory use increases very slightly. Depending on the workload, if the tunable is quite large, the performance of kernel memory allocation may be negatively affected.

Lowering the value of this parameter from the default is risky and increases the probability of resource map overflows, eventually leading to a kernel panic. Consult your HP support representative prior to decreasing the value of nsysmap.

num_tachyon_adapters

Define the number of Fibre Channel Tachyon adapters in the system if the system does not support I/O virtual addressing.

Acceptable Values:

Minimum: ``0''

Maximum: ``5''

Default: ``0''

Specify integer value or use integer formula expression. A non-zero value is !!required!! if the system does not provide I/O virtual addressing. Choose a value equal to the number of Tachyon FCP adapters installed in the system.

Description

``num_tachyon_adapters'' specifies how many Tachyon FCP adapters are installed in the system so that an appropriate amount of memory can be allocated for them at system start-up if the system does not provide I/O virtual addressing.

Specifying a Value for ``num_tachyon_adapters''

If your system does not provide I/O virtual addressing, set ``num_tachyon_adapters'' equal to the number of Tachyon FCP adapters actually installed in the machine. During boot-up, the system then reserves a corresponding amount of memory for use by those adapters, varying that amount according to the value of ``max_fcp_reqs''.

If the system supports I/O virtual addressing, set this parameter to zero. The system then automatically allocates memory as needed.

If you do not know whether your system provides I/O virtual addressing, setting this parameter to a non-zero value is harmless, provided the value does not exceed the number of Tachyon FCP adapters actually installed in the system. If the value exceeds the number of installed adapters, a corresponding amount of memory is wasted because it cannot be used for other purposes.

Related Parameters and System Values

The system allocates memory for use by Tachyon FCP adapters based on the combination of values specified for ``num_tachyon_adapters'' and ``max_fcp_reqs''.

o_sync_is_o_dsync

``o_sync_is_o_dsync'' specifies whether the system is allowed to translate the ``O_SYNC'' flag in an ``open()'' or ``fcntl()'' call into an ``O_DSYNC'' flag.

Acceptable Values:

Minimum: ``0''

Maximum: ``1''

Default: ``0''

Specify integer value.

Description

In an ``open()'' or ``fcntl()'' call, the ``O_SYNC'' and ``O_DSYNC'' flags are used to ensure that data is properly written to disk before the call returns. If these flags are not set, the function returns as soon as the disk-access request is initiated, and assumes that the write operation will be successfully completed by the system software and hardware.

Setting the ``O_SYNC'' or ``O_DSYNC'' flag prevents the function from returning to the calling process until the requested disk I/O operation is complete, thus ensuring that the data in the write operation has been successfully written on the disk. Both flags are equivalent in this regard except for one important difference: if ``O_SYNC'' is set, the function does not return until the disk operation is complete !!and!! until all file attributes changed by the write operation (including access time, modification time, and status change time) are also written to the disk. Only then does it return to the calling process.

Setting ``o_sync_is_o_dsync'' to ``1'' allows the system to convert any ``open()'' or ``fcntl()'' calls containing an ``O_SYNC'' flag into the same call using the ``O_DSYNC'' flag instead. This means that the function returns to the calling process before the file attributes are updated on the disk, thus introducing the risk that this information might not be on the disk if a system failure occurs.

Setting this parameter to a non-zero value allows the function to return before file time-stamp attributes are updated, but still ensures that actual file data has been committed to disk before the calling process can continue. This is useful in installations that perform large volumes of disk I/O and require file data integrity, but which can gain some performance advantage by not forcing time-stamp updates before proceeding. When the benefits of the performance improvement exceed the risks associated with having incorrect file-access timing information after a system or disk crash, this parameter can be set to ``1''. If that is not the case, it should remain set to its default value of zero.

The setting of this parameter does not affect disk I/O operations where ``O_SYNC'' is not used.

page_text_to_local

Enable or disable swapping of program text segments to local swap device on NFS cluster client.

Acceptable Values:

Minimum: ``0'' (stand-alone, or client uses file-system server)

Maximum: ``1'' (use client local swap)

Default: ``1'' (use client local swap)

Specify integer value of ``0'' or ``1''.

Description

Programs usually contain three segments:

Text segment Unchanging executable-code part of the program.

Data segment Arrays and other fixed data structures

DSS segment Dynamic storage, stack space, etc.

To minimize unnecessary network traffic, NFS cluster clients that have no local swap device discard the text segment of programs when it becomes necessary to swap memory in order to make space available for another program or application. Text segments are discarded because swapping to swap space on the server when no local disk is available then later retrieving the same data that exists in the original program file wastes server disk space and increases network data traffic.

However, when adequate swap space is available on a local disk device that is connected to the client machine, it is more efficient to write the text segment to local swap and retrieve it later. This eliminates two separate text-segment data transfers to and from the server, thus improving cluster performance (depending on the particular applications and programs being used). To use local swap this way, the available swap space must be greater than the !!maximum!! total swap space required by !!all!! processes running on the system at any time. If this condition is not met, system memory-allocation errors occur when space conflicts arise.

``page_text_to_local'' is the configurable kernel parameter that determines whether text segments are discarded or swapped to the local device to save network traffic.

``0''

Do not use local client swap device. Client either has no local swap device, or sufficient space is not available for full text swap support. Discard text segment if memory space is needed, then retrieve original file from server when ready to execute again.

``1''

Swap text pages to local swap device when memory space is needed for other purposes, then retrieve from swap device when the segment is required. This usually improves client performance and decreases cluster network data traffic. If you use this value, local swap !!must!! be enabled, and the available device-swap space on the client's local disk must be sufficient for the maximum required virtual memory for !!all!! programs that may be running on the client at any given time. Otherwise, processes may fail due to insufficient memory.

On stand-alone, non-cluster systems, set ``page_text_to_local'' to ``0''.

pfail_enabled

Disable or enable system power-failure routines (Series 800 only).

Acceptable Values:

Minimum: ``0''

Maximum: ``1''

Default: ``0''

Specify integer value.

Description

``pfail_enabled'' determines whether a Series 800 system can recognize a local power failure (that halts the computer by affecting its central bus). The value can be set to zero or ``1'' as follows:

``0''

Disable powerfail detect. This prevents the system from running the ``/sbin/powerfail'' command (started from ``/etc/inittab'') so that it can provide for recovery when a power failure occurs. Programs running when power fails cannot resume execution when power is restored.

``1''

Enable powerfail detection. This causes the system to recognize a power failure and employ recovery mechanisms related to the ``/sbin/powerfail'' command (started from ``/etc/inittab'') so that when a power failure occurs, programs running when power fails can resume execution when power is restored.

Be sure to follow the guidelines for correct shutdown and start-up of a system necessitated by powerfail. These guidelines are discussed in the !!System Administration Tasks!! manual.

Note that although powerfail appears in all ``/etc/inittab'' files, the entry is only recognized and used by systems that support powerfail.

public_shlibs

Enable "public" protection IDs on shared libraries.

Acceptable Values:

Minimum: ``0''

Maximum: ``1'' (or non-zero)

Default: ``1''

Specify integer value.

Description

``public_shlibs'' enables the use of "public" protection IDs on shared libraries.

Shared libraries are implemented using ``mmap()'', and each individual ``mmap()'' is given a unique protection ID. Processes have four protection ID registers of which two are hard-coded to text/data. The remaining two are shared back and forth between whatever shared library/shared memory segments the user process is accessing.

A performance problem arose when shared libraries were introduced, causing increased protection ID thrashing. To minimize this effect, a public protection ID was added to all shared-library mappings, thus effectively removing shared libraries from the pool of objects competing for the two shared protection ID registers.

Setting ``public_shlibs'' to ``1'' allows the system to assign public protection IDs to shared libraries. Setting it to ``0'' disables public access and places a unique protection ID on every shared library. The default value is ``1'', and any non-zero value is interpreted as ``1''.

Set the parameter to zero value !!only!! if there is some "security hole" or other reason why a public value should not be used.

rtsched_numpri

Specify the number of available, distinct real-time process scheduling priorities.

Acceptable Values:

Minimum: `` 32''

Maximum: ``512''

Default: `` 32''

Specify integer value.

Description

``rtsched_numpri'' specifies the number of distinct priorities that can be set for real-time processes running under the real-time scheduler (POSIX Standard, P1003.4).

Appropriate Values

The default value of 32 satisfies the needs of most configurations. In cases where you need more distinct levels of priorities among processes, increase the value accordingly. However, be aware that increasing the value of ``rtsched_numpri'' to specify a larger number of priorities can cause the system to spend more time evaluating eligible processes, thus resulting in possible reduced overall system performance.

remote_nfs_swap

``remote_nfs_swap'' enables or disables the ability of the system to perform swap to NFS-mounted devices or file systems.

Acceptable Values:

Minimum: ``0''

Maximum: ``1''

Default: ``0''

Specify integer value.

Description

Use ``remote_nfs_swap'' to enable or disable the ability of the system to perform swap to NFS-mounted devices or file systems. The default value of zero disables NFS swap. To enable, change the value to ``1''.

This parameter was initially created to allow clients in an NFS cluster to use disk space on the server for swap. NFS clusters are no longer supported on HP-UX systems, and this parameter is set to zero by default (remote NFS swap not allowed). Setting this parameter to allow remote NFS swap is not very useful unless the system where it is allowed has extremely fast NFS capabilities.

scsi_maxphys

Set the maximum record size for the SCSI I/O subsystem, in bytes.

Acceptable Values:

Minimum: `` 1048576 (1 MB)''

Maximum: ``16777215 (16MB - 1)''

Default: `` 1048576 (1 MB)''

Specify integer value.

Description

This parameter is used in conjunction with ``st_large_recs'' to enable large tape record support without logical record breakup and recombination.

scsi_max_qdepth

Set the maximum number of SCSI commands queued up for SCSI devices.

Acceptable Values:

Minimum: `` 1''

Maximum: ``255''

Default: `` 8''

Specify integer value.

Description

For devices that support a queue depth greater than the system default, this parameter controls how many I/Os the driver will attempt to queue to the device at any one time. Valid values are (1-255). Some disk devices will not support the maximum queue depth settable by this command. Setting the queue depth in software to a value larger than the disk can handle will result in I/Os being held off once a QUEUE FULL condition exists on the disk.

st_fail_overruns

If set, SCSI tape read resulting in data overrun causes failure.

Acceptable Values:

Disabled: ``0''

Enabled: ``1''

Default: ``0''

Specify ``0'' or ``1''.

Description

Certain technical applications depend on the fact that reading a record smaller than the actual tape record size should generate an error.

st_large_recs

If set, enables large record support for SCSI tape.

Acceptable Values:

Disabled: ``0''

Enabled: ``1''

Default: ``0''

Specify ``0'' or ``1''.

Description

This parameter is used in conjunction with ``scsi_maxphys'' to enable large tape record support without logical record breakup and recombination.

scroll_lines

Specify the number of display lines in ITE console screen buffer.

Acceptable Values:

Minimum: `` 60''

Maximum: ``999''

Default: ``100''

Specify integer value.

Description

``scroll_lines'' defines the scrolling area (the number of lines of emulated terminal screen memory on each Internal Terminal Emulator (ITE) port configured into the system).

semmni

Specify maximum number of sets of IPC semaphores that can exist simultaneously on the system.

Acceptable Values:

Minimum: `` 2''

Maximum: Memory limited

Default: ``64''

Specify integer value or use integer formula expression.

Description

``semmni'' defines the number of sets (identifiers) of semaphores available to system users.

When the system runs out of semaphore sets, the ``semget()'' system call returns an ``ENOSPC'' error.

semmns

Define the system-wide maximum number of individual IPC semaphores that can be allocated for users.

Acceptable Values:

Minimum: `` 2''

Maximum: Memory limited

Default: ``128''

Specify integer value or use integer formula expression.

Description

``semmns'' defines the system-wide maximum total number of individual semaphores that can be made available to system users.

When the free-space map shows that there are not enough contiguous semaphore slots in the semaphore area of shared memory to satisfy a ``semget()'' request, ``semget()'' returns an ``ENOSPC'' error. This error can occur even when there are enough free semaphore slots, if those slots are not contiguous.

semmnu

Define the maximum number of processes that can have undo operations pending on any given IPC semaphore on the system.

Acceptable Values:

Minimum: `` 1''

Maximum: ``nproc-4''

Default: ``30''

Specify integer value.

Description

An !!undo!! is a special, optional flag in a semaphore operation which causes that operation to be undone if the process that invoked it terminates.

``semmnu'' specifies the maximum number of processes that can have undo operations pending on a given semaphore. It determines the size of the ``sem_undo'' structure.

A ``semop()'' system call using the ``SEM_UNDO'' flag returns an ``ENOSPC'' error if this limit is exceeded.

semmap

Specify size of the free-space resource map used for allocating new System V IPC semaphores in shared memory.

Acceptable Values:

Minimum: ``4''

Maximum: Memory limited

Default: ``SemMNI+2''

Specify integer value or use integer formula expression.

Description

Each set of semaphores allocated per identifier occupies 1 or more contiguous slots in the sem array. As semaphores are allocated and deallocated, the sem array might become fragmented.

``semmap'' dimensions the resource map which shows the free holes in the sem array. An entry in this map is used to point to each set of contiguous unallocated slots; the entry consists of a pointer to the set, plus the size of the set.

If semaphore usage is heavy and a request for a semaphore set cannot be accommodated, the following message appears:

danger: mfree map overflow

You should then configure a new kernel with a larger value for ``semmap''.

Fragmentation of the sem array is reduced if all semaphore identifiers have the same number of semaphores; ``semmap'' can then be somewhat smaller.

Four is the lower limit: one slot is overhead for the map, and a second slot is always needed at system initialization to show that the sem array is free.

semume

Define the maximum number of IPC semaphores that a given process can have undo operations pending on.

Acceptable Values:

Minimum: `` 1''

Maximum: ``SemMNS''

Default: ``10''

Specify integer value.

Description

An !!undo!! is a special, optional flag in a semaphore operation which causes that operation to be undone if the process that invoked it terminates.

``semume'' specifies the maximum number of semaphores that any given process can have undo operations pending on.

``semop'' is the value of the maximum number of semaphores you can change with one system call. This value is specified in the file ``/usr/include/sys/sem.h''.

A ``semop()'' system call using the ``SEM_UNDO'' flag returns an ``EINVAL'' error if the ``semume'' limit is exceeded.

semvmx

Specify maximum possible semaphore value.

Acceptable Values:

Minimum: `` 1''

Maximum: ``65535''

Default: ``32767''

Specify integer value.

Description

``semvmx'' specifies the maximum value a semaphore can have. This limit must not exceed the largest number that can be stored in a 16-bit unsigned integer or undetectable semaphore overflows can occur.

Any ``semop()'' system call that tries to increment a semaphore value to greater than ``semvmx'' returns an ``ERANGE'' error. If ``semvmx'' is greater than 65535, semaphore values can overflow without being detected.

``semop'' is the value of the maximum number of semaphores you can change with one system call. This value is specified in the file ``/usr/include/sys/sem.h''.

sema

Enable or disable System V IPC semaphores support in kernel at system boot time (Series 700 only).

Acceptable Values:

Minimum: ``0'' (exclude System V IPC semaphore code from kernel)

Maximum: ``1'' (include System V IPC semaphore code in kernel)

Default: ``1''

Specify integer value of ``0'' or ``1''.

Description

``sema'' determines whether the code for System V IPC semaphore is to be included in the kernel at system boot time (Series 700 systems only).

``sema'' = 1 Code is included in the kernel (enable IPC semaphores).

``sema'' = 0 Code is not included in the kernel (disable IPC semaphores).

Series 800 systems: IPC semaphores are always enabled in the kernel.

Series 700 systems: If ``sema'' is set to zero, all other IPC semaphore parameters are ignored.

Starbase graphics library and some other HP-UX subsystems use semaphores. Disable only if you are certain that no applications on your system depend on System V IPC semaphores.

If ``sema'' is zero, any program that uses the ``semget()'' or ``semop()'' system calls receives a ``SIGSYS'' signal.

semaem

Define the maximum amount a semaphore value can be changed by a semaphore "undo" operation.

Acceptable Values:

Minimum: ``0''

Maximum: ``SEMVMX'' or ``32767'', whichever is smaller

Default: ``16384''

Specify integer value or use integer formula expression.

Description

An !!undo!! is an optional flag in a semaphore operation which causes that operation to be undone if the process which invoked it dies.

``semaem'' specifies the maximum amount the value of a semaphore can be changed by an undo operation.

The undo value is cumulative per process, so if one process has more than one undo operation on a semaphore, the values of each undo operation are added together and the sum is stored in a variable named ``semadj''. ``semadj'' then contains the number by which the semaphore will be incremented or decremented if the process dies.

sendfile_max

``sendfile_max'' defines the maximum number of pages of buffer cache that can be in transit via the ``sendfile()'' system call at any given time.

Acceptable Values:

Minimum: `` 0''

Maximum: ``0x40000''

Default: `` 0''

Specify integer value.

Description

``sendfile_max'' places a limit on the number of pages of buffer cache that can be monopolized by the ``sendfile()'' system call at any given time. ``sendfile()'' is a system call used by web servers so they can avoid the overhead of copying data from user space to kernel space using the ``send()'' system call. The networking software uses the buffer-cache buffer directly while data is in-transit over the wire. Normally this is a very short time period, but when sending data over a slow link or when retransmitting due to errors, the in-transit period can be much longer than usual.

``sendfile_max'' prevents ``sendfile()'' from locking up all of the available buffer cache by limiting the amount of buffer cache memory that sendfile can access at any given time.

``sendfile_max'' is the upper bound on the number of !!pages!! of buffer cache that can be in transit via sendfile at any one time. The minimum value of zero means there is no limit on the number of buffers. Any other value limits buffer-cache access to the number of pages indicated. The default value is 0.

Setting ``sendfile_max'' to ``1'' means, in effect, that no buffers are available. Every buffer is at least one page, and that value prevents any access because the first request for buffer space would exceed the upper bound, forcing ``sendfile()'' to revert to using ``malloc()'' and data-copy operations, which is what the ``send()'' system call does.

Setting ``sendfile_max'' to any other value up to 0x40000 allows the administrator to protect the system against the possibility of ``sendfile()'' monopolizing too many buffers if buffer availability becomes a problem during normal system operation.

shmem

Enable or disable System V IPC shared memory support in kernel at system boot time (Series 700 only).

Acceptable Values:

Minimum: ``0'' (exclude System V IPC shared memory code from kernel)

Maximum: ``1'' (include System V IPC shared memory code in kernel)

Default: ``1''

Specify integer value of ``0'' or ``1''.

Description

``shmem'' determines whether the code for System V IPC shared memory is to be included in the kernel at system boot time (Series 700 systems only).

``shmem'' = ``1'' Code is included in the kernel (enable IPC shared memory).

``shmem'' = ``0'' Code is not included in the kernel (disable IPC shared memory).

Series 800 systems: IPC shared memory is always enabled in the kernel.

Series 700 systems: If ``shmem'' is set to zero, all other IPC shared memory parameters are ignored.

When to Disable Shared Memory

Some subsystems such as Starbase graphics require shared memory. Others such as X Windows use shared memory (often in large amounts) for server-client communication if it is available, or sockets if it is not. If memory space is at a premium and such applications can operate, albeit slower, without shared memory, you may prefer to run without shared memory enabled.

shmmax

Specify system-wide maximum allowable shared memory segment size.

Acceptable Values:

Minimum: ``2048'' (2 Kbytes)

Maximum: ``1 Gbyte'' on 32-bit systems

Maximum: ``4 Tbyte'' on 64-bit systems

Default: ``0x04000000'' (64 Mbytes)

Specify integer value.

Description

``shmmax'' defines the system-wide maximum allowable shared memory segment size in bytes. Any ``shmget()'' system call that requests a segment larger than this limit returns an error.

The value used cannot exceed maximum available swap space. For minimum and maximum allowable values, as well as the default value for any given system, refer to values in ``/etc/conf/master.d/*'' files.

``shmmax'' can be set by rebuilding the kernel or be set in the running kernel with ``settune()''. ``SAM'' and ``kmtune'' use ``settune()''. ``shmmax'' is only checked when a new shared memory segment is created. Dynamically changing this parameter will only limit the size of shared memory segments created after the call to ``settune()''.

shmmni

Specify system-wide maximum allowable number of shared memory segments (by limiting the number of segment identifiers).

Acceptable Values:

Minimum: `` 3''

Maximum: (memory limited)

Default: `` 200'' identifiers

Specify integer value.

Description

``shmmni'' specifies the maximum number of shared memory segments allowed to exist simultaneously, system-wide. Any ``shmget()'' system call requesting a new segment when ``shmmni'' segments already exist returns an error. This parameter defines the number of entries in the shared memory segment identifier list which is stored in non-swappable kernel space.

Setting ``shmmni'' to an arbitrarily large number wastes memory and can degrade system performance. Setting the value too high on systems with small memory configuration may consume enough memory space that the system cannot boot. Select a value that is as close to actual system requirements as possible for optimum memory usage. A value not exceeding 1024 is recommended unless system requirements dictate otherwise.

Starbase graphics requires that ``shmmni'' be set to not less than 4.

shmseg

Define maximum number of shared memory segments that can be simultaneously attached to a single process.

Acceptable Values:

Minimum: ``1''

Maximum: ``shmmni''

Default: ``120''

Specify integer value.

Description

``shmseg'' specifies the maximum number of shared memory segments that can be attached to a process at any given time. Any calls to ``shmat()'' that would exceed this limit return an error.

``shmseg'' can be set by rebuilding the kernel or be set in the running kernel with ``settune()''. ``SAM'' and ``kmtune'' use ``settune()''. ``shmseg'' is only checked in ``shmat()'' whenever a segment is attached to a process. Dynamically changing this parameter will only affect future calls to ``shmat()''. Existing shared memory segments will be unaffected.

STRCTLSZ

Set the maximum number of control bytes allowed in the control portion of any streams message on the system.

Acceptable Values:

Minimum: ``0''

Maximum: Memory limited

Default: ``1024'' bytes

Specify integer value.

Description

This parameter limits the number of bytes of control data that can be inserted by ``putmsg()'' in the control portion of any streams message on the system. If the parameter is set to zero, there is no limit on how many bytes can be placed in the control segment of the message.

``putmsg()'' returns ``ERANGE'' if the buffer being sent is larger than the current value of ``STRCTLSZ''.

STRMSGSZ

Set the maximum number of data bytes allowed in any streams message on the system.

Acceptable Values:

Minimum: ``0''

Maximum: Memory limited

Default: ``8192'' bytes

Specify integer value.

Description

This parameter limits the number of bytes of control data that can be inserted by ``putmsg()'' or ``write()'' in the data portion of any streams message on the system. If the parameter is set to zero, there is no limit on how many bytes can be placed in the data segment of the message.

``putmsg()'' returns ``ERANGE'' if the buffer being sent is larger than the current value of ``STRMSGSZ''; ``write()'' segments the data into multiple messages.

streampipes

Force all pipes to be streams-based.

Acceptable Values:

Minimum: ``0''

Maximum: ``1''

Default: ``0''

Specify integer value.

Description

This parameter determines the type of pipe that is created by the ``pipe()'' system call. If set to the default value of zero, all pipes created by ``pipe()'' are normal HP-UX file-system pipes. If the value is ``1'', ``pipe()'' creates streams-based pipes and modules can be pushed onto the resulting stream.

If this parameter is set to a non-zero value, the ``pipemod'' module and the ``pipedev'' driver must be configured in the file ``/stand/system''.

swchunk

Specify chunk size to be used for swap.

Acceptable Values:

Minimum: `` 2048''

Maximum: ``65536''

Default: `` 2048''

Specify integer value.

Use the default value of ``2048'' unless you need to configure more than 32 Gbytes of swap. See the help for ``maxswapchunks'' before changing this parameter.

Description

``swchunk'' defines the chunk size for swap. This value must be an integer power of two.

Swap space is allocated in "chunks", each containing ``swchunk'' blocks of ``DEV_BSIZE'' bytes each. When the system needs swap space, one swap chunk is obtained from a device or file system. When that chunk has been used and another is needed, a new chunk is obtained from a different device or file system, thus distributing swap use over several devices and/or file systems to improve system efficiency and minimize monopolization of a given device by the swap system.

swapmem_on

Enable pseudo-swap reservation.

Acceptable Values:

Minimum: ``0'' (disable pseudo-swap reservation)

Maximum: ``1'' (enable pseudo-swap reservation)

Default: ``1''

Specify integer value of ``0'' or ``1''.

Description

``swapmem_on'' enables or disables the reservation of pseudo-swap, which is space in system memory considered as available virtual memory space in addition to device swap space on disk. By default, pseudo-swap is enabled.

Virtual memory (swap) space is normally allocated from the device swap area on system disks. However, on systems that have massive amounts of installed RAM and large disks or disk arrays, there may be situations where it would be advantageous to not be restricted to the allocated device swap space.

For example, consider an administrator running a system in single-user mode that has 200 Mbytes of installed RAM, only 20 Mbytes of which is used by the kernel, and 1 Gbyte of swap area on the root disk array. Suppose a process is running that requires 1.1 Gbytes of swap space. Since no other users have processes running on the system, providing access to the unused RAM by the swap system would provide sufficient swap space. ``swapmem_on'' accomplishes this.

Administrators of workstations and smaller systems may prefer to disable this capability, depending on system and user needs.

timeslice

Specify the scheduling timeslice interval.

Acceptable Values:

Minimum: `` -1''

Maximum: ``2147483647'' (approximately 8 months)

Default: ``10'' (ten 10-msec ticks)

Specify integer value or use integer formula expression.

Description

The ``timeslice'' interval is the amount of time one process is allowed to run before the CPU is given to the next process at the same priority. The value of ``timeslice'' is specified in units of (10 millisecond) clock ticks. There are two special values:

`` 0''

Use the system default value (currently ten 10-msec ticks, or 100 milliseconds).

``-1''

Disable round-robin scheduling completely.

Impact on System

``timeslice'' imposes a time limit which, when it expires, forces a process to check for pending signals. This guarantees that any processes that do not make system calls can be terminated (such as a runaway process in an infinite loop). Setting ``timeslice'' to a very large value, or to -1, allows such processes to continue operating without checking for signals, thus causing system performance bottlenecks or system lock-up.

Use the default value for ``timeslice'' unless a different value is required by system applications having specific real-time needs.

No memory allocation relates to this parameter. Some CPU time is spent at each timeslice interval, but this time has not been precisely measured.

timezone

Specify the time delay, in minutes west of Coordinated Universal Time, to the local time zone.

Acceptable Values:

Minimum: ``-720''

Maximum: `` 720''

Default: `` 420''

Specify integer value.

Description

``timezone'' specifies the time delay in minutes from Coordinated Universal Time in a westerly direction to the local time zone where the system is located. A negative value is interpreted as minutes east of Coordinated Universal Time. The value is stored in a structure defined in ``/usr/include/sys/time.h'' as follows:

struct timezone tz = { TimeZone, DST };

struct timezone {
        int tz_minuteswest;   /* minutes west of Greenwich */
        int tz_dsttime;       /* type of dst correction */
};

#define DST_NONE 0   /* not on dst */
#define DST_USA  1   /* USA style dst */
#define DST_AUST 2   /* Australian style dst */
#define DST_WET  3   /* Western European dst */
#define DST_MET  4   /* Middle European dst */
#define DST_EET  5   /* Eastern European dst */

unlockable_mem

Specify minimum amount of memory that is to remain reserved for system overhead and virtual memory management use.

Acceptable Values:

Minimum: ``0''

Maximum: Available memory indicated at power-up

Default: ``0'' (system sets to appropriate value)

Specify integer value.

Description

``unlockable_mem'' defines the minimum amount of memory that is to always remain available for virtual memory management and system overhead. Increasing the amount of unlockable memory decreases the amount of lockable memory.

Specify ``unlockable_mem'' in 4-Kbyte pages. Note that current amounts of available and lockable memory are listed along with the physical page size in startup messages, which you can view later by running ``/etc/dmesg''.

If the value for ``unlockable_mem'' exceeds available system memory, it is set equal to available memory (reducing lockable memory to zero).

Any call that requires lockable memory may fail if the amount of lockable memory is insufficient. Note that lockable memory is available for virtual memory except when it is locked.

vas_hash_locks

``vas_hash_locks'' is reserved for future use.

Acceptable Values:

Default: `` 128''

Customers should not attempt to change this parameter.

vmebpn_public_pages

``vmebpn_public_pages'' specifies the number of 4-Kbyte pages reserved for the VME slave I/O memory mapper.

Acceptable Values:

Minimum: `` 0''

Maximum: ``32''

Default: `` 1''

Specify integer value.

Description

``vmebpn_public_pages'' specifies the number of 4-Kbyte pages reserved for the VME slave I/O memory mapper.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vmebpn_sockets

``vmebpn_sockets'' specifies whether the VME socket domain ``AF_VME_LINK'' is active or not.

Acceptable Values:

Minimum: ``0'' (``AF_VME_LINK'' inactive)

Maximum: ``1'' (``AF_VME_LINK'' active)

Default: ``1'' (``AF_VME_LINK'' active)

Specify integer value.

Description

``vmebpn_sockets'' enables or disables the VME socket domain ``AF_VME_LINK''.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vmebpn_tcp_ip

Maximum number of DLPI PPAs allowed.

Acceptable Values:

Minimum: ``0''

Maximum: ``1''

Default: ``1''

Specify integer value.

Description

``vmebpn_tcp_ip'' specifies the maximum number of DLPI PPAs allowed on the system. If set to zero, TCP/IP is disabled. Otherwise, the maximum value is ``1''.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vmebpn_tcp_ip_mtu

``vmebpn_tcp_ip_mtu'' specifies the maximum number of Kbytes allowed in PPA transmission units.

Acceptable Values:

Minimum: `` 0''

Maximum: ``64''

Default: `` 8''

Specify integer value.

Description

``vmebpn_tcp_ip_mtu'' specifies the maximum number of Kbytes allowed in PPA transmission units.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vmebpn_total_jobs

``vmebpn_total_jobs'' specifies the system-wide maximum number of VME ports that can be open at any given time.

Acceptable Values:

Minimum: `` 0''

Maximum: ``8096''

Default: `` 16''

Specify integer value.

Description

``vmebpn_total_jobs'' specifies the system-wide maximum number of VME ports that can be open at any given time.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vme_io_estimate

``vme_io_estimate'' specifies the number of 4-Kbyte pages in the kernel I/O space that are needed by and are to be allocated to the VME subsystem.

Acceptable Values:

Minimum: `` 0''

Maximum: ``0x800''

Default: ``0x800''

Specify integer value.

Description

``vme_io_estimate'' specifies how many 4-Kbyte pages in the kernel I/O space are to be allocated for use by the VME subsystem.

Refer to VME documentation for further information about setting kernel parameters for the optional VME subsystem.

vps_ceiling

Specify the maximum page size (in Kbytes) that the kernel can select when it chooses a page size based on system configuration and object size.

Acceptable Values:

Minimum: `` 4''

Maximum: ``65536''

Default: `` 16''

Specify integer value.

Description

This parameter is provided as a means to minimize lost cycle time caused by TLB (translation look-aside buffer) misses on systems using newer PA-RISC devices such as the PA-8000 that have smaller TLBs and no hardware TLB walker.

If a user application does not use the ``chatr'' command to specify a page size for program text and data segments, the kernel selects a page size that, based on system configuration and object size, appears to be suitable. This is called transparent selection. The selected size is then compared to the default maximum page-size value defined by ``vps_ceiling'' that is configured at system-boot time. If the value is larger than ``vps_ceiling'', ``vps_ceiling'' is used.

The value is also compared with the default minimum page-size value defined by ``vps_pagesize'' that is configured at system-boot time. If the value is smaller than ``vps_pagesize'', ``vps_pagesize'' is used.

Note also that if the value specified by ``vps_ceiling'' is not a legitimate page size, the kernel uses the next !!lower!! valid value.

For more information about how these parameters are used, and how they affect system operation, refer to the whitepaper entitled !!Performance Optimized Page Sizing: Getting the Most out of your HP-UX Server!!.

This document is available on the World Wide Web at: http://www.unixsolutions.hp.com/products/hpux/pop.html

vps_chatr_ceiling

Specify the maximum page size (in Kbytes) that can be specified when a user process uses the ``chatr'' command to specify a page size.

Acceptable Values:

Minimum: `` 4'' Kbytes

Maximum: ``65536'' Kbytes

Default: ``65536'' Kbytes

Specify integer value.

Description

This parameter is provided as a means to minimize lost cycle time caused by TLB (translation look-aside buffer) misses on systems using newer PA-RISC devices such as the PA-8000 that have smaller TLBs and no hardware TLB walker.

vps_pagesize

Specify the default user-page size (in Kbytes) that is used by the kernel if the user application does not use the ``chatr'' command to specify a page size.

Acceptable Values:

Minimum: `` 4''

Maximum: ``65536''

Default: `` 4''

Specify integer value.

Description

This parameter is provided as a means to minimize lost cycle time caused by TLB (translation look-aside buffer) misses on systems using newer PA-RISC devices such as the PA-8000 that have smaller TLBs and no hardware TLB walker.

vxfs_max_ra_kbytes

Set the maximum amount of read-ahead data, in kilobytes, that the kernel may have outstanding for a single VxFS filesystem.

Acceptable Values:

Minimum: `` 0''

Maximum: ``65536''

Default: ``1024''

Specify integer value or use integer formula expression.

Description

When data is read from a disk drive, the system may read additional data beyond that requested by the operation. This "read-ahead" speeds up sequential disk accesses, by anticipating that additional data will be read, and having it available in system buffers before it is requested. This parameter limits the number of read-ahead blocks that the kernel is allowed to have outstanding for any given VxFS filesystem. The limit applies to each individual VxFS filesystem, !!not!! to the system-wide total.

vxfs_ra_per_disk

Set the amount of VxFS filesystem read-ahead permitted per disk drive.

Acceptable Values:

Minimum: `` 0''

Maximum: ``8192''

Default: ``1024''

Specify an integer value or use an integer formula expression.

Description

When data is read from a disk drive, the system may read additional data beyond that requested by the operation. This "read-ahead" speeds up sequential disk accesses, by anticipating that additional data will be read, and having it available in system buffers before it is requested. This parameter specifies the amount of read-ahead permitted per disk drive.

The total amount of read-ahead is determined by multiplying ``vxfs_ra_per_disk'' by the number of drives in the logical volume. If the filesystem does not reside in a logical volume, then the number of drives is effectively one.

The total amount of read-ahead that the kernel may have outstanding for a single VxFS filesystem is constrained by ``vxfs_max_ra_kbytes''.

vx_ncsize

Specify the number of bytes to be reserved for the directory path-name cache used by the VxFS file system.

Acceptable Values:

Minimum: ``0''

Maximum: None

Default: ``1024''

Specify integer value.

Description

The VxFS file system uses a name cache to store directory pathname information related to recently accessed directories in the file system. Retrieving this information from a name cache allows the system to access directories and their contents without having to use direct disk accesses to find its way down a directory tree every time it needs a directory that is used frequently. Using a name cache in this way can save considerable overhead, especially in large applications such as databases where the system is repetitively accessing a particular directory or directory path.

``vx_ncsize'' specifies how much space, in bytes, is set aside for the VxFS file system manager to use for this purpose. The default value is sufficient for most typical HP-UX systems, but for larger systems or systems with applications that use VxFS disk I/O intensively, some performance enhancement may result from expanding the cache size. The efficiency gained, however, depends greatly on the variety of directory paths used by the application or applications, and what percentage of total process time is expended while interacting with the VxFS file system.


       