This glossary lists the terms that are used to describe Tru64 UNIX performance, availability, and tuning.
Pages that are being used by the virtual memory subsystem or the UBC.
The Arbitrated Loop Physical Address (AL_PA) is used to address nodes on the Fibre Channel loop. When a node is ready to transmit data, it transmits Fibre Channel primitive signals that include its own identifying AL_PA.
Modifiable memory that is used for stack, heap, or malloc.
A Fibre Channel topology in which frames are routed around a loop set up by the links between the nodes in the loop. All nodes in a loop share the bandwidth, and bandwidth degrades slightly as nodes and cables are added.
Dynamically configurable kernel variables, whose values you can modify to improve system performance. You can utilize new attribute values without rebuilding the kernel.
The rate at which an I/O subsystem or component can transfer bytes of data. Bandwidth is especially important for applications that perform large sequential transfers.
See also transfer rate
The bitfile metadata table describes the file extents on the volume.
The blocking queue is a queue in which reads and synchronous write requests are cached. The blocking queue is used primarily for reads and for kernel synchronous write requests.
See also flush queue
A system resource that is being pushed near to its capacity and is causing a performance degradation.
Bus extenders are used by the UltraSCSI technology to configure systems and storage over long distances.
See also bus segments
Bus segments are used by the UltraSCSI technology to configure systems and storage over long distances.
See also bus extenders
A temporary location for holding data that is used to improve performance by reducing latency. CPU caches and secondary caches hold physical addresses. Disk track caches and write-back caches hold disk data. Caches can be volatile (that is, not backed by disk data or a battery) or nonvolatile.
Data found in a cache.
The measure of effective cached data.
Data that was not found in a cache.
The maximum theoretical throughput of a system resource, or the maximum amount of data, in bytes, that a disk can contain. A resource that has reached its capacity may become a bottleneck and degrade performance.
Multiple switches that may be connected to each other to form a network of switches.
See also meshed fabric
A loosely coupled group of servers (cluster member systems) that share data for the purposes of high availability. Some cluster products utilize a high-performance interconnect for fast and dependable communication.
A diagnostic tool that provides error event analysis and translation.
A page fault that occurs when a process needs to modify a read-only virtual page.
The assemblage of hardware and software that comprises a system or a cluster. For example, CPUs, memory boards, the operating system, and mirrored disks are parts of a configuration.
To set up or modify a hardware or software configuration. For example, configuring the I/O subsystem can include connecting SCSI buses and setting up mirrored disks.
Determines the actual bandwidth for a bus.
A swap space allocation mode by which swap space is not reserved until the system needs to write a modified virtual page to swap space. Deferred mode is sometimes referred to as lazy mode.
See latency
A combination of the seek time and the rotational latency, measured in milliseconds. A low access time is especially important for applications that perform many small I/O operations.
See also rotational latency, seek time
Disk partitions are logical divisions of a disk that allow you to organize files by putting them into separate areas of varying sizes. Partitions hold data in structures called file systems and can also be used for system operations such as paging and swapping.
Allows the system administrator to limit the disk space available to users and to monitor disk space usage.
Wired memory that is used for dynamically allocated data structures, such as system hash tables. User processes also allocate dynamically wired memory for address space by using virtual memory locking interfaces, including the mlock function.
Also called adaptive RAID3/5, dynamic parity RAID combines the features of RAID3 and RAID5 to improve disk I/O performance and availability for a wide variety of applications. Adaptive RAID3/5 dynamically adjusts, according to workload needs, between data transfer-intensive algorithms and I/O operation-intensive algorithms.
See immediate mode
Contiguous area of disk space that AdvFS allocates to a file.
Communication between two switches that is routed between two expansion ports.
A switch, or multiple interconnected switches, that route frames between the originator node (transmitter) and destination node (receiver).
To automatically utilize a redundant resource after a hardware or software failure, so that the resource remains available. For example, if a cluster member system fails, the applications running on that system automatically fail over to another member system.
Enables I/O devices to attain high peak-rate transfers in synchronous mode.
See Fast SCSI
See UltraSCSI
See arbitrated loop
Memory that is used for program text or shared libraries.
The flush queue is a queue in which reads and synchronous write requests are cached. The flush queue is used primarily for buffer write requests or synchronous writes.
See also blocking queue
All data is transferred in a packet of information called a frame. A frame is limited to 2112 bytes. If the information consists of more than 2112 bytes, it is divided up into multiple frames.
Pages that are clean and are not being used (the size of the free list controls when page reclamation occurs).
A port within the fabric (fabric port), called an F_port. Each F_port is assigned a 64-bit unique node name and a 64-bit unique port name when it is manufactured. Together, the node name and port name make up the worldwide name.
An F_Port containing the loop functionality is called an FL_Port.
Zones are enforced at the physical level across all fabric switches by hardware blocking of the Fibre Channel frames.
A storage subsystem that provides RAID functionality by using intelligent controllers, caches, and software.
The ability of a resource to withstand a hardware or software failure. High availability is achieved by using some form of resource duplication that removes single points of failure. Availability also is measured by a resource's reliability. No resource can be protected against an infinite number of failures.
A swap space allocation mode by which swap space is reserved when modifiable virtual address space is created. Immediate mode is often referred to as eager mode and is the default swap space allocation mode.
The oldest pages that are being used by processes.
Interprocess communication (IPC) is the exchange of information between two or more processes.
Variables that determine kernel and subsystem behavior and performance. System attributes and parameters are used to access kernel variables.
The amount of time to complete a specific operation. Latency is also called delay. High performance requires a low latency time. I/O latency can be measured in milliseconds, while memory latency is measured in microseconds. Memory latency depends on the memory bank configuration and the system's memory requirements.
See deferred mode
Logical series of queues in which asynchronous write requests are cached.
The physical connection between an N_Port and another N_Port or an N_Port and an F_Port. A link consists of two connections, one to transmit information and one to receive information. The transmit connection on one node is the receive connection on the node at the other end of the link. A link may be optical fiber, coaxial cable, or shielded twisted pair.
See meshed fabric
A cascaded switch configuration, which allows for network failures up to and including the switch without losing a data path to a SAN-connected node.
Maintaining identical copies of data on different disks, which provides high data availability and improves disk read performance. Mirroring is also known as RAID 1.
A system with two or more processors (CPUs) that share common physical memory.
Location where the virtual file system (VFS) caches a recently accessed file name and its corresponding vnode.
A Redundant Array of Independent Network Adapters interface provides a mechanism to protect against certain kinds of network connectivity failures.
A circuit board used to create a physical connection to a network. A NIC is also called a network adapter or a network interface.
The source and destination of a frame. A node may be a computer system, a redundant array of independent disks (RAID) array controller, or a disk device. Each node has a 64-bit unique node name (worldwide name) that is built into the node when it is manufactured.
Each node must have at least one Fibre Channel port from which to send or receive data. This node port is called an N_Port. Each port is assigned a 64-bit unique port name (worldwide name) when it is manufactured. An N_Port is connected directly to another N_Port in a point-to-point topology. An N_Port is connected to an F_Port in a fabric topology.
In an arbitrated loop topology, information is routed around a loop. A node port that can operate on the loop is called an NL_Port (node loop port). The information is repeated by each NL_Port until it reaches its destination. Each port has a 64-bit unique port name (worldwide name) that is built into the node when it is manufactured.
The smallest portion of physical memory that the system can allocate (8 KB of memory).
Physical memory that is not wired.
The attempt to map a process' entire resident set into the secondary cache.
An instruction to the virtual memory subsystem to locate a requested page and make the virtual-to-physical address translation in the page table.
To move a page from a disk location to physical memory.
A page fault that occurs when a requested address is found in swap space.
To write the contents of a modified (dirty) page from physical memory to swap space.
An array containing an entry for each current virtual-to-physical address translation.
The process by which pages that are allocated to processes and the UBC are reclaimed for reuse.
See also Unified Buffer Cache
The most common type of SCSI, which supports SCSI variants that provide a variety of performance and configuration options.
Statically configurable kernel variables, whose values can be modified to improve system performance. You must rebuild the kernel to utilize new parameter values. Many parameters have corresponding attributes.
A type of RAID functionality that provides high data availability by storing on a separate disk or multiple disks redundant information that is used to regenerate data. Parity RAID is also known as a type of RAID3.
The total capacity of the memory boards installed in your system. Physical memory is either wired or it is shared by processes and the UBC.
The size of data transfers to and from the disk that is most efficient for the device driver. This value is provided by the device driver.
Controls the movement of addresses and data among the CPU cache, the secondary and tertiary caches, and physical memory. This movement is transparent to the operating system.
RAID (redundant array of independent disks) technology provides high disk I/O performance and data availability. The Tru64 UNIX operating system provides RAID functionality by using disks and the Logical Storage Manager software (LSM). Hardware-based RAID functionality is provided by intelligent controllers, caches, disks, and software.
Also known as disk striping, RAID0 functionality divides data into blocks and distributes the blocks across multiple disks in an array. Distributing the disk I/O load across disks and controllers improves disk I/O performance. However, striping decreases availability because one disk failure makes the entire disk array unavailable.
Also known as data mirroring, RAID1 functionality maintains identical copies of data on different disks in an array. Duplicating data provides high data availability. In addition, RAID1 improves the disk read performance, because data can be read from two locations. However, RAID1 decreases disk write performance, because data must be written twice. Mirroring n disks requires 2n disks.
RAID3 functionality divides data blocks and distributes (stripes) the data across a disk array, providing parallel access to data. RAID3 provides data availability; a separate disk stores redundant parity information that is used to regenerate data if a disk fails. It requires an extra disk for the parity information. RAID3 increases bandwidth, but it provides no improvement in the throughput. RAID3 can improve the I/O performance for applications that transfer large amounts of sequential data.
RAID5 functionality distributes data blocks across disks in an array. Redundant parity information is distributed across the disks, so each array member contains the information that is used to regenerate data if a disk fails. RAID5 allows independent access to data and can handle simultaneous I/O operations. RAID5 provides data availability and improves performance for large file I/O operations, multiple small data transfers, and I/O read operations. It is not suited to applications that are write-intensive.
Refers to an access pattern in which data is read from or written to blocks in various locations on a disk.
I/O to a disk or disk partition that does not use a file system. Raw I/O bypasses buffers and caches, and can provide better performance than file system I/O.
The duplication of a resource for purposes of high availability. For example, you can obtain data redundancy by mirroring data across different disks or by using parity RAID. You can obtain system redundancy by setting up a cluster, and network redundancy by using multiple network connections. The more levels of resource redundancy you have, the greater the resource availability. For example, a cluster with four member systems has more levels of redundancy and thus higher availability than a two-system cluster.
The average amount of time that a component will perform before a failure that causes a loss of data. Often expressed as the mean time to data loss (MTDL), the mean time to first failure (MTTF), or the mean time between failures (MTBF).
The complete set of all the virtual addresses that have been mapped to physical addresses (that is, all the pages that have been accessed during process execution).
A hardware or software component (such as the CPU, memory, network, or disk data) that is available to users or applications.
The amount of time, in milliseconds, for a disk to rotate to a specific disk sector.
The path a packet takes through a network from one system to another. It enables you to communicate with systems on other networks. Routes are stored on each system in the routing tables or routing database.
The ability of a system to utilize additional resources with a predictable increase in performance, or the ability of a system to absorb an increase in workload without a significant performance degradation.
A system's ability to utilize additional hardware resources with a predictable impact on performance.
Small Computer System Interface (SCSI) is a device and interconnect technology.
See bandwidth
The amount of time, in milliseconds, for a disk head to move to a specific disk track.
Controls which server will have access to each storage unit. SSP also controls access at the storage unit level.
Refers to an access pattern in which data is read from or written to contiguous blocks on a disk.
Reduces parallel SCSI's limitation on speed, distance, and connectivity, and also provides availability features like hot swap and fault tolerance. Serial SCSI is the next generation of SCSI.
A page fault that occurs when a requested address is found in the virtual memory subsystem's internal data structures.
A switch service that stores names, addresses, and attributes for up to 15 minutes, and provides them to other devices in the fabric. SNS is defined by Fibre Channel standards and exists at a well-known address. May also be referred to as a directory service.
Symmetrical multiprocessing (SMP) is the ability of a multiprocessor system to execute the same version of the operating system, access common memory, and execute instructions simultaneously.
A software implementation of zoning that is based on the Simple Name Server (SNS). Soft zoning works only if all hosts honor it; it does not work if a host is not programmed to allow for soft zoning.
Storage subsystem that provides RAID functionality by using software (for example, LSM).
Wired memory that is allocated at boot time and used for operating system data and text and for system tables. Static wired memory is also used by the metadata buffer cache, which holds recently accessed UNIX file system (UFS) and CD-ROM file system (CDFS) metadata.
Distributing data across multiple disks in a disk array, which improves I/O performance by allowing parallel access. Striping is also known as RAID 0. Striping can improve the performance of sequential data transfers and I/O operations that require high bandwidth.
A block device in a configured section of a disk.
To move a swapped-out process' pages from disk swap space to physical memory in order for the process to execute. Swapins occur only if the number of pages on the free page list is higher than a specific amount for a period of time.
To move all the modified pages associated with a low-priority process or a process with a large resident set size from physical memory to swap space. A swapout occurs when the number of pages on the free page list falls below a specific amount for a period of time. Swapouts continue until the number of pages on the free page list reaches a specific amount.
See striping
Writing a suspended process' modified (dirty) pages to swap space, and putting the clean pages on the free list. Swapping occurs when the number of pages on the free list falls below a specific threshold.
Controls which servers can communicate with each other and with each storage controller host port. Switch zoning also controls access at the storage system level.
The rate at which an I/O subsystem or component can perform I/O operations. Throughput is especially important for applications that perform many small I/O operations.
See bandwidth
Refers to the electrical implementation of the SCSI specification for a bus.
To modify the kernel by changing the values of kernel variables in order to improve system performance.
The Unified Buffer Cache least-recently used (UBC LRU) pages are the oldest pages that are being used by the UBC.
A portion of physical memory that is used to cache most-recently accessed file system data.
Refers to a storage configuration of devices (adapters or controllers) and disks that doubles the performance of SCSI-2 configurations. UltraSCSI (also called Fast-20) supports increased bandwidth and throughput, and can support extended cable distances.
The array of pages that an application can map into physical memory. Virtual address space is used for anonymous memory (memory used for stack, heap, or malloc) and for file-backed memory (memory used for program text or shared libraries).
A subsystem that uses a portion of physical memory, disk swap space, and daemons and algorithms to control the allocation of memory to processes and to the UBC.
Refers to very-large database (VLDB) systems, which are VLM systems that use a large and complex storage configuration. The following is a typical VLM/VLDB system configuration:
An SMP system with two or more high-speed CPUs
More than 4 GB of physical memory
Multiple high-performance host bus adapters
RAID storage configuration for high performance and high availability
Refers to very-large memory (VLM) systems, which utilize 64-bit architecture, multiprocessing, and at least 2 GB of memory.
The kernel data structure for an open file.
Pages that are wired and cannot be reclaimed.
Pages of memory that are wired and cannot be reclaimed by paging.
The set of virtual addresses that are currently mapped to physical addresses. The working set is a subset of the resident set and represents a snapshot of the process' resident set.
The total number of applications running on a system and the users utilizing a system at any one time under normal conditions.
A unique number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by the manufacturer prior to shipping. The worldwide name assigned to a subsystem never changes. Fibre Channel devices have both a node name and a port name worldwide name, both of which are 64-bit numbers.
A page fault that occurs when a requested address is accessed for the first time.
A logical subset of the Fibre Channel devices that are connected to the fabric.
Allows partitioning of resources for management and access control. It may provide efficient use of hardware resources by allowing one switch to serve multiple clusters or even multiple operating systems. It entails splitting the fabric into zones, where each zone is essentially a virtual fabric.