Glossary

This glossary lists the terms that are used to describe Tru64 UNIX performance, availability, and tuning.

active list

Pages that are being used by the virtual memory subsystem or the UBC.

adaptive RAID 3/5

See dynamic parity RAID

AL_PA

The Arbitrated Loop Physical Address (AL_PA) is used to address nodes on the Fibre Channel loop. When a node is ready to transmit data, it transmits Fibre Channel primitive signals that include its own identifying AL_PA.

anonymous memory

Modifiable memory that is used for stack, heap, or malloc.

arbitrated loop

A Fibre Channel topology in which frames are routed around a loop set up by the links between the nodes in the loop. All nodes in a loop share the bandwidth, and bandwidth degrades slightly as nodes and cables are added.

attributes

Dynamically configurable kernel variables, whose values you can modify to improve system performance. You can utilize new attribute values without rebuilding the kernel.

bandwidth

The rate at which an I/O subsystem or component can transfer bytes of data. Bandwidth is especially important for applications that perform large sequential transfers.

See also transfer rate

bitfile metadata table (BMT)

The bitfile metadata table describes the file extents on an AdvFS volume.

blocking queue

A queue in which read and synchronous write requests are cached; the blocking queue is used primarily for reads and for kernel synchronous write requests.

See also flush queue

BMT

See bitfile metadata table (BMT)

bottleneck

A system resource that is operating at or near its capacity and is causing performance degradation.

bus extenders

Devices used by UltraSCSI technology to join bus segments so that systems and storage can be configured over long distances.

See also bus segments

bus segments

Electrically distinct portions of an UltraSCSI bus, joined by bus extenders, that allow systems and storage to be configured over long distances.

See also bus extenders

cache

A temporary location for holding data that is used to improve performance by reducing latency. CPU caches and secondary caches hold physical addresses. Disk track caches and write-back caches hold disk data. Caches can be volatile (that is, not backed by disk data or a battery) or nonvolatile.

cache hit

Data found in a cache.

cache hit rate

The percentage of data requests that are satisfied from a cache.

cache miss

Data that was not found in a cache.

capacity

The maximum theoretical throughput of a system resource, or the maximum amount of data, in bytes, that a disk can contain. A resource that has reached its capacity may become a bottleneck and degrade performance.

cascaded switches

Multiple switches that may be connected to each other to form a network of switches.

See also meshed fabric

cluster

A loosely coupled group of servers (cluster member systems) that share data for the purposes of high availability. Some cluster products utilize a high-performance interconnect for fast and dependable communication.

Compaq Analyze

A diagnostic tool that provides error event analysis and translation.

configuration

The assemblage of hardware and software that comprises a system or a cluster. For example, CPUs, memory boards, the operating system, and mirrored disks are parts of a configuration.

configure

To set up or modify a hardware or software configuration. For example, configuring the I/O subsystem can include connecting SCSI buses and setting up mirrored disks.

copy-on-write page fault

A page fault that occurs when a process needs to modify a read-only virtual page.

data path

The width of the path that carries data on a bus; the data path determines the actual bandwidth of the bus.

deferred mode

A swap space allocation mode by which swap space is not reserved until the system needs to write a modified virtual page to swap space. Deferred mode is sometimes referred to as lazy mode.

delay

See latency

disk access time

A combination of the seek time and the rotational latency, measured in milliseconds. A low access time is especially important for applications that perform many small I/O operations.

See also rotational latency, seek time

disk partitions

Disk partitions are logical divisions of a disk that allow you to organize files by putting them into separate areas of varying sizes. Partitions hold data in structures called file systems and can also be used for system operations such as paging and swapping.

disk quotas

Allow the system administrator to limit the disk space available to users and to monitor disk space usage.

dynamically wired memory

Wired memory that is used for dynamically allocated data structures, such as system hash tables. User processes also allocate dynamically wired memory for address space by using virtual memory locking interfaces, including the mlock function.
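
As a minimal illustration (not part of the original definition), the following C sketch wires a region of a process' address space with the mlock interface mentioned above; the region size is arbitrary, and on some systems the call requires privilege or a page-aligned address.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 128 * 1024;            /* arbitrary region size */
        char *buf = malloc(len);
        if (buf == NULL)
            return 1;

        /* Wire the region: these pages become dynamically wired memory
           and cannot be reclaimed by paging until they are unlocked.
           Some implementations require privilege or page alignment. */
        if (mlock(buf, len) != 0) {
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... use the wired buffer for latency-sensitive work ... */

        munlock(buf, len);                  /* unwire before freeing */
        free(buf);
        return 0;
    }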

dynamic parity RAID

Also called adaptive RAID 3/5, dynamic parity RAID combines the features of RAID3 and RAID5 to improve disk I/O performance and availability for a wide variety of applications. Adaptive RAID 3/5 dynamically adjusts, according to workload needs, between data transfer-intensive algorithms and I/O operation-intensive algorithms.

eager mode

See immediate mode

extent

Contiguous area of disk space that AdvFS allocates to a file.

E_Port

An expansion port on a switch. Communication between two switches is routed between their E_Ports.

fabric

A switch, or multiple interconnected switches, that route frames between the originator node (transmitter) and destination node (receiver).

fail over / failover

To automatically utilize a redundant resource after a hardware or software failure, so that the resource remains available. For example, if a cluster member system fails, the applications running on that system automatically fail over to another member system.

Fast SCSI

Enables I/O devices to attain high peak-rate transfers in synchronous mode. Fast SCSI is also called Fast10.

Fast10

See Fast SCSI

Fast20

See UltraSCSI

FC-AL

See arbitrated loop

file-backed memory

Memory that is used for program text or shared libraries.

flush queue

A queue in which write requests are cached; the flush queue is used primarily for buffer write requests and synchronous writes.

See also blocking queue

frame

All data is transferred in a packet of information called a frame. A frame is limited to 2112 bytes. If the information consists of more than 2112 bytes, it is divided up into multiple frames.

free list

Pages that are clean and are not being used (the size of the free list controls when page reclamation occurs).

F_Port

A port within the fabric (fabric port). Each F_Port is assigned a 64-bit unique node name and a 64-bit unique port name when it is manufactured. Together, the node name and port name make up the worldwide name.

FL_Port

An F_Port containing the loop functionality is called an FL_Port.

hard zoning

Zones are enforced at the physical level across all fabric switches by hardware blocking of the Fibre Channel frames.

hardware RAID

A storage subsystem that provides RAID functionality by using intelligent controllers, caches, and software.

high availability

The ability of a resource to withstand a hardware or software failure. High availability is achieved by using some form of resource duplication that removes single points of failure. Availability also is measured by a resource's reliability. No resource can be protected against an infinite number of failures.

immediate mode

A swap space allocation mode by which swap space is reserved when modifiable virtual address space is created. Immediate mode is often referred to as eager mode and is the default swap space allocation mode.

inactive pages

The oldest pages that are being used by processes.

interprocess communication

Interprocess communication (IPC) is the exchange of information between two or more processes.

IPC

See interprocess communication

kernel variables

Variables that determine kernel and subsystem behavior and performance. System attributes and parameters are used to access kernel variables.

latency

The amount of time to complete a specific operation. Latency is also called delay. High performance requires low latency. I/O latency can be measured in milliseconds, while memory latency is measured in microseconds. Memory latency depends on the memory bank configuration and the system's memory requirements.

lazy mode

See deferred mode

lazy queue

Logical series of queues in which asynchronous write requests are cached.

link

The physical connection between an N_Port and another N_Port or an N_Port and an F_Port. A link consists of two connections, one to transmit information and one to receive information. The transmit connection on one node is the receive connection on the node at the other end of the link. A link may be optical fiber, coaxial cable, or shielded twisted pair.

mesh

See meshed fabric

meshed fabric

A cascaded switch configuration that can tolerate network failures, up to and including the failure of a switch, without losing a data path to a node connected to the SAN.

mirroring

Maintaining identical copies of data on different disks, which provides high data availability and improves disk read performance. Mirroring is also known as RAID 1.

multiprocessor

A system with two or more processors (CPUs) that share common physical memory.

namei cache

Location where the virtual file system (VFS) caches a recently accessed file name and its corresponding vnode.

NetRAIN

A Redundant Array of Independent Network Adapters (NetRAIN) interface provides a mechanism to protect against certain kinds of network connectivity failures.

network adapter

See network interface card (NIC)

network interface

See network interface card (NIC)

network interface card (NIC)

A circuit board used to create a physical connection to a network. A NIC is also called a network adapter or a network interface.

node

The source and destination of a frame. A node may be a computer system, a redundant array of independent disks (RAID) array controller, or a disk device. Each node has a 64-bit unique node name (worldwide name) that is built into the node when it is manufactured.

N_Port

Each node must have at least one Fibre Channel port from which to send or receive data. This node port is called an N_Port. Each port is assigned a 64-bit unique port name (worldwide name) when it is manufactured. An N_Port is connected directly to another N_Port in a point-to-point topology. An N_Port is connected to an F_Port in a fabric topology.

NL_Port

In an arbitrated loop topology, information is routed around a loop. A node port that can operate on the loop is called an NL_Port (node loop port). The information is repeated by each NL_Port until it reaches its destination. Each port has a 64-bit unique port name (worldwide name) that is built into the node when it is manufactured.

page

The smallest portion of physical memory that the system can allocate (8 KB of memory).
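
For illustration (assuming the standard sysconf interface is available), a minimal C sketch that queries the page size at run time instead of hard-coding 8 KB:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Query the page size rather than assuming 8 KB. */
        long page_size = sysconf(_SC_PAGESIZE);
        if (page_size == -1) {
            perror("sysconf");
            return 1;
        }
        printf("page size: %ld bytes\n", page_size);
        return 0;
    }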

pageable memory

Physical memory that is not wired.

page coloring

The attempt to map a process' entire resident set into the secondary cache.

page fault

An instruction to the virtual memory subsystem to locate a requested page and make the virtual-to-physical address translation in the page table.

page in

To move a page from a disk location to physical memory.

page-in page fault

A page fault that occurs when a requested address is found in swap space.

page out

To write the contents of a modified (dirty) page from physical memory to swap space.

page table

An array containing an entry for each current virtual-to-physical address translation.

paging

The process by which pages that are allocated to processes and the UBC are reclaimed for reuse.

See also Unified Buffer Cache

parallel SCSI

The most common type of SCSI, which supports variants that provide a variety of performance and configuration options.

parameters

Statically configurable kernel variables, whose values can be modified to improve system performance. You must rebuild the kernel to utilize new parameter values. Many parameters have corresponding attributes.

parity RAID

A type of RAID functionality that provides high data availability by storing, on a separate disk or on multiple disks, redundant information that is used to regenerate data. Parity RAID is also known as a type of RAID3.

physical memory

The total capacity of the memory boards installed in your system. Physical memory is either wired or it is shared by processes and the UBC.

preferred transfer size

The size of data transfers to and from the disk that is most efficient for the device driver. This value is provided by the device driver.

Privileged Architecture Library (PAL)

Controls the movement of addresses and data among the CPU cache, the secondary and tertiary caches, and physical memory. This movement is transparent to the operating system.

RAID

RAID (redundant array of independent disks) technology provides high disk I/O performance and data availability. The Tru64 UNIX operating system provides RAID functionality by using disks and the Logical Storage Manager software (LSM). Hardware-based RAID functionality is provided by intelligent controllers, caches, disks, and software.

RAID0

Also known as disk striping, RAID0 functionality divides data into blocks and distributes the blocks across multiple disks in an array. Distributing the disk I/O load across disks and controllers improves disk I/O performance. However, striping decreases availability because one disk failure makes the entire disk array unavailable.

RAID1

Also known as data mirroring, RAID1 functionality maintains identical copies of data on different disks in an array. Duplicating data provides high data availability. In addition, RAID1 improves the disk read performance, because data can be read from two locations. However, RAID1 decreases disk write performance, because data must be written twice. Mirroring n disks requires 2n disks.

RAID3

RAID3 functionality divides data blocks and distributes (stripes) the data across a disk array, providing parallel access to data. RAID3 provides data availability; a separate disk stores redundant parity information that is used to regenerate data if a disk fails. It requires an extra disk for the parity information. RAID3 increases bandwidth, but it provides no improvement in the throughput. RAID3 can improve the I/O performance for applications that transfer large amounts of sequential data.

RAID5

RAID5 functionality distributes data blocks across disks in an array. Redundant parity information is distributed across the disks, so each array member contains the information that is used to regenerate data if a disk fails. RAID5 allows independent access to data and can handle simultaneous I/O operations. RAID5 provides data availability and improves performance for large file I/O operations, multiple small data transfers, and I/O read operations. It is not suited to applications that are write-intensive.

random access pattern

Refers to an access pattern in which data is read from or written to blocks in various locations on a disk.

raw I/O

I/O to a disk or disk partition that does not use a file system. Raw I/O bypasses buffers and caches, and can provide better performance than file system I/O.
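
As a hedged sketch of raw I/O, the following C program reads directly from a character-special (raw) device; the device name /dev/rdisk/dsk0c is hypothetical and depends on the configuration, and some drivers require sector-aligned buffers and transfer sizes.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical raw device name; substitute the device file
           for your configuration. */
        int fd = open("/dev/rdisk/dsk0c", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Raw I/O bypasses the file system; transfers are typically
           done in multiples of the disk sector size. */
        char buf[8192];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n < 0)
            perror("read");
        else
            printf("read %ld bytes from the raw device\n", (long)n);

        close(fd);
        return 0;
    }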

redundancy

The duplication of a resource for purposes of high availability. For example, you can obtain data redundancy by mirroring data across different disks or by using parity RAID. You can obtain system redundancy by setting up a cluster, and network redundancy by using multiple network connections. The more levels of resource redundancy you have, the greater the resource availability. For example, a cluster with four member systems has more levels of redundancy and thus higher availability than a two-system cluster.

reliability

The average amount of time that a component will perform before a failure that causes a loss of data. Reliability is often expressed as the mean time to data loss (MTDL), the mean time to first failure (MTTF), or the mean time between failures (MTBF).

resident set

The complete set of all the virtual addresses that have been mapped to physical addresses (that is, all the pages that have been accessed during process execution).

resource

A hardware or software component (such as the CPU, memory, network, or disk data) that is available to users or applications.

rotational latency

The amount of time, in milliseconds, for a disk to rotate to a specific disk sector.

route

The path a packet takes through a network from one system to another. It enables you to communicate with other systems on other networks. Routes are stored on each system in the routing tables or routing database.

scalability

The ability of a system to utilize additional resources with a predictable increase in performance, or the ability of a system to absorb an increase in workload without a significant performance degradation.

scalable

A system's ability to utilize additional hardware resources with a predictable impact on performance.

SCSI

Small Computer System Interface (SCSI) is a device and interconnect technology.

SCSI bus speed

See bandwidth

seek time

The amount of time, in milliseconds, for a disk head to move to a specific disk track.

selective storage presentation (SSP)

Controls which servers have access to each storage unit; that is, SSP controls access at the storage unit level.

sequential access pattern

Refers to an access pattern in which data is read from or written to contiguous blocks on a disk.

serial SCSI

The next generation of SCSI. Serial SCSI reduces parallel SCSI's limitations on speed, distance, and connectivity, and also provides availability features such as hot swap and fault tolerance.

short page fault

A page fault that occurs when a requested address is found in the virtual memory subsystem's internal data structures.

simple name server (SNS)

A switch service that stores names, addresses, and attributes for up to 15 minutes, and provides them to other devices in the fabric. SNS is defined by Fibre Channel standards and exists at a well-known address. It may also be referred to as a directory service.

SMP

Symmetrical multiprocessing (SMP) is the ability of a multiprocessor system to execute the same version of the operating system, access common memory, and execute instructions simultaneously.

soft zoning

A software implementation of zoning that is based on the Simple Name Server (SNS). Soft zoning works only if all hosts honor it; it does not work if a host is not programmed to allow for soft zoning.

software RAID

Storage subsystem that provides RAID functionality by using software (for example, LSM).

static wired memory

Wired memory that is allocated at boot time and used for operating system data and text and for system tables. Static wired memory is also used by the metadata buffer cache, which holds recently accessed UNIX file system (UFS) and CD-ROM file system (CDFS) metadata.

striping

Distributing data across multiple disks in a disk array, which improves I/O performance by allowing parallel access. Striping is also known as RAID 0. Striping can improve the performance of sequential data transfers and I/O operations that require high bandwidth.

swap device

A block device in a configured section of a disk.

swap in

To move a swapped-out process' pages from disk swap space to physical memory in order for the process to execute. Swapins occur only if the number of pages on the free page list is higher than a specific amount for a period of time.

swap out

To move all the modified pages associated with a low-priority process or a process with a large resident set size from physical memory to swap space. A swapout occurs when the number of pages on the free page list falls below a specific amount for a period of time. Swapouts continue until the number of pages on the free page list reaches a specific amount.

swap-space interleaving

See striping

swapping

Writing a suspended process' modified (dirty) pages to swap space, and putting the clean pages on the free list. Swapping occurs when the number of pages on the free list falls below a specific threshold.

switch zoning

Controls which servers can communicate with each other and with each storage controller host port. Switch zoning also controls access at the storage system level.

throughput

The rate at which an I/O subsystem or component can perform I/O operations. Throughput is especially important for applications that perform many small I/O operations.

transfer rate

See bandwidth

transmission method

Refers to the electrical implementation of the SCSI specification for a bus.

tune

To modify the kernel by changing the values of kernel variables in order to improve system performance.

UBC

See Unified Buffer Cache

UBC LRU

The Unified Buffer Cache least-recently used (UBC LRU) pages are the oldest pages that are being used by the UBC.

Unified Buffer Cache

A portion of physical memory that is used to cache the most recently accessed file system data.

UltraSCSI

Refers to a storage configuration of devices (adapters or controllers) and disks that doubles the performance of SCSI-2 configurations. UltraSCSI (also called Fast20) supports increased bandwidth and throughput, and can support extended cable distances.

virtual address space

The array of pages that an application can map into physical memory. Virtual address space is used for anonymous memory (memory used for stack, heap, or malloc) and for file-backed memory (memory used for program text or shared libraries).
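
For illustration (not from the original text), a minimal C sketch that creates both kinds of virtual address space: anonymous memory with malloc and file-backed memory with mmap. The file name /tmp/example.dat is hypothetical.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Anonymous memory: modifiable pages used for heap data. */
        char *heap = malloc(64 * 1024);
        if (heap == NULL)
            return 1;
        heap[0] = 'x';      /* first touch causes a zero-filled-on-demand page fault */

        /* File-backed memory: pages backed by the file itself. */
        int fd = open("/tmp/example.dat", O_RDONLY);    /* hypothetical file */
        if (fd >= 0) {
            struct stat sb;
            if (fstat(fd, &sb) == 0 && sb.st_size > 0) {
                void *mapped = mmap(NULL, (size_t)sb.st_size, PROT_READ,
                                    MAP_SHARED, fd, 0);
                if (mapped != MAP_FAILED)
                    munmap(mapped, (size_t)sb.st_size);
            }
            close(fd);
        }

        free(heap);
        return 0;
    }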

virtual memory subsystem

A subsystem that uses a portion of physical memory, disk swap space, and daemons and algorithms to control the allocation of memory to processes and to the UBC.

VLDB

Refers to very-large database (VLDB) systems, which are VLM systems that use a large and complex storage configuration.

VLM

Refers to very-large memory (VLM) systems, which utilize 64-bit architecture, multiprocessing, and at least 2 GB of memory.

vnode

The kernel data structure for an open file.

wired list

Pages that are wired and cannot be reclaimed.

wired memory

Pages of memory that are wired and cannot be reclaimed by paging.

working set

The set of virtual addresses that are currently mapped to physical addresses. The working set is a subset of the resident set and represents a snapshot of the process' resident set.

workload

The applications running on a system and the users utilizing the system at any one time under normal conditions.

worldwide name (WWN)

A unique number assigned to a subsystem by the Institute of Electrical and Electronics Engineers (IEEE) and set by the manufacturer prior to shipping. The worldwide name assigned to a subsystem never changes. Fibre Channel devices have both a node name and a port name as worldwide names, each of which is a 64-bit number.

zero-filled-on-demand page fault

A page fault that occurs when a requested address is accessed for the first time.

zone

A logical subset of the Fibre Channel devices that are connected to the fabric.

zoning

Allows partitioning of resources for management and access control. Zoning may provide efficient use of hardware resources by allowing one switch to serve multiple clusters or even multiple operating systems. It entails splitting the fabric into zones, where each zone is essentially a virtual fabric.