This chapter describes how to install and initialize LSM on a standalone
system or a TruCluster Server cluster and how to use LSM to provide redundancy
for the root file systems and domains for both environments.
To upgrade a
system using LSM from Version 4.0, see
Section 7.1.
3.1 Installing the LSM Software
The LSM software comprises three optional subsets. These are located on the CD-ROM containing the base operating system software for the Tru64 UNIX product kit. You can install the LSM subsets either at the same time or after you install the mandatory operating system software.
During a full operating system installation, you have the
option to install the system's root file system and
/usr
,
/var
, and
swap
partitions directly to LSM volumes.
If you choose that option, the LSM subsets are installed automatically.
If you plan to create a cluster from the system, skip that option. The standalone system's root volumes are not used in a cluster.
An upgrade installation automatically upgrades any subsets currently installed on the system. You do not need to specify the LSM subsets during an upgrade if they are currently installed.
Note
If you are upgrading a system with file systems that use LSM volumes, first boot to single-user mode, start LSM and its volumes, and then proceed with the Tru64 UNIX upgrade installation.
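The exact commands depend on your configuration; the following is a minimal sketch, assuming an Alpha SRM console and the lsmbstartup script that the LSM entries in /etc/inittab also use (bcheckrc, which checks and mounts the local file systems, is an assumption for a typical system):
>>> boot -fl s
# /sbin/lsmbstartup
# /sbin/bcheckrc
After LSM and the file systems are available, start the upgrade installation.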
To install LSM on a running system (not as part of installing or upgrading the operating system), see the Tru64 UNIX Installation Guide.
To configure a new cluster with LSM:
Install the base operating system (Tru64 UNIX) and the LSM subsets on one system, but do not install the base file system to LSM volumes through the Installation GUI.
Create the cluster (clu_create
command).
Initialize LSM (volsetup
command).
Add other cluster members (clu_add_member
command).
Optionally, migrate AdvFS domains, including the
cluster_root
,
cluster_usr
, and
cluster_var
domains to LSM volumes (volmigrate
command).
Optionally, encapsulate the swap devices for the cluster members
to LSM volumes (volencap
command).
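Taken together, these steps reduce to a short command sequence. The following sketch assumes two shared disks (dsk4 and dsk5) for the rootdg disk group and a third disk (dsk6) for the cluster_root migration; clu_create and clu_add_member prompt for the cluster-specific information:
# clu_create
# volsetup dsk4 dsk5
# clu_add_member
# volmigrate cluster_root dsk6
# volencap swap
# volreconfig
Run the volencap and volreconfig commands on each member whose swap devices you encapsulate, as described in Section 3.4.3.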
Table 3-1
lists the LSM subsets; in the subset
name,
nnn
indicates the operating system version.
Table 3-1: LSM Software Subsets
Subset | Function |
OSFLSMBINnnn | Provides the kernel modules to build the kernel with LSM drivers. This software subset supports uniprocessor, SMP, and real-time configurations. This subset requires Standard Kernel Modules. |
OSFLSMBASEnnn | Contains the LSM administrative commands and tools required to manage LSM. This subset is mandatory if you install LSM during a full Tru64 UNIX installation. This subset requires LSM Kernel Build Modules. |
OSFLSMX11nnn | Contains the LSM Motif-based graphical user interface (GUI) management tool and related utilities. This subset requires the Basic X Environment. |
3.2 Installing the LSM License
The base operating system comes with a base LSM license, which lets you create LSM volumes that use a single concatenated plex (simple volumes). All other LSM features, such as the ability to create LSM volumes with striped, mirrored, and RAID5 plexes, and the ability to use the LSM GUIs, require a separate LSM license.
This separate LSM license is supplied in the form of a product authorization
key (PAK) called
LSM-OA
.
To install the LSM license, load the
LSM-OA
PAK into
the Tru64 UNIX License Management Facility (LMF).
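For example, a typical LMF session looks like the following sketch (see lmf(8) for the exact options on your system):
# lmf register
# lmf load 0 LSM-OA
# lmf list | grep LSM-OA
The lmf register command opens an editor in which you enter the PAK data; lmf load 0 LSM-OA loads the license units into the kernel cache, and lmf list confirms that the license is registered.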
If you need to order an LSM license, contact your service representative.
For more information on the License Management Facility, see
lmf
(8).
3.3 Initializing LSM
LSM is automatically initialized when you perform a full operating system
installation and choose the option to install the root file system and
/usr
,
/var
, and
swap
partitions
directly to LSM volumes, or if you perform an upgrade installation on a system
or cluster that was previously running LSM.
Otherwise, you must initialize
LSM manually.
Initializing LSM does the following:
Creates the
rootdg
disk group.
Reestablishes an existing LSM configuration, if found.
Adds entries to the
/etc/inittab
file to
automatically start LSM when the system or cluster restarts.
Creates the
/etc/vol/volboot
file, which contains the
host ID.
Creates LSM files and directories. (For a description of these files and directories, see Section 3.6.)
Starts the
vold
and
voliod
daemons.
To initialize LSM, you need at least two unused disks or partitions for the
rootdg
disk group, to ensure that there are multiple copies of the LSM configuration database.
See
Chapter 2
if
you need help choosing disks or partitions for the
rootdg
disk group.
In a cluster, do not use unused partitions on the quorum disk
or on any member's private boot disk.
To initialize LSM on standalone systems and clusters:
Verify (on any member in a cluster) that the LSM subsets are installed:
# setld -i | grep LSM
The following information is displayed, where nnn indicates the operating system revision:
OSFLSMBASEnnn   installed   Logical Storage Manager (System Administration)
OSFLSMBINnnn    installed   Logical Storage Manager Kernel Modules (Kernel Build Environment)
OSFLSMX11nnn    installed   Logical Storage Manager GUI (System Administration)
If the LSM subsets do not show a status of
installed
,
use the
setld
command to install them.
For more information
on installing software subsets, see the
Installation Guide.
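For example, to install the subsets from mounted distribution media (the mount point shown is a placeholder, and nnn stands for your operating system version):
# setld -l /mnt/ALPHA/BASE OSFLSMBINnnn OSFLSMBASEnnn OSFLSMX11nnn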
Verify LSM drivers are configured into the kernel:
# devswmgr -getnum driver=LSM

Device switch reservation list
                              (*=entry in use)
 driver name                     instance  major
-------------------------------  --------  -----
 LSM                                    4     43
 LSM                                    3     42
 LSM                                    2     41*
 LSM                                    1     40*
If LSM driver information is not displayed, you must rebuild the kernel
using the
doconfig
command.
For more information on rebuilding
the kernel, see the
Installation Guide.
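For example (a sketch; HOSTNAME stands for the kernel configuration file name, which is usually the system name in uppercase):
# doconfig -c HOSTNAME
# cp /usr/sys/HOSTNAME/vmunix /vmunix
# shutdown -r now
Copy the new kernel from the location that doconfig reports, then reboot so that the LSM drivers are configured into the running kernel.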
Initialize LSM, specifying at least two disks or partitions:
# volsetup {disk|partition} {disk|partition...}
For example, to initialize LSM with disks
dsk4
and
dsk5
:
# volsetup dsk4 dsk5
If you omit a disk or partition name, the
volsetup
script prompts you for it.
If the
volsetup
command displays
an error message that the initialization failed, you might need to reinitialize
the disk.
For more information about reinitializing a disk, see the Disk Configuration
GUI online help.
When LSM is initialized on a Tru64 UNIX system, the LSM configuration
is propagated to the cluster when the cluster is created with the
clu_create
command and when members are added with the
clu_add_member
command.
In a cluster, synchronize LSM throughout the cluster by entering the following command on all the other current members (except the one on which you performed step 3):
# volsetup -s
If you subsequently add a new member to the cluster with the
clu_add_member
command, LSM is automatically synchronized on the
new member.
Do not run the
volsetup
-s
command on the new member.
Normally, you do not need to verify that LSM was initialized. If the initialization fails, the system displays error messages indicating the problem.
To verify that LSM is initialized, do one or more of the following:
To verify that the
rootdg
disk group exists:
# volprint
Disk group: rootdg

TY  NAME    ASSOC   KSTATE  LENGTH   PLOFFS  STATE  TUTIL0  PUTIL0
dg  rootdg  rootdg  -       -        -       -      -       -
dm  dsk4    dsk4    -       1854536  -       -      -       -
dm  dsk5    dsk5    -       1854536  -       -      -       -
In this example,
dsk4
and
dsk5
are part of the
rootdg
disk group.
To verify that the /etc/inittab
file was
modified to include LSM entries:
# grep LSM /etc/inittab
lsmr:s:sysinit:/sbin/lsmbstartup -b /dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup -n /dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n /dev/console 2>&1 ##LSM
To verify that the
/etc/vol/volboot
file was created:
# /sbin/voldctl list
Volboot file
version: 3/1 seqno: 0.4
hostid: hostname
entries:
To verify that the
vold
daemon is enabled:
# voldctl mode
mode: enabled
To
verify that two or more
voliod
daemons are running:
# voliod
2 volume I/O daemons are running
By default, LSM starts one voliod daemon for each CPU in the system, with a minimum of two.
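If necessary, you can change the number of I/O daemons. The following is a sketch that assumes the set keyword described in voliod(8); for example, to run a total of four daemons:
# voliod set 4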
3.4 Using LSM for Critical File Systems and Swap Space
After you initialize LSM you can:
Encapsulate the root file system and primary swap space on a standalone system (Section 3.4.1)
Migrate the clusterwide root,
/usr
, and
/var
file system domains to LSM volumes (Section 3.4.2)
Encapsulate cluster members' swap devices to LSM volumes (Section 3.4.3)
Enable the hot-sparing feature of LSM to provide disk failover (Section 3.5), and configure hot-spare disks for each disk group (Section 3.5.1)
3.4.1 Creating Alternate Boot Disks (Standalone System)
You can use LSM to create an alternate boot disk for a standalone system by encapsulating the boot disk partitions to LSM volumes and mirroring those volumes. This copies the data in the boot disk partitions to another disk, which provides complete redundancy and recovery capability if the boot disk fails. If the primary boot disk fails, the system continues running off the surviving mirror, which is on a different disk. You can also reboot the system using the surviving mirror.
To create an alternate boot disk with LSM:
Use the LSM encapsulation procedure to configure each root file system partition and the primary swap space to use LSM volumes.
Add a mirror plex to the volumes to create copies of the data in the boot disk partitions.
Note
To facilitate recovery of environments that use LSM, you can use the bootable tape utility. This utility enables you to build a bootable standalone system kernel on magnetic tape. The bootable tape preserves your local configuration and provides a basic set of the LSM commands you will use during restoration. For more information on the SysMan Menu
boot_tape
option, see the System Administration manual or the online help, and
btcreate(8).
3.4.1.1 Restrictions and Requirements
The following restrictions apply when you encapsulate the system partitions and primary swap space:
The system cannot be part of a TruCluster cluster.
To create LSM volumes for the clusterwide root,
/usr
,
and
/var
file system domains, see
Section 3.4.2.
You must encapsulate the root file system and the primary swap space partition at the same time. They do not have to be on the same disk.
The LSM volumes are created in the
rootdg
disk group and have the following names:
rootvol
Assigned to the volume
created for the root file system.
Do not change this name, move the rootvol
volume out of the
rootdg
disk group, or change the assigned
minor device number of 0.
swapvol
Assigned to the volume
created for the swap space partition.
Do not change this name, move the
swapvol
volume out of the
rootdg
disk group,
or change the assigned minor device number of 1.
All other partitions are assigned an LSM volume name based
on the original partition name; for example,
vol-dsk0g
.
The following disk requirements apply:
Original Boot Disk
The partition table for the boot disk (and primary swap disk, if different)
must have at least one unused partition for the LSM private region, which
cannot be the
a
or
c
partition.
The unused partition does not have to have any space in it; if necessary, LSM takes the required space (4096 blocks by default) from the swap space and relabels the disk partitions accordingly.
If there is no space (or not enough) on any unused partition for LSM to use, the encapsulation fails.
Mirror Disk
If the primary swap space is on the boot disk, you need one
separate disk to create the mirrors for the boot partition and swap space
volumes.
This disk cannot be under LSM control, must have a disk label with
all partitions marked
unused
, and must be as large as the
total of the root file system and swap partitions on the primary boot disk
(the partitions being mirrored), plus the size of the private region (4096
blocks by default).
If the primary swap space is on a separate disk, you need
two separate disks to create the mirrors for the boot partition and swap space
volumes.
These disks cannot be under LSM control and all partitions in their
disk labels must be marked
unused
.
For more information,
see
disklabel
(8).
The disk for the mirror for the boot partition volume must be as large as the total of the root file system partitions and the private region (4096 blocks by default).
The disk for the mirror for the swap volume must be as large as the total of the swap partition and the private region (4096 blocks by default).
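As a worked example with hypothetical sizes: if the boot disk holds a 262144-block root partition and a 333824-block swap partition, a single mirror disk must provide at least 262144 + 333824 + 4096 = 600064 blocks. If the swap space is on its own disk, the boot mirror disk needs at least 262144 + 4096 = 266240 blocks and the swap mirror disk needs at least 333824 + 4096 = 337920 blocks.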
3.4.1.2 Encapsulating System Partitions (Creating System Volumes)
When you encapsulate the system partitions, each partition is converted to an LSM volume with a single concatenated plex. The steps to encapsulate the system partitions are the same whether you are using the UNIX File System (UFS) or the Advanced File System (AdvFS).
Note
The encapsulation procedure requires that you restart the system.
The encapsulation process changes the following files:
For AdvFS, the links in
the
/etc/fdmns/*
directory for domains associated with
the root disk are changed to use LSM volumes instead of disk partitions.
For UFS, the
/etc/fstab
file is changed to use LSM volumes instead of disk partitions.
For swap space, in the
/etc/sysconfigtab
file the
swapdevice
entry is
changed to use the LSM
swapvol
volume and the
lsm_rootdev_is_volume
entry is set to 1.
In addition, LSM creates a private region and stores in it a copy of
the configuration database.
If the system partitions are on different disks
(for example, the boot partitions on
dsk0
and the swap
partition on
dsk1
), LSM creates a private region on each
disk.
Normally, when you encapsulate a disk or partition, LSM creates only
an LSM
nopriv
disk for the area being encapsulated.
However,
because of the need to be able to boot the system even if the rest of the
LSM configuration is corrupted or missing, LSM creates these special-case
private regions.
To encapsulate the system partitions:
Log in as
root
.
Identify the boot disk:
# consvar -l | grep boot
boot_dev = dsk0
bootdef_dev = dsk0
booted_dev = dsk0
boot_file =
booted_file =
boot_osflags = A
booted_osflags = A
boot_reset = OFF
Identify the primary swap disk:
# swapon -s
Swap partition /dev/disk/dsk0b (default swap):
.
.
.
Verify that there is at least one unused partition on the
boot disk other than
a
or
c
:
# disklabel dsk0 | grep -p '8 part'
8 partitions:
#        size     offset    fstype  [fsize bsize cpg]  # NOTE: values not exact
  a:   262144          0     AdvFS                      # (Cyl.    0 -  115*)
  b:   262144     262144      swap                      # (Cyl.  115*-  231*)
  c:  8380080          0    unused    0     0           # (Cyl.    0 - 3707)
  d:     4096    8375984    unused    0     0           # (Cyl. 3706*- 3707)
  e:  2618597    3142885    unused    0     0           # (Cyl. 1390*- 2549*)
  f:  2614502    5761482    unused    0     0           # (Cyl. 2549*- 3706*)
  g:  1433600     524288     AdvFS                      # (Cyl.  231*-  866*)
  h:  6418096    1957888    unused    0     0           # (Cyl.  866*- 3706*)
If the swap partition is on a separate disk, repeat step 4, specifying the swap disk name.
Encapsulate the boot disk and the swap disk, specifying both if different; for example:
# volencap dsk0 dsk4
Setting up encapsulation for dsk0.
- Creating simple disk dsk0d for config area (privlen=4096).
- Creating nopriv disk dsk0a for rootvol.
- Creating nopriv disk dsk0b.
- Creating nopriv disk dsk0g.
Setting up encapsulation for dsk4.
- Creating simple disk dsk4h for config area (privlen=4096).
- Creating nopriv disk dsk4b for swapvol.

The following disks are queued up for encapsulation or use by LSM:
  dsk0d dsk0a dsk0b dsk0g dsk4h dsk4b

You must now run /sbin/volreconfig to perform actual encapsulations.
If appropriate, send a warning to your user community to alert them of the impending shutdown of the system.
When there are no users on the system, proceed with step 7.
Complete the encapsulation process.
Enter
now
when prompted to shut down the system:
# volreconfig
The system will need to be rebooted in order to continue with
LSM volume encapsulation of:
  dsk0d dsk0a dsk0b dsk0g dsk4h dsk4b

Would you like to either quit and defer encapsulation until later
or commence system shutdown now?

Enter either 'quit' or time to be used with the shutdown(8) command
(e.g., quit, now, 1, 5): [quit] now
The system shuts down, completes the encapsulation, and automatically reboots:
Encapsulating dsk0d.
Encapsulating dsk0a.
Encapsulating dsk0b.
Encapsulating dsk0g.
Encapsulating dsk4h.
Encapsulating dsk4b.

Shutdown at 14:36 (in 0 minutes) [pid 11708]

*** FINAL System shutdown message from root@hostname ***
System going down IMMEDIATELY

... Place selected disk partitions under LSM control.

System shutdown time has arrived
3.4.1.3 Mirroring System Volumes
After you encapsulate the boot disk partitions and swap space to LSM volumes, mirror the volumes to provide redundancy. This process might take a few minutes, but the root file system and swap space are available during this time.
Mirror all the system volumes at the same time.
The
-a
option in the following procedure does this for you; to mirror only the volumes
for the root file system and primary swap but not any other volumes on the
boot disk (such as the volumes for the
/usr
and
/var
file systems), omit the
-a
option.
If the swap space is on the boot disk, enter:
# volrootmir -a target_disk
For example:
# volrootmir -a dsk4
This creates a mirror on
dsk4
for all the volumes
on the boot disk.
If the swap space is on a separate disk, enter:
# volrootmir -a swap=swap_target_disk boot_target_disk
For example:
# volrootmir -a swap=dsk5 dsk4
This creates a mirror on
dsk4
for all the volumes
on the boot disk and creates a mirror on
dsk5
for the swap
volume.
Note
This procedure does not add a log plex (DRL) to the root and swap volumes; logging is not supported on the root volume, and there is no need for a log on the swap volume. Attaching a log plex degrades
rootvol
and swapvol
performance and provides no benefit as the log is not used during system recovery.
3.4.1.4 Displaying Information for System Volumes
If you followed the steps in Section 3.4.1.3 and did not receive error messages, you can assume that the operation was successful. However, you can also display the results using the following commands:
Display simplified volume information:
# volprint -pt
PL NAME        VOLUME     KSTATE   STATE   LENGTH   LAYOUT  NCOL/WID  MODE
pl rootvol-01    rootvol    ENABLED  ACTIVE  262144   CONCAT  -         RW
pl rootvol-02    rootvol    ENABLED  ACTIVE  262144   CONCAT  -         RW
pl swapvol-01    swapvol    ENABLED  ACTIVE  333824   CONCAT  -         RW
pl swapvol-02    swapvol    ENABLED  ACTIVE  333824   CONCAT  -         RW
pl vol-dsk0g-01  vol-dsk0g  ENABLED  ACTIVE  1450796  CONCAT  -         RW
pl vol-dsk0g-02  vol-dsk0g  ENABLED  ACTIVE  1450796  CONCAT  -         RW
In this example, there are three volumes: rootvol, swapvol, and vol-dsk0g, which contains the /usr and /var file systems. Each volume has two plexes (listed in the PL NAME column), indicating that the volumes were successfully mirrored.
Confirm that the system recognizes both boot devices:
# consvar -l
auto_action = HALT
boot_dev = dsk0,dsk4
bootdef_dev = dsk0,dsk4
booted_dev = dsk0
.
.
.
If you are using AdvFS, confirm that the domains point to the LSM volumes instead of the original partitions by doing one of the following:
Display a simplified listing of the domains:
# ls -R /etc/fdmns/*
The following information is displayed, showing the volume name (rootdg.volume
) instead of the original
disk partition:
/etc/fdmns/root_domain:
rootdg.rootvol

/etc/fdmns/usr_domain:
rootdg.vol-dsk0g
Display the full domain detail:
Change to the
fdmns
directory:
# cd /etc/fdmns
Display attributes of all AdvFS domains:
# showfdmn *
The following information is displayed, showing the volume name for each AdvFS domain:
               Id              Date Created  LogPgs  Version  Domain Name
3a5e0785.000b567c  Thu Jan 11 14:20:37 2001     512        4  root_domain

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L     524288   339936     35%     on    256    256  /dev/vol/rootdg/rootvol

               Id              Date Created  LogPgs  Version  Domain Name
3a5e078e.000880dd  Thu Jan 11 14:20:46 2001     512        4  usr_domain

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    2879312  1703968     41%     on    256    256  /dev/vol/rootdg/vol-dsk0g

               Id              Date Created  LogPgs  Version  Domain Name
3a5e0790.0005b501  Thu Jan 11 14:20:48 2001     512        4  var_domain

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    2879312  2842160      1%     on    256    256  /dev/vol/rootdg/vol-dsk0h
If you are using UFS, enter:
# mount
/dev/vol/rootdg/rootvol on / type ufs (rw)
/dev/vol/rootdg/vol-dsk0g on /usr type ufs (rw)
/proc on /proc type procfs (rw)
Display the swap volume information:
# swapon -s
The following information is displayed, showing the volume name (/dev/vol/rootdg/volume_name
) instead
of the original disk partition:
Swap partition /dev/vol/rootdg/swapvol (default swap):
    Allocated space:        25600 pages (200MB)
    In-use space:             426 pages (  1%)
    Free space:             25174 pages ( 98%)

Total swap allocation:
    Allocated space:        25600 pages (200.00MB)
    Reserved space:          9015 pages ( 35%)
    In-use space:             426 pages (  1%)
    Available space:        16585 pages ( 64%)
3.4.2 Using LSM Volumes for Cluster Domains
In a TruCluster Server environment, you can use LSM volumes for AdvFS
domains, including the clusterwide root,
/usr
, and
/var
file system domains (cluster_root
,
cluster_usr
, and
cluster_var
), and for cluster
members' swap devices.
You cannot use LSM volumes for the quorum disk, or
for any partitions on members' private boot disks.
Note
Using an LSM volume for the
cluster_root
domain does not help with cluster recovery after a failure and does not create a disaster-tolerant cluster. The cluster does not boot from the cluster_root file system.
LSM provides high availability for data but does not provide overall cluster availability or reduce recovery time after a cluster failure. LSM is not a disaster-tolerant solution.
LSM provides the following methods for placing a cluster's AdvFS domains and swap devices under LSM control:
The
volmigrate
command creates LSM volumes
on the disks that you specify, moves the AdvFS domain to the volumes, and
removes the original disk from the domain and leaves it unused.
The advantage to migrating is that the migration occurs while the domain's filesets are mounted, and no reboot is required. The disadvantage is that the migration process temporarily uses additional disk space while the domain data is copied to the LSM volume.
Note
The
volmigrate
command uses the AdvFS addvol command. You need the AdvFS Utilities license PAK to use the addvol command.
The
volencap
command creates LSM volumes
on the same disks or disk partitions that the AdvFS domain or swap space is
currently using.
The advantage to encapsulating is that no extra disk space is required. The disadvantage is that if the domain is mounted and cannot be unmounted, you need to shut down and reboot the cluster or cluster member for the encapsulation to complete.
Table 3-2
lists when you can use each command.
Table 3-2: LSM Commands for Providing Cluster Redundancy
Domain or Partition | volmigrate Command | volencap Command |
cluster_root, cluster_usr, cluster_var | Yes (Section 3.4.2.1) | No |
Other AdvFS domains (application data) | Yes (Section 4.6.2) | |
Member's swap partitions | No | Yes (Section 3.4.3) |
Member's private boot partitions (rootmemberID_domain#root) | No | No |
3.4.2.1 Migrating AdvFS Domains to LSM Volumes
In a cluster, you can use the
volmigrate
command
to migrate any AdvFS domain to an LSM volume except individual members' root
domains (rootmemberID_domain#root
).
For example, you can migrate the clusterwide root,
/usr
,
and
/var
file system domains to LSM volumes, allowing you
to mirror the volumes for high availability.
The
volmigrate
command operates on AdvFS domain names.
Within this procedure, the clusterwide file systems are referred to by the
default AdvFS domain names
cluster_root
,
cluster_usr
, and
cluster_var
.
The
volmigrate
command is a shell script that calls
several other commands to:
Create an LSM volume for the AdvFS domain on the LSM disk or disks that you specify.
You can specify properties for the LSM volume such as striping and mirroring.
Add the LSM volume to the domain being migrated, with the
AdvFS
addvol
command (requires the AdvFS Utilities license).
Migrate the data from the original disk partition to the LSM volume.
Remove the original disk partition from the domain with the
AdvFS
rmvol
command (requires the AdvFS Utilities license),
and set the disk label partition table entry for that partition to
unused
.
3.4.2.1.1 Disk Space Considerations
If you have limited available disks but want the benefits of mirroring, you can mirror the volumes for the AdvFS domains to the original disk or disks after the migration is complete; however, when you place the original disk under LSM control, its usable space is reduced by 4096 blocks (2 MB) for the LSM private metadata. Therefore, depending on the size and usage of the original disk, you might have to migrate one or more of the AdvFS domains to a volume smaller than the domain:
If one domain uses the whole disk (the
c
partition), migrate the domain to a volume 2 MB smaller than the domain.
If several domains are on the same disk, and if the disk is not at least 2 MB larger than the total size of the domains, migrate one domain to a volume 2 MB smaller than the domain.
Decide which domain to reduce based on disk usage and expected growth.
By default, the
volmigrate
command creates an LSM
volume the same size as the AdvFS domain.
However, you can specify a smaller
volume size, within the restrictions described in
volmigrate
(8).
The volume size you specify must be at least 10 percent larger than
the in-use portion of the domain.
Reduce the volume by only the 2 MB necessary.
Both LSM and AdvFS allow you to add space to an LSM volume or AdvFS domain
later.
3.4.2.1.2 Migrating AdvFS Domains
The general syntax for migrating an AdvFS domain to an LSM volume is as follows:
volmigrate [-g disk_group] [-m num_mirrors] [-s num_stripes] \
  domain_name disk_media_name...
You can run the
volmigrate
command from any cluster
member.
Depending on the size of the domain, the migration might take several
minutes to complete.
Unless you see an error message, the migration is progressing.
If you plan to reuse the original disk or disk partition for the mirror, migrate the domain to an unmirrored volume (optionally specifying the volume size, as discussed in Section 3.4.2.1.1), place the original disk under LSM control, and then mirror the volume in a separate step.
If possible, do not use the same disk to support multiple volumes, because this increases the number of volumes at risk if the disk fails. Mirroring the volumes and configuring hot-spare disks reduces this risk. If you have available disks, consider migrating each AdvFS domain to its own disk, use the original disk to mirror the domain's volume, and use other disks to mirror the other domains' volumes.
As you migrate a domain, you can specify:
The disk group in which to create the volume.
The default
is
rootdg
.
The volume for the
cluster_root
domain must belong
to the
rootdg
disk group.
The name of the volume.
The default is the name of the domain with the suffix
vol
.
Stripe and mirror attributes.
LSM does not add a DRL to the volume for the
cluster_root
domain, if mirrored.
This volume does not need to be resynchronized after
a system failure, so does not need a DRL.
All other domain volumes, if mirrored,
will have a DRL enabled by default.
The size of the volume.
By default, the volume will be the same size as the domain.
If necessary,
you can create a volume smaller (or larger) than that, within the restrictions
described in
volmigrate
(8).
The following procedure migrates the
cluster_root
,
cluster_usr
, and
cluster_var
domains but can
apply to any other eligible AdvFS domain.
To migrate AdvFS domains to LSM volumes:
Display the attributes (specifically, sizes and names) of the AdvFS domains:
# cd /etc/fdmns
# showfdmn *
Information similar to the following is displayed.
This example shows
only the
cluster_root
,
cluster_usr
and
cluster_var
domains, for simplicity.
               Id              Date Created  LogPgs  Version  Domain Name
3c7d2bd9.0001fe4b  Wed Feb 27 13:56:25 2002     512        4  cluster_root

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L     401408   211072     47%     on    256    256  /dev/disk/dsk3b

               Id              Date Created  LogPgs  Version  Domain Name
3c7d2bdb.0004b3c9  Wed Feb 27 13:56:27 2002     512        4  cluster_usr

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    1787904   204480     89%     on    256    256  /dev/disk/dsk3g

               Id              Date Created  LogPgs  Version  Domain Name
3c7d2bdd.0008401f  Wed Feb 27 13:56:29 2002     512        4  cluster_var

  Vol   512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    1790096  1678608      6%     on    256    256  /dev/disk/dsk3h
The
cluster_root
domain is 196 MB (401408 blocks).
Each block on the Tru64 UNIX system is 512 bytes.
The total for all three
domains is 3979408 blocks (approximately 2 GB).
If you plan to migrate all
three domains to the same LSM disk, the disk must have that much free space.
Display the disks in the
rootdg
disk group
to find simple or sliced disks:
# voldisk -g rootdg list
DEVICE    TYPE      DISK     GROUP    STATUS
dsk2      sliced    dsk2     rootdg   online
dsk8      sliced    dsk8     rootdg   online
dsk9      sliced    dsk9     rootdg   online
dsk10     sliced    dsk10    rootdg   online
dsk11     sliced    dsk11    rootdg   online
dsk12     sliced    dsk12    rootdg   online
dsk13     sliced    dsk13    rootdg   online
dsk14     sliced    dsk14    rootdg   online
dsk20     sliced    dsk20    rootdg   online
dsk21     sliced    dsk21    rootdg   online
dsk22     sliced    dsk22    rootdg   online
dsk23     sliced    dsk23    rootdg   online
dsk24     sliced    dsk24    rootdg   online
dsk25     sliced    dsk25    rootdg   online
Display the free space available in the
rootdg
disk group:
# voldg -g rootdg free
DISK     DEVICE   TAG      OFFSET   LENGTH    FLAGS
dsk2     dsk2     dsk2     0        4106368   -
dsk8     dsk8     dsk8     0        4106368   -
dsk9     dsk9     dsk9     0        4106368   -
dsk10    dsk10    dsk10    0        4106368   -
dsk11    dsk11    dsk11    0        4106368   -
dsk12    dsk12    dsk12    0        35552277  -
dsk13    dsk13    dsk13    0        35552277  -
dsk14    dsk14    dsk14    0        35552277  -
dsk20    dsk20    dsk20    0        35552277  -
dsk21    dsk21    dsk21    0        35552277  -
dsk22    dsk22    dsk22    0        35552277  -
dsk23    dsk23    dsk23    0        35552277  -
dsk24    dsk24    dsk24    0        35552277  -
dsk25    dsk25    dsk25    0        35552277  -
Choose sliced or simple disks with enough free space to create a volume with the characteristics you want (such as mirrored or striped).
If possible, choose disks with an offset of 0; a nonzero offset indicates that part of the disk is already in use by another volume.
Confirm that each disk you want to use is accessible by all cluster members:
# hwmgr view devices -cluster
Information similar to the following is displayed (edited for brevity):
HWID:  Device Name        Mfg   Model      Hostname  Location
--------------------------------------------------------------------------
.
.
.
68:  /dev/disk/dsk3c    DEC   RZ28D    (C) DEC   moe     bus-2-targ-1-lun-0
68:  /dev/disk/dsk3c    DEC   RZ28D    (C) DEC   larry   bus-2-targ-1-lun-0
68:  /dev/disk/dsk3c    DEC   RZ28D    (C) DEC   curly   bus-2-targ-1-lun-0
.
.
.
73:  /dev/disk/dsk8c    DEC   RZ28D    (C) DEC   moe     bus-3-targ-10-lun-0
73:  /dev/disk/dsk8c    DEC   RZ28D    (C) DEC   larry   bus-3-targ-10-lun-0
73:  /dev/disk/dsk8c    DEC   RZ28D    (C) DEC   curly   bus-3-targ-10-lun-0
74:  /dev/disk/dsk9c    DEC   RZ28D    (C) DEC   moe     bus-3-targ-11-lun-0
74:  /dev/disk/dsk9c    DEC   RZ28D    (C) DEC   larry   bus-3-targ-11-lun-0
74:  /dev/disk/dsk9c    DEC   RZ28D    (C) DEC   curly   bus-3-targ-11-lun-0
75:  /dev/disk/dsk10c   DEC   RZ28D    (C) DEC   moe     bus-3-targ-12-lun-0
75:  /dev/disk/dsk10c   DEC   RZ28D    (C) DEC   larry   bus-3-targ-12-lun-0
75:  /dev/disk/dsk10c   DEC   RZ28D    (C) DEC   curly   bus-3-targ-12-lun-0
76:  /dev/disk/dsk11c   DEC   RZ28D    (C) DEC   moe     bus-3-targ-13-lun-0
76:  /dev/disk/dsk11c   DEC   RZ28D    (C) DEC   larry   bus-3-targ-13-lun-0
76:  /dev/disk/dsk11c   DEC   RZ28D    (C) DEC   curly   bus-3-targ-13-lun-0
77:  /dev/disk/dsk12c   DEC   HSG80              moe     bus-4-targ-0-lun-1
77:  /dev/disk/dsk12c   DEC   HSG80              larry   bus-4-targ-0-lun-1
77:  /dev/disk/dsk12c   DEC   HSG80              curly   bus-4-targ-0-lun-1
78:  /dev/disk/dsk13c   DEC   HSG80              moe     bus-4-targ-0-lun-2
78:  /dev/disk/dsk13c   DEC   HSG80              larry   bus-4-targ-0-lun-2
78:  /dev/disk/dsk13c   DEC   HSG80              curly   bus-4-targ-0-lun-2
79:  /dev/disk/dsk14c   DEC   HSG80              moe     bus-4-targ-0-lun-3
79:  /dev/disk/dsk14c   DEC   HSG80              larry   bus-4-targ-0-lun-3
79:  /dev/disk/dsk14c   DEC   HSG80              curly   bus-4-targ-0-lun-3
.
.
.
85:  /dev/disk/dsk20c   DEC   HSG80              moe     bus-4-targ-0-lun-9
85:  /dev/disk/dsk20c   DEC   HSG80              larry   bus-4-targ-0-lun-9
85:  /dev/disk/dsk20c   DEC   HSG80              curly   bus-4-targ-0-lun-9
86:  /dev/disk/dsk21c   DEC   HSG80              moe     bus-4-targ-0-lun-10
86:  /dev/disk/dsk21c   DEC   HSG80              larry   bus-4-targ-0-lun-10
86:  /dev/disk/dsk21c   DEC   HSG80              curly   bus-4-targ-0-lun-10
87:  /dev/disk/dsk22c   DEC   HSG80              moe     bus-4-targ-0-lun-11
87:  /dev/disk/dsk22c   DEC   HSG80              larry   bus-4-targ-0-lun-11
87:  /dev/disk/dsk22c   DEC   HSG80              curly   bus-4-targ-0-lun-11
88:  /dev/disk/dsk23c   DEC   HSG80              moe     bus-4-targ-0-lun-12
88:  /dev/disk/dsk23c   DEC   HSG80              larry   bus-4-targ-0-lun-12
88:  /dev/disk/dsk23c   DEC   HSG80              curly   bus-4-targ-0-lun-12
89:  /dev/disk/dsk24c   DEC   HSG80              moe     bus-4-targ-0-lun-13
89:  /dev/disk/dsk24c   DEC   HSG80              larry   bus-4-targ-0-lun-13
89:  /dev/disk/dsk24c   DEC   HSG80              curly   bus-4-targ-0-lun-13
90:  /dev/disk/dsk25c   DEC   HSG80              moe     bus-4-targ-0-lun-14
90:  /dev/disk/dsk25c   DEC   HSG80              larry   bus-4-targ-0-lun-14
90:  /dev/disk/dsk25c   DEC   HSG80              curly   bus-4-targ-0-lun-14
.
.
.
In this example:
There are three cluster members:
moe
,
larry
, and
curly
.
Disks
dsk3
through
dsk25
are shared by all members; each disk appears three times in the output, with
a different host name in each line.
Of these, disks
dsk8
through
dsk14
and
dsk20
through
dsk25
are under LSM control as members of
rootdg
(as shown in
the output from the
voldisk list
command in step 2).
All
are candidate disks to use for the migration.
Migrate the domains, specifying options such as the number of mirrors and the number of stripe columns.
Note
When you create a mirrored volume with the
volmigrate
command, LSM automatically adds a DRL (except for cluster_rootvol). However, you cannot specify a separate disk for the DRL. If there is enough space, LSM puts the DRL on one of the disks you specified. (If there is not enough space for the mirrors and the DRL, the command fails with a message about insufficient space.) Even if you specify more disks than needed to create the mirrors, LSM does not put the DRL on the extra disk.
To improve this configuration, after you migrate the domain, you can immediately delete the volume's log and then add a new one, as shown in the following procedure.
To migrate a domain to an LSM volume with the default properties (concatenated, no mirror):
# volmigrate domain disk...
For example:
# volmigrate cluster_root dsk10
You can optionally mirror the volume to the original disk later.
To migrate a domain to an LSM volume of a specific size (for example, smaller than the domain by 2 MB):
# volmigrate -l sectors domain disk...
For example, to migrate the
cluster_var
domain, which
is 17581584 sectors (a little over 8 GB), to a volume that is 2 MB smaller
(17577488 sectors):
# volmigrate -l 17577488 cluster_var dsk11
To migrate a domain to a volume striped over four disks:
# volmigrate -s 4 domain disk disk disk disk
For example:
# volmigrate -s 4 cluster_root dsk10 dsk11 dsk12 dsk13
To migrate a domain to a mirrored volume on two disks:
# volmigrate -m 2 domain disk disk
For example:
# volmigrate -m 2 cluster_usr dsk12 dsk13
To migrate a domain to a striped, mirrored volume on six disks (each mirror will be striped over three disks):
# volmigrate -m 2 -s 3 domain \
  disk disk disk disk disk disk
For example:
# volmigrate -m 2 -s 3 cluster_usr \
  dsk10 dsk11 dsk12 dsk13 dsk14 dsk15
If you migrated the domain to a mirrored volume and want to reconfigure the volume so that the DRL plex is on a disk not used by one of the data plexes:
Display the volume properties to identify the DRL plex:
# volprint volume
For example:
# volprint cluster_usrvol
Disk group: rootdg

TY  NAME               ASSOC              KSTATE   LENGTH   ...
v   cluster_usrvol     fsgen              ENABLED  1787904  ...
pl  cluster_usrvol-01  cluster_usrvol     ENABLED  1787904  ...
sd  dsk22-01           cluster_usrvol-01  ENABLED  1787904  ...
pl  cluster_usrvol-02  cluster_usrvol     ENABLED  1787904  ...
sd  dsk23-01           cluster_usrvol-02  ENABLED  1787904  ...
pl  cluster_usrvol-03  cluster_usrvol     ENABLED  LOGONLY  ...
sd  dsk22-02           cluster_usrvol-03  ENABLED  65       ...
Dissociate the DRL plex:
# volplex -o rm dis cluster_usrvol-03
Add a new DRL plex, specifying a disk not already used in
the same volume (in this case, not
dsk22
or
dsk23
):
# volassist addlog cluster_usrvol disk
3.4.2.1.3 Mirroring Migrated Domain Volumes to the Original Disk (Optional)
The attributes you specify for a volume with the
volmigrate
command (such as the length and whether the volume is striped)
are applied to the mirror when you mirror that volume.
To mirror a volume that is striped over several disks, you must specify the same number of additional disks for the mirror. For example, if the volume is striped over four disks, you need four additional disks to create the mirror, one of which can be the original disk.
To mirror a volume to the original disk:
Confirm that all partitions on the original disk are unused
(in this example,
dsk6
):
# disklabel -r dsk6
Add the disk to LSM:
# voldisksetup -i dsk6
Add the disk to the
rootdg
disk group.
You do not need to specify a disk group because the
rootdg
disk group is the default:
# voldg adddisk dsk6
Verify that there is enough space in the public region of the LSM disk to mirror the volume:
# voldisk list dsk6 | grep public
public:    slice=6 offset=16 len=8375968
Display the cluster volume names to ensure you enter the correct names in the next step:
# volprint -vt | grep cluster
v  cluster_rootvol  fsgen  ENABLED  ACTIVE  557936   SELECT  -
v  cluster_usrvol   fsgen  ENABLED  ACTIVE  1684224  SELECT  -
v  cluster_varvol   fsgen  ENABLED  ACTIVE  1667024  SELECT  -
Mirror the volumes. You must mirror each volume separately:
# volassist mirror volume disk
To mirror all the volumes to the same LSM disk:
# volassist mirror cluster_rootvol dsk6
# volassist mirror cluster_usrvol dsk6
# volassist mirror cluster_varvol dsk6
To mirror each volume to its own LSM disk:
# volassist mirror cluster_rootvol dsk4
# volassist mirror cluster_usrvol dsk8
# volassist mirror cluster_varvol dsk11
If the volume is striped over several disks, specify the same number of disks for the mirror plex.
For example, to mirror a volume that is striped over four disks:
# volassist mirror cluster_rootvol dsk6 dsk11 dsk12 dsk13
For all volumes except
cluster_rootvol
,
add a DRL log plex to the volume:
# volassist addlog volume
3.4.3 Encapsulating Cluster Members' Swap Devices
You can use LSM for the swap devices of cluster members whether or not
the clusterwide root,
/usr
, and
/var
file system domains also use LSM volumes.
Note
A cluster member must be running for you to encapsulate its swap devices.
In one command, you can encapsulate:
All the swap devices for one member at once
All the swap devices for one member plus swap devices for other cluster members
Only the swap devices you specify for one or more members
You can set up the encapsulation for several members at once with the
volencap
command, but you must complete the encapsulation procedure
on each member in turn with the
volreconfig
command.
To encapsulate all the swap devices for only the current member:
# volencap swap
# volreconfig
The
swap
operand is a member-specific shortcut for
specifying all the swap devices in the cluster member's private
/etc/sysconfigtab
file.
After the cluster member reboots, its swap devices use separate LSM volumes.
To encapsulate the swap devices for multiple cluster members:
Identify the cluster members:
# clu_get_info
Display the swap devices for each cluster member using one of the following methods:
On each member, enter:
# swapon -s
On one member, display the swap devices listed in each cluster
member's
sysconfigtab
file:
# grep swap \
  /cluster/members/membern/boot_partition/etc/sysconfigtab
Do one of the following:
To encapsulate all the swap devices for the current cluster member plus any number of other cluster members' swap devices:
# volencap swap dsknp dsknp...
To encapsulate only specific swap devices for the current cluster member plus any number of other cluster members' swap devices:
# volencap dsknp dsknp...
Complete the encapsulation on each cluster member separately:
# volreconfig
For more information on encapsulating swap devices in a cluster, see
volencap
(8).
3.5 Enabling the Automatic Data Relocation (Hot-Sparing) Feature
To provide higher availability for LSM volumes, you can enable the hot-sparing feature and configure one or more hot-spare disks per disk group to enable LSM to automatically relocate data from a failing disk to a hot-spare disk.
Hot-sparing automatically attempts to relocate redundant data (in mirrored or RAID5 volumes) and performs parity resynchronization for RAID5 volumes. For a failing RAID5 log plex, relocation occurs only if the log plex is mirrored; in that case, the hot-sparing feature also resynchronizes the RAID5 log plexes.
A hot-sparing operation:
Sends mail to the root user before and after the operation. See Section 3.5.2 for examples of mail messages.
Relocates the LSM objects from the failed disk to a hot-spare disk or to free disk space in the disk group, except if redundancy cannot be preserved. For example, LSM will not relocate data to a disk that contains a mirror of the data.
Initiates parity resynchronization for an affected RAID5 plex.
Updates the configuration database with the relocation information.
Ensures that the failed disk space is not recycled as free disk space.
The hot-sparing feature is part of the
volwatch
daemon.
The
volwatch
daemon has two modes:
Mail-only mode, which is the default. This setting notifies you of a problem but does not perform hot-sparing. You can reset the daemon to this mode with the -m option. Without hot-sparing enabled, you must investigate and resolve problems manually. For more information, see Section 6.4.
Mail-and-spare mode, which you set with the -s option.
You can specify mail addresses with either option.
Alternatively, use
the
rcmgr
command to set the
VOLWATCH_USERS
variable in the
/etc/rc.config.common
file.
For more information,
see
rcmgr
(8).
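For example, to record the mail recipients persistently (the recipient list is a placeholder; the -c option, which targets the common /etc/rc.config.common file, is described in rcmgr(8)):
# rcmgr -c set VOLWATCH_USERS "root adminuser"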
To enable the hot-sparing feature:
# volwatch -s [mail-address...]
To return the
volwatch
daemon to mail-only mode:
# volwatch -m [mail-address...]
Note
Only one
volwatch
daemon can run on a system or cluster member at any time. The daemon's setting applies to the entire LSM configuration; you cannot specify some disk groups to use hot-sparing but not others.In a cluster, entering the
volwatch -s
command on any member enables hot-sparing across the entire cluster, and it remains in effect until you disable it by entering volwatch -m
. The setting is persistent across cluster or member reboots.
3.5.1 Configuring and Deconfiguring Hot-Spare Disks
Configure at least one hot-spare disk in each disk group, but ideally, configure one hot-spare disk for every disk used in a mirrored or RAID5 (redundant) volume in the disk group. Each hot-spare disk should be large enough to replace any disk in a redundant volume in the disk group, because there is no way to assign specific hot-spare disks to specific volumes. After a hot-spare operation occurs, you can add more disks to the disk group and configure them as replacement hot-spare disks.
You can configure
sliced
or
simple
disks as hot-spare disks, but
sliced
disks are preferred.
If you use
simple
disks, try to use only those that you
initialized on the
c
partition of the disk (the entire
disk).
A
simple
disk that encompasses the entire disk has
only one private region.
This makes the maximum amount of disk space available
to LSM, providing the most flexibility in relocating redundant data off a
failing disk in a hot-spare operation.
LSM does not use a hot-spare disk for normal data storage (creating new volumes) unless you specify otherwise.
To configure a hot-spare disk:
# voledit [-g disk_group] set spare=on disk
For example, to configure
dsk5
as a hot-spare disk
in the
rootdg
disk group:
# voledit set spare=on dsk5
To deconfigure a hot-spare disk:
# voledit [-g disk_group] set spare=off disk
For example, to deconfigure
dsk5
as a hot-spare disk
in the
rootdg
disk group:
# voledit set spare=off dsk5
To display the hot-spare disks in a disk group, use the
voldisk
list
command.
In the
STATUS
column of the output,
all available hot-spare disks show a status of
online spare
.
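For example (the disk name and output layout are illustrative):
# voldisk -g rootdg list | grep spare
dsk5      sliced    dsk5     rootdg   online spare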
3.5.2 Examples of Mail Notification for Exception Events
During a hot-sparing operation, LSM sends mail to the
root
account (and other specified accounts) to notify the recipients
about the failure and identify the affected LSM objects.
Afterward, LSM sends
another mail message if any action is taken.
There is a 15-second delay before the event is analyzed and the message is sent. This delay allows a group of related events to be collected and reported in a single mail message.
Example 3-1 shows a sample mail notification sent when LSM detects an exception event.
Example 3-2 shows the mail message sent if a disk completely fails.
Example 3-3 shows the mail message sent if a disk partially fails.
Example 3-4 shows the mail message sent if data relocation is successful.
Example 3-5 shows the mail message sent if relocation cannot occur, because there is no hot-spare disk or free disk space.
Example 3-6 shows the mail message sent if data relocation fails.
Example 3-7 shows the mail message sent if volumes using mirror plexes are made unusable due to disk failure.
Example 3-8 shows the mail message sent if volumes using RAID5 plexes are made unusable due to disk failure.
Example 3-1: Sample Mail Notification
Failures have been detected by the Logical Storage Manager:

failed disks:
  disk  [1]
.
.
.
failed plexes:
  plex  [2]
.
.
.
failed log plexes:
  plex  [3]
.
.
.
failing disks:
  disk  [4]
.
.
.
failed subdisks:
  subdisk  [5]
.
.
.
The Logical Storage Manager will attempt to find spare disks, relocate failed subdisks and then recover the data in the failed plexes.
The following describes the sections of the mail notification:
The
disk
under
failed disks
specifies a disk that appears to have failed completely.
The
plex
under
failed plexes
shows a plex that is detached due to I/O failures to subdisks the
plex contains.
The
plex
under
failed log
plexes
indicates a RAID5 or dirty region log (DRL)
plex that has experienced failures.
The
disk
under
failing
disks
indicates a partial disk failure or a disk that is in the
process of failing.
When a disk has failed completely, the same
disk
appears under both
failed disks
and
failing disks
.
The
subdisk
under
failed
subdisks
indicates a subdisk in a RAID5 volume
that has been detached due to I/O errors.
Example 3-2
shows that a disk named
disk02
was failing and was then detached by a failure, and that plexes named
home-02
,
src-02
, and
mkting-01
were also detached (probably due to the disk failure).
Example 3-2: Complete Disk Failure Mail Notification
To: root
Subject: Logical Storage Manager failures on hostname

Failures have been detected by the Logical Storage Manager:

failed disks:
  disk02

failed plexes:
  home-02
  src-02
  mkting-01

failing disks:
  disk02
Example 3-3: Partial Disk Failure Mail Notification
To: root
Subject: Logical Storage Manager failures on hostname

Failures have been detected by the Logical Storage Manager:

failed disks:
  disk02

failed plexes:
  home-02
  src-02
Example 3-4: Successful Data Relocation Mail Notification
Volume home Subdisk home-02-02 relocated to dsk12-02, but not yet recovered.
If the data recovery is successful, the following message is sent:
Recovery complete for volume home in disk group dg03.
If the data recovery is unsuccessful, the following message is sent:
Failure recovering home in disk group dg03.
Example 3-5: No Hot-Spare or Free Disk Space Mail Notification
Relocation was not successful for subdisks on disk dsk3 in volume
vol_02 in disk group rootdg. No replacement was made and the disk is
still unusable.

The following volumes have storage on dsk3:

vol_02
.
.
.
These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID5 volumes with storage on the failed disk may become unusable in the face of further failures.
Example 3-6: Data Relocation Failure Mail Notification
Relocation was not successful for subdisks on disk dsk14 in volume data02 in disk group data_dg. No replacement was made and the disk is still unusable.
.
.
.
The actual mail notification includes an error message indicating why the data relocation failed.
Example 3-7: Unusable Volume Mail Notification
The following volumes:

finance
.
.
.
have data on dsk23 but have no other usable mirrors on other disks. These volumes are now unusable and the data on them is unavailable. These volumes must have their data restored.
Example 3-8: Unusable RAID 5 Volume Mail Notification
The following RAID5 volumes:

vol_query
.
.
.
have storage on dsk43 and have experienced other failures. These RAID5 volumes are now unusable and data on them is unavailable. These RAID5 volumes must have their data restored.
3.6 LSM Files, Directories, Device Drivers, and Daemons
After you install and initialize LSM, several new files, directories,
device drivers, and daemons are present on the system.
These are described
in the following sections.
3.6.1 LSM Files
The
/dev
directory contains the device special files
(Table 3-3) that LSM uses to communicate with the kernel.
Table 3-3: LSM Device Special Files
Device Special File | Function |
/dev/volconfig | Allows the vold daemon to make configuration requests to the kernel |
/dev/volevent | Used by the voliotrace command to view and collect events |
/dev/volinfo | Used by the volprint command to collect LSM object status information |
/dev/voliod | Provides an interface between the volume extended I/O daemon (voliod) and the kernel |
The
/etc/vol
directory contains the
volboot
file and the subdirectories (Table 3-4) for
LSM use.
Table 3-4: LSM /etc/vol Subdirectories
Directory | Function |
reconfig.d | Provides temporary storage during encapsulation of existing file systems. Instructions for the encapsulation process are created here and used during the reconfiguration. |
tempdb | Used by the volume configuration daemon (vold) while creating the configuration database during startup and while updating configuration information. |
vold_diag | Creates a socket portal for diagnostic commands to communicate with the vold daemon. |
vold_request | Provides a socket portal for LSM commands to communicate with the vold daemon. |
The
/dev
directory contains the subdirectories (Table 3-5)
for volume block and character devices.
Table 3-5: LSM Block and Character Device Subdirectories
Directory | Contains |
/dev/vol | Block device interfaces for LSM volumes |
/dev/rvol | Character device interfaces for LSM volumes |
There are two LSM device drivers:
volspec
The volume special device
driver communicates with the LSM device special files.
This is not a loadable
driver; it must be present at boot time.
voldev
The volume device driver
communicates with LSM volumes and provides an interface between LSM and the
physical disks.
There are two LSM daemons:
vold
The Volume Configuration Daemon
maintains configurations of disks and disk groups.
It also:
Takes requests from other utilities for configuration changes
Communicates change requests to the kernel
Modifies configuration information stored on disk
Initializes LSM when the system starts
voliod
The Volume Extended I/O Daemon performs the functions
of a utility and a daemon.
As a utility,
voliod
:
Returns the number of running volume I/O daemons
Starts more daemons when necessary
Removes some daemons from service when they are no longer needed
As a daemon,
voliod
:
Schedules I/O requests that must be retried
Schedules writes that require logging