This chapter describes how to manage LSM objects using LSM commands.
For more information on an LSM command, see the reference page corresponding
to its name. For example, for more information on the volassist command, enter:
# man volassist
The following sections describe how to use LSM commands to manage LSM
disks.
5.1.1 Displaying LSM Disk Information
To display detailed information for an LSM disk:
# voldisk list disk
The following example contains information for an LSM disk named dsk12:
Device:    dsk12
devicetag: dsk12
type:      sliced
hostid:    hostname.com
disk:      name=dsk12 id=1012859934.2400.potamus.zk3.dec.com
group:     name=dg2 id=1012859945.2405.potamus.zk3.dec.com
flags:     online ready autoimport imported
pubpaths:  block=/dev/disk/dsk12g char=/dev/rdisk/dsk12g
privpaths: block=/dev/disk/dsk12h char=/dev/rdisk/dsk12h
version:   2.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=6 offset=16 len=2046748
private:   slice=7 offset=0 len=4096
update:    time=1012859947 seqno=0.1
headers:   0 248
configs:   count=1 len=2993
logs:      count=1 len=453
Defined regions:
 config   priv    17-   247[   231]: copy=01 offset=000000 enabled
 config   priv   249-  3010[  2762]: copy=01 offset=000231 enabled
 log      priv  3011-  3463[   453]: copy=01 offset=000000 enabled
5.1.2 Renaming LSM Disks
When you initialize an LSM disk, you can assign it a disk media name or use the default disk media name, which is the same as the disk access name.
Caution
Each disk in a disk group must have a unique name. To avoid confusion, you might want to ensure that no two disk groups contain disks with the same name. For example, both the rootdg disk group and another disk group might contain disks with a disk media name of disk03. Because most LSM commands operate on the rootdg disk group unless you specify otherwise, you can inadvertently perform operations on the wrong disk if multiple disk groups contain identically named disks.
The voldisk list command displays a list of all the LSM disks in all disk groups on the system.
To rename an LSM disk:
# voledit rename old_disk_media_name new_disk_media_name
For example, to rename an LSM disk from disk03 to disk01:
# voledit rename disk03 disk01
5.1.3 Placing LSM Disks Off Line
You can place an LSM disk off line to:
Prevent LSM from accessing it
Enable you to move the disk to a different physical location and have the disk retain its LSM identity
Placing a disk off line closes its device file. If a disk is in use, you cannot place it off line.
To place an LSM disk off line:
Remove the LSM disk from its disk group:
# voldg [-g disk_group] rmdisk disk
Place the LSM disk off line:
# voldisk offline disk
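For example, to remove a hypothetical disk dsk5 from the dg1 disk group and place it off line:
# voldg -g dg1 rmdisk dsk5
# voldisk offline dsk5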
5.1.4 Placing LSM Disks On Line
To restore access to an LSM disk that you placed off line, place it on line. The LSM disk is placed in the free disk pool and is accessible to LSM again. After placing an LSM disk on line, you must add it to a disk group before an LSM volume can use it. If the disk belonged to a disk group previously, you can add it to the same disk group.
To place an LSM disk on line:
# voldisk online disk
For information on adding an LSM disk to a disk group, see
Section 5.2.2.
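For example, to place the hypothetical disk dsk5 back on line and return it to the dg1 disk group:
# voldisk online dsk5
# voldg -g dg1 adddisk dsk5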
5.1.5 Moving Data Off an LSM Disk
You can move (evacuate) LSM volume data to other LSM disks in the same disk group if there is sufficient free space. If you do not specify a target LSM disk, LSM uses any available LSM disk in the disk group that has sufficient free space.
You might want to move data off an LSM disk in the following circumstances:
To move data off a nopriv disk (created when you encapsulate a disk, disk partition, or AdvFS domain) to sliced or simple disks in the disk group. This is recommended, because nopriv disks are more complex to manage and are intended to be used only temporarily, as a way to place existing data under LSM control. The exception: nopriv disks created by encapsulating the boot disk partitions on a standalone system.
For information on verifying there is sufficient free space in the disk group to support the move, see Section 5.2.1.
To move LSM objects relocated by a hot-sparing operation. See Section 5.1.6 for specific information.
To redistribute one or more volumes that use space on the same disk, if this is causing contention and slow performance. You can determine this by using the volstat command (Section 6.1.2).
To move data off a disk, use one of the following commands:
The volassist move command, which moves a specific volume off a disk, but leaves objects from other volumes on the disk.
The volevac command, which moves all LSM objects off a particular disk (for example, multiple LSM volumes or logs).
Note
Do not move the contents of an LSM disk to another LSM disk that contains data from the same volume. The resulting layout might not preserve redundancy for volumes that use mirror plexes or a RAID5 plex.
To move all data (for example, multiple LSM volumes) off an LSM disk:
# volevac [-g disk_group] source_disk target_disk [target_disk...]
For example:
To move all the data from LSM disk dsk8 to dsk9 (in the rootdg disk group):
# volevac dsk8 dsk9
To move data from a nopriv disk to any available sliced or simple disk:
# volevac dsk24c [disk...]
If you specify the target disks, make sure they have sufficient free space.
To move a volume off an LSM disk:
# volassist [-g disk_group] move volume \!source_disk [target_disk...]
For example, suppose three volumes in rootdg use space on dsk1:
# volprint -s | grep dsk1
sd dsk1-04  rootvol-04  ENABLED  65  FPA  -  -  -
sd dsk1-01  vol_1-03    ENABLED  65  LOG  -  -  -
sd dsk1-02  vol_2-03    ENABLED  65  LOG  -  -  -
.
.
.
The following commands move volumes vol_1 and vol_2 off dsk1 (using the quoting convention for the C shell to correctly interpret the ! symbol):
# volassist move vol_1 \!dsk1
# volassist move vol_2 \!dsk1
The following command confirms that the only volume using dsk1 after the operation is rootvol:
# volprint | grep dsk1-
sd dsk1-04 rootvol-04 ENABLED 65 FPA - - -
The following command displays the disks now used in volumes vol_1 and vol_2:
# volprint vol_1 vol_2
Disk group: rootdg

TY NAME      ASSOC     KSTATE    LENGTH   PLOFFS  STATE    TUTIL0  PUTIL0
v  vol_1     fsgen     ENABLED   262144   -       ACTIVE   -       -
pl vol_1-01  vol_1     ENABLED   262144   -       ACTIVE   -       -
sd dsk0-01   vol_1-01  ENABLED   262144   0       -        -       -
pl vol_1-03  vol_1     ENABLED   LOGONLY  -       ACTIVE   -       -
sd dsk4-01   vol_1-03  DETACHED  65       LOG     RECOVER  -       -
v  vol_2     fsgen     ENABLED   262144   -       ACTIVE   -       -
pl vol_2-01  vol_2     ENABLED   262144   -       ACTIVE   -       -
sd dsk3-01   vol_2-01  ENABLED   262144   0       -        -       -
pl vol_2-03  vol_2     ENABLED   LOGONLY  -       ACTIVE   -       -
sd dsk6-01   vol_2-03  ENABLED   65       LOG     -        -       -
5.1.6 Moving LSM Objects Relocated by Hot-Sparing
When LSM objects are moved from a failing disk to a hot-spare disk by the hot-sparing feature, their new locations might not provide the same performance or have the same data layout that existed before. After hot-sparing occurs, you might want to move the relocated LSM objects to a different disk to improve performance, to keep the hot-spare disk space free for future hot-sparing needs, or to restore the LSM configuration to its previous state.
Note
The following procedure assumes you have initialized a new disk to replace the hot-spare disk. For more information on adding disks for LSM use, see Section 4.1. For more information on replacing a failed disk, see Section 6.4.5.
In the following procedure, the hot-spare disk to which LSM moved the data is called the relocation disk. The disk you choose to move the data to is called the target disk. Use the disk media name of both the relocation disk and the target disk.
To move LSM objects that were relocated by hot-sparing:
Note the characteristics of the LSM objects before they were relocated.
This information is available from the mail notification about the failure that the volwatch daemon sent to the root account. For example, look for a message similar to the following:
To: root
Subject: Logical Storage Manager failures on host teal

Attempting to relocate subdisk disk02-03 from plex home-02.
Dev_offset 0 length 1164 dm_name disk02 da_name dsk2.
The available plex home-01 will be used to recover the data.
Note the new location of the relocated LSM object (the relocation disk).
This information is available from the mail notification about the failure that the volwatch daemon sent to the root account. For example, look for a message similar to the following:
To: root
Subject: Attempting LSM relocation on host teal

Volume home Subdisk disk02-03 relocated to disk05-01,
but not yet recovered.
Find a suitable target disk in the same disk group, and ensure that the target disk is not already in use by the same volume.
For example, as shown in the sample mail message in the previous step, do not attempt to relocate the data to a disk in use by the volume home. For more information on finding unused space in a disk group, see Section 5.2.1.
Move the objects from the relocation disk to the target disk:
# volevac [-g disk_group] dm_relocation_disk dm_target_disk
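For example, to move the relocated objects from a hypothetical relocation disk disk05 to a target disk disk08 in the dg1 disk group:
# volevac -g dg1 disk05 disk08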
5.1.7 Removing Disks from LSM Control
You can remove a disk from LSM control if you no longer need it.
For information on removing an LSM disk from a disk group, see Section 5.2.3. For information on deporting a disk group, see Section 5.2.4.
To remove an LSM disk from LSM control:
Display the disk media name and disk access name of the disk:
# voldisk list
Information similar to the following is displayed (edited for brevity):
DEVICE TYPE DISK GROUP STATUS
.
.
.
dsk25   sliced   -         -        unknown
dsk26   sliced   newdisk   rootdg   online
dsk27   sliced   dsk27     rootdg   online
Remove the disk media name from the disk group:
# voldg [-g disk_group] rmdisk disk
For example, to remove dsk26, which has a disk media name of newdisk, from the rootdg disk group:
# voldg rmdisk newdisk
When you remove an LSM disk from a disk group, it no longer has a disk media name. The disk keeps its disk access name until you remove the disk from LSM control. The voldisk list command shows that dsk26 does not belong to any disk group, and it has a status of online, indicating it is under LSM control:
DEVICE TYPE DISK GROUP STATUS
.
.
.
dsk25   sliced   -       -        unknown
dsk26   sliced   -       -        online
dsk27   sliced   dsk27   rootdg   online
Remove the disk access name from LSM control:
# voldisk rm disk
For example, to remove disk dsk26:
# voldisk rm dsk26
All the disk partition labels are changed to unused. To use the disk for other purposes, reinitialize it using the disklabel command. For more information, see disklabel(8).
5.2 Managing Disk Groups
The following sections describe how to use LSM commands to manage disk
groups.
5.2.1 Displaying Disk Group Information
To display a list of all LSM disks and the disk group to which each belongs, and disks that LSM recognizes but which are not under LSM control:
# voldisk list
DEVICE   TYPE     DISK    GROUP    STATUS
dsk0     sliced   -       -        unknown
dsk1     sliced   -       -        unknown
dsk2     sliced   dsk2    rootdg   online
dsk3     sliced   dsk3    rootdg   online
dsk4     sliced   dsk4    rootdg   online
dsk5     sliced   dsk5    rootdg   online
dsk6     sliced   dsk6    dg1      online
dsk7     sliced   dsk7    dg1      online
dsk8     sliced   dsk8    dg1      online
dsk9     sliced   dsk9    dg2      online
dsk10    sliced   dsk10   dg2      online
dsk11    sliced   dsk11   dg2      online
dsk12    sliced   -       -        unknown
dsk13    sliced   -       -        unknown
To display the free space in one or all disk groups:
# voldg [-g disk_group] free
GROUP    DISK   DEVICE   TAG    OFFSET    LENGTH    FLAGS
rootdg   dsk2   dsk2     dsk2   2097217   2009151   -
rootdg   dsk3   dsk3     dsk3   2097152   2009216   -
rootdg   dsk4   dsk4     dsk4   0         4106368   -
rootdg   dsk5   dsk5     dsk5   0         4106368   -
dg1      dsk6   dsk6     dsk6   0         2046748   -
dg1      dsk8   dsk8     dsk8   0         2046748   -
The value in the LENGTH column indicates the amount of free disk space in 512-byte blocks. (2048 blocks equal 1 MB.)
To display the largest volume you can create in the disk group, use the volassist maxsize command. The value returned varies depending on the properties of the volume you want to create, such as striped or mirrored.
For example, to display the largest volume you can create that is striped over four disks, with a stripe width of 512K bytes:
# volassist [-g disk_group] maxsize stwidth=512k ncolumn=4
Maximum volume size: 160051200 (78150Mb)
You can then create a volume with those properties, of any size up through that value; for example:
# volassist -g dg1 make megavol 78150m stwidth=512k ncolumn=4
5.2.2 Adding LSM Disks to Disk Groups
You can add unassigned LSM disks to any disk group. To display a list of unassigned disks, enter the voldisk list command. Unassigned LSM disks are those with a status of online and dashes (-) in the DISK and GROUP columns of the output.
To add one or more LSM disks to an existing disk group:
# voldg -g disk_group adddisk disk [disk...]
For example, to add the disk dsk10 to the disk group dg1:
# voldg -g dg1 adddisk dsk10
To initialize a disk for LSM use and either add it to a disk group or use it to initialize a new disk group in one step, use the voldiskadd script (Section 4.2.1).
5.2.3 Removing LSM Disks from Disk Groups
You can remove an LSM disk from a disk group; however, you cannot remove:
The last disk in a disk group unless the disk group is deported. For information on deporting a disk group, see Section 5.2.4.
Any disk that is in use (for example, disks that contain active LSM volume data). If you attempt to remove a disk that is in use, LSM displays an error message and does not remove the disk.
For information on moving data off an LSM disk, see Section 5.1.5. For information on removing LSM volumes, see Section 5.4.6.
To remove an LSM disk from a disk group:
Verify that the LSM disk is not in use by listing all subdisks:
# volprint -st
Disk group: rootdg

SD NAME     PLEX       DISK   DISKOFFS  LENGTH  [COL/]OFF  DEVICE  MODE
sd dsk1-01  klavol-01  dsk1   0         1408    0/0        dsk1    ENA
sd dsk2-02  klavol-03  dsk2   0         65      LOG        dsk2    ENA
sd dsk2-01  klavol-01  dsk2   65        1408    1/0        dsk2    ENA
sd dsk3-01  klavol-01  dsk3   0         1408    2/0        dsk3    ENA
sd dsk4-01  klavol-02  dsk4   0         1408    0/0        dsk4    ENA
sd dsk5-01  klavol-02  dsk5   0         1408    1/0        dsk5    ENA
sd dsk6-01  klavol-02  dsk6   0         1408    2/0        dsk6    ENA
The disks in the DISK column are currently in use by an LSM volume; therefore, you cannot remove those disks from a disk group.
Remove the LSM disk from the disk group:
# voldg -g disk_group rmdisk disk
For example, to remove the LSM disk dsk8 from the rootdg disk group:
# voldg rmdisk dsk8
The disk remains under LSM control. You can:
Add the LSM disk to a different disk group (Section 5.2.2).
Use the disk to create a new disk group (Section 4.2).
Remove the disk from LSM control (Section 5.1.7).
5.2.4 Deporting Disk Groups
Deporting a disk group makes its volumes inaccessible. You can deport a disk group to:
Rename the disk group.
Reuse the disks for other purposes.
Move the disk group to another system (Section 7.3.2).
You cannot deport the rootdg disk group.
Caution
The voldisk list command displays the disks in a deported disk group as available (with a status of online). However, removing or reusing the disks in a deported disk group can result in data loss.
To deport a disk group:
If applicable, stop the volumes:
# volume -g disk_group stopall
Deport the disk group:
To deport the disk group with no changes:
# voldg deport disk_group
To deport the disk group and assign it a new name:
# voldg [-n newname] deport disk_group
For more information on assigning a new name to a disk group, see voldg(8).
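For example, to deport a hypothetical disk group named dg2 and rename it dg_old in the process:
# voldg -n dg_old deport dg2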
You must import a disk group (Section 5.2.5) before you can use it.
If you no longer need the disk group, you can:
Add the disks to different disk groups (Section 5.2.2).
Use the disks to create new disk groups (Section 4.2).
Remove the disks from LSM control (Section 5.1.7).
5.2.5 Importing Disk Groups
Importing a disk group makes the disk group and its volumes accessible. You cannot import a disk group if you used any of its associated disks while it was deported.
To import a disk group and restart its volumes:
Import the disk group:
# voldg import disk_group
Start all volumes within the disk group:
# volume -g disk_group startall
5.2.6 Renaming Disk Groups
Renaming a disk group involves deporting and then importing the disk group. You cannot rename a disk group while it is in use. All activity on all volumes in the disk group must stop, and the volumes in the disk group are inaccessible while the disk group is deported.
Because renaming a disk group involves an interruption of service to the volumes, you should perform this task during a planned shutdown or maintenance period. Choose the new disk group name carefully, and ensure that the new name is easy to remember and use. Renaming a disk group updates the /etc/fstab file.
Note
You cannot rename the rootdg disk group.
To rename a disk group:
Deport the disk group, assigning it a new name (Section 5.2.4).
Import the disk group using its new name (Section 5.2.5).
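For example, the following sequence renames a hypothetical disk group dg2 to dg_data, stopping its volumes first and restarting them afterward:
# volume -g dg2 stopall
# voldg -n dg_data deport dg2
# voldg import dg_data
# volume -g dg_data startall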
5.2.7 Creating a Clone Disk Group
The volclonedg command lets you create a copy of a disk group using disks that are hardware clones. This command is available on both standalone systems and clusters.
Note
You must create hardware clones of all the disks in a disk group before cloning the disk group. The disk group cannot contain any nopriv disks.
The volclonedg command creates a new disk group containing the same configuration of LSM objects as the parent disk group, using the volsave and volrestore commands to save the configuration of the parent disk group and create the same configuration in the clone disk group. LSM starts all possible volumes in the clone disk group, performing recovery of mirrored volumes if necessary. For more information, see volclonedg(8).
To clone a disk group:
Display the disks in the disk group to be cloned:
# voldisk -g disk_group list
DEVICE   TYPE     DISK    GROUP   STATUS
dsk10    sliced   dsk10   dg1     online
dsk11    sliced   dsk11   dg1     online
dsk12    sliced   dsk12   dg1     online
dsk13    sliced   dsk13   dg1     online
Note
If the disk group contains any nopriv disks, add new sliced or simple disks to the disk group, move the data from the nopriv disks to the new disks (using the volevac command, for example), and remove the nopriv disks from the disk group before you clone the disk group.
Create hardware clones of the disks. See your hardware documentation for more information.
Run the hwmgr command to update the system or cluster with the new disk information. For more information, see hwmgr(8).
Verify that LSM can access and display the cloned disks:
# voldisk list
Cloned disks show a status of online aliased. In the following output, the original disks are dsk10, dsk11, dsk12, and dsk13. The hardware disk clones are dsk14, dsk15, dsk16, and dsk17.
DEVICE   TYPE     DISK    GROUP    STATUS
dsk0     sliced   dsk0    rootdg   online
dsk1     sliced   dsk1    rootdg   online
dsk2     sliced   dsk2    rootdg   online spare
dsk3     sliced   dsk3    rootdg   online spare
dsk4     sliced   dsk4    rootdg   online
dsk5     sliced   dsk5    rootdg   online spare
dsk6     sliced   dsk6    rootdg   online
dsk7     sliced   dsk7    rootdg   online
dsk8     sliced   dsk8    rootdg   online
dsk9     sliced   dsk9    rootdg   online
dsk10    sliced   dsk10   dg1      online
dsk11    sliced   dsk11   dg1      online
dsk12    sliced   dsk12   dg1      online
dsk13    sliced   dsk13   dg1      online
dsk14    sliced   -       -        online aliased
dsk15    sliced   -       -        online aliased
dsk16    sliced   -       -        online aliased
dsk17    sliced   -       -        online aliased
Use the names of the disk clones to clone the disk group dg1, optionally assigning a name other than the default (in this case, dg1_clone):
# volclonedg -g dg1 [-N name] dsk14 dsk15 dsk16 dsk17
LSM creates the clone disk group and starts its volumes.
For more information, including an example of creating a clone disk group on a different system from the parent disk group, see volclonedg(8).
5.3 Managing the LSM Configuration Database
This section describes how to manage the LSM configuration database, including:
Backing up the configuration database (Section 5.3.1)
Restoring the configuration database from backup (Section 5.3.2)
Changing the size and number of configuration database copies (Section 5.3.3)
5.3.1 Backing Up the LSM Configuration Database
Use the volsave utility to periodically create a copy of the LSM configuration. You can then use the volrestore command to recreate the LSM configuration if you lose a disk group configuration.
The saved configuration database (also called a description set) is a record of the objects in the LSM configuration (the LSM disks, subdisks, plexes, and volumes) and the disk group to which each object belongs.
Whenever you make a change to the LSM configuration, the backup copy becomes obsolete. Like any backup, the content is useful only as long as it accurately represents the current information. Any time the number, nature, or name of LSM objects change, consider making a backup of the LSM configuration database.
The following list describes some of the changes that will invalidate a configuration database backup:
Creating disk groups
Adding or removing disks from disk groups or from LSM control
Creating or removing volumes
Changing the properties of volumes, such as the plex layout or number of logs
Note
Backing up the configuration database does not save the data in the volumes. For information on backing up volume data, see Section 5.4.2.
The volsave command does not save information relating to volumes used for the root, /usr, or /var file systems or for swap space.
Depending on the nature of a boot disk failure, you might need to restore the system partitions from backups or installation media to return to a state where the system partitions are not under LSM control. From there, you can redo the procedures to encapsulate the boot disk partitions to LSM volumes and add mirror plexes to those volumes.
For more information about recovering from a boot disk failure under LSM control, see Section 6.4.6.
By default, LSM saves the entire configuration database to a time-stamped directory called /usr/var/lsm/db/LSM.date.hostname. You can specify a different directory for the backup, but the directory must not already exist.
The backup directory contains the following files and directories:
A copy of the volboot file.
A file called header, which contains host ID and checksum information, and a list of the other files in this directory.
A file called voldisk.list, which contains a list of all LSM disks, their type (sliced, simple, nopriv), the size of their private and public regions, their disk group, and other information.
A subdirectory called rootdg.d, which contains the allvol.DF file. The allvol.DF file contains detailed descriptions of every LSM subdisk, plex, and volume, describing all their properties and attributes.
To back up the LSM configuration database:
Enter the following command, optionally specifying a directory location other than the default to store the LSM configuration database:
# volsave [-d directory]
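For example, to save the configuration to a hypothetical directory named /backups/lsm.config, which must not already exist:
# volsave -d /backups/lsm.config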
Save the backup to tape or other removable media.
The volsave command saves multiple versions of the configuration database; each new backup is saved in the /usr/var/lsm/db directory with its own date and time stamp, as shown in the following example:
dr-xr-x--- 3 root system 8192 May  5 09:36 LSM.20000505093612.hostname
dr-xr-x--- 3 root system 8192 May 10 10:53 LSM.20000510105256.hostname
5.3.2 Restoring the LSM Configuration Database from Backup
The volrestore command restores an LSM configuration database, provided you saved it with the volsave command (Section 5.3.1). You can restore the configuration database of a specific disk group or volume or the entire configuration (all disk groups and volumes except those associated with the boot disk). If you have multiple backups of the configuration database (a new one is created each time you run the volsave command), you can choose a specific one to restore. Otherwise, LSM restores the most recent version.
Note
Restoring the configuration database does not restore data in the LSM volumes. For information on restoring volumes, see Section 5.4.3.
The volrestore command does not restore volumes associated with the root (/), /usr, and /var file systems and the primary swap area on a standalone system. If volumes for these partitions are corrupted or destroyed, you must reencapsulate the system partitions to use LSM volumes.
To restore a backed-up LSM configuration database:
Optionally, display a list of all available database backups:
# ls /usr/var/lsm/db
If you saved the configuration database to a different directory, specify that directory.
Restore the chosen configuration database:
To restore the entire configuration database:
# volrestore [-d directory]
To restore a specific disk group configuration database:
# volrestore [-d directory] -g disk_group
To restore a specific volume configuration:
# volrestore [-d directory] -v volume
To restore a configuration database interactively, enabling you to select or skip specific objects:
# volrestore [-d directory] -i
Start the restored LSM volumes:
# volume -g disk_group startall
If the volumes will not start, you might need to manually edit the plex state. See Section 6.5.2.2.
If necessary, restore the volume data from backup. For more information, see Section 5.4.3.
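For example, to restore the most recent saved configuration for a hypothetical disk group dg1 and restart its volumes:
# volrestore -g dg1
# volume -g dg1 startall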
5.3.3 Changing the Size and Number of Configuration Database Copies
LSM maintains copies of the configuration database on separate physical disks within each disk group. When the disk group runs out of space in the configuration database, LSM displays the following message:
volmake: No more space in disk group configuration
This can happen if:
One or more disks in the disk group contain two copies of the configuration database. Whenever a configuration change occurs, all active copies are updated. If one disk's copies cannot be updated because they have grown too large, then none of the copies for the whole disk group can be updated.
The disk group came from a system running Tru64 UNIX Version 4.0, which uses a smaller default private region size.
If the configuration database runs out of space and you find that some disks have two copies of the configuration database, you can remove one copy from each disk that has two (Section 5.3.3.1). However, make sure that the disk group still has sufficient copies of the configuration database available for redundancy. For example, if the disk group has a total of four copies and two are on the same disk, remove one copy from that disk and enable a copy on another disk that does not have one.
If all copies of the configuration database are the same size and no disk has more than one copy, this might indicate that the private regions of the disks are too small; for example, the disks were initialized on a system running an earlier version of LSM, with a smaller default private region. To resolve this problem, add new disks to LSM (which will have the larger default private region size), add the new disks to the disk group, and delete the copies of the configuration database on the other disks (Section 5.3.3.2).
5.3.3.1 Reducing the Number of Configuration Database Copies on an LSM Disk
To reduce the number of configuration database copies on an LSM disk:
Display information about the disk group's configuration database:
# voldg list disk_group
Group:     rootdg
dgid:      783105689.1025.lsm
import-id: 0.1
flags:
config:    seqno=0.1112 permlen=173 free=166 templen=6 loglen=26
config disk dsk13 copy 1 len=173 state=clean online
config disk dsk13 copy 2 len=173 state=clean online
config disk dsk11g copy 1 len=347 state=clean online
config disk dsk10g copy 1 len=347 state=clean online
log disk dsk11g copy 1 len=52
log disk dsk13 copy 1 len=26
log disk dsk13 copy 2 len=26
log disk dsk10g copy 1 len=52
Identify a disk that has multiple copies of the configuration database. For example, disk dsk13 has two copies of the configuration database. This halves the total configuration space available in memory for the disk group and is therefore the limiting factor.
Reduce the number of copies on a disk that has two copies:
# voldisk moddb disk nconfig=n
For example, to reduce the number of configuration copies on dsk13 from two to one:
# voldisk moddb dsk13 nconfig=1
Display the new configuration:
# voldg list disk_group
If necessary, add a copy to another disk to maintain the appropriate number of copies for the disk group:
Display a list of all disks in the disk group:
# voldisk -g disk_group list
Compare the disks listed in the output of the voldisk list command to those listed in the output of the voldg list command to identify a disk in the disk group that does not have a copy of the configuration database.
Enable a configuration database copy on a disk that does not have one, using the disk access name:
# voldisk moddb disk_access_name nconfig=1
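For example, to enable one copy of the configuration database on a hypothetical disk dsk12 that currently has none:
# voldisk moddb dsk12 nconfig=1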
5.3.3.2 Removing Configuration Database Copies on LSM Disks with Small Private Regions
To remove configuration database copies on an LSM disk with a small private region:
Display information about the disk group's configuration database:
# voldg list disk_group
Group:     rootdg
dgid:      921610896.1026.hostname
import-id: 0.1
flags:
copies:    nconfig=default nlog=default
config:    seqno=0.1081 permlen=347 free=341 templen=3 loglen=52
config disk dsk7 copy 1 len=347 state=clean online
config disk dsk8 copy 1 len=2993 state=clean online
config disk dsk9 copy 1 len=2993 state=clean online
config disk dsk10 copy 1 len=2993 state=clean online
log disk dsk7 copy 1 len=52
log disk dsk8 copy 1 len=453
log disk dsk9 copy 1 len=453
log disk dsk10 copy 1 len=453
Disk dsk7 has a smaller private region than the other disks, as shown by the len= information in the lines beginning with config disk and log disk, and therefore has less space to store copies of the configuration database and log. This restricts the disk group's ability to store additional records, because the smallest private region sets the limit for the group.
Remove all configuration database copies from the disk with the smallest private region:
# voldisk moddb disk nconfig=0
For example, to remove the copies on the disk with the smallest private region, dsk7:
# voldisk moddb dsk7 nconfig=0
Display the new configuration:
# voldg list disk_group
If necessary, add a copy to another disk to maintain the appropriate number of copies for the disk group:
Display a list of all disks in the disk group:
# voldisk -g disk_group list
Compare the disks listed in the output of the voldisk list command to those listed in the output of the voldg list command to identify a disk in the disk group that does not have a copy of the configuration database.
Enable a copy on a disk that does not have one, using the disk access name:
# voldisk moddb disk_access_name nconfig=1
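For example, to enable one copy on a hypothetical disk dsk11 that has none:
# voldisk moddb dsk11 nconfig=1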
5.4 Managing LSM Volumes
The following sections describe how to use LSM commands to manage LSM volumes. For information on creating LSM volumes, see Chapter 4.
5.4.1 Displaying LSM Volume Information
The volprint command displays information about LSM objects, including LSM disks, subdisks, plexes, and volumes.
To display the complete hierarchy of objects in an LSM volume:
# volprint [-g disk_group] -ht volume
Disk group: rootdg                                                      [1]

V  NAME       USETYPE  KSTATE    STATE    LENGTH   READPOL    PREFPLEX
PL NAME       VOLUME   KSTATE    STATE    LENGTH   LAYOUT     NCOL/WID MODE
SD NAME       PLEX     DISK      DISKOFFS LENGTH   [COL/]OFF  DEVICE   MODE

v  data01     fsgen    ENABLED   ACTIVE   512000   SELECT     -            [2]
pl data01-01  data01   ENABLED   ACTIVE   512256   STRIPE     3/128    RW  [3]
sd dsk2-01    data01-01 dsk2     0        170752   0/0        dsk2     ENA [4]
sd dsk5-01    data01-01 dsk5     0        170752   1/0        dsk5     ENA
sd dsk6-01    data01-01 dsk6     0        170752   2/0        dsk6     ENA
pl data01-02  data01   ENABLED   ACTIVE   512256   STRIPE     3/128    RW
sd dsk7-01    data01-02 dsk7     0        170752   0/0        dsk7     ENA
sd dsk8-01    data01-02 dsk8     65       170752   1/0        dsk8     ENA
sd dsk9-01    data01-02 dsk9     0        170752   2/0        dsk9     ENA
pl data01-03  data01   ENABLED   ACTIVE   LOGONLY  CONCAT     -        RW
sd dsk8-02    data01-03 dsk8     0        65       LOG        dsk8     ENA
The preceding example shows output for a volume with mirrored, three-column, striped plexes:
[1] Disk group name.
[2] Volume information: name (data01), usage type (fsgen), state (ENABLED ACTIVE), and size (512000 blocks, or 250 MB).
[3] Plex information: two data plexes (data01-01 and data01-02) and a DRL plex (data01-03).
[4] Subdisk information for each plex.
To display a listing of LSM volumes:
# volprint [-g disk_group] -vt
Disk group: rootdg

V NAME        USETYPE  KSTATE   STATE   LENGTH    READPOL  PREFPLEX
v rootvol     root     ENABLED  ACTIVE  524288    ROUND    -
v swapvol     swap     ENABLED  ACTIVE  520192    ROUND    -
v vol-dsk24c  fsgen    ENABLED  ACTIVE  17773524  SELECT   -
v vol-dsk25g  fsgen    ENABLED  ACTIVE  2296428   SELECT   -
v vol-dsk25h  fsgen    ENABLED  ACTIVE  765476    SELECT   -
v vol-01      fsgen    ENABLED  ACTIVE  768000    ROUND    -
v vstripe     fsgen    ENABLED  ACTIVE  256000    SELECT   vstripe-01
5.4.2 Backing Up LSM Volumes
One of the more common tasks of a system administrator is helping users recover lost or corrupted files. To perform that task effectively, establish procedures for backing up LSM volumes and the LSM configuration database at frequent and regular intervals. You will need the saved configuration database as well as the backed-up data if you need to restore a volume after a major failure (for example, if multiple disks in the same volume failed, or all the disks containing the active configuration records for a disk group failed).
For information on backing up the LSM configuration database, see Section 5.3.1.
For a thorough discussion of the backup and restore options available on a Tru64 UNIX system, see the System Administration manual. If you are using AdvFS, also see the AdvFS Administration manual.
LSM commands do not actually create the backup of the volume data, but they provide several ways for you to make volume data available to back up. In many cases, you do not have to unmount file systems or bring the system to single-user mode, which prevents possible corruption of the backup data if users write to a file system before the backup is complete.
To create a backup of LSM volume data, you can use the following commands:
volassist snapfast and volassist snapback
These commands use the Fast Plex Attach feature to create a secondary volume from an existing eligible plex in a mirrored volume. They create a log that keeps track of changes to the original (primary) volume from the time a plex is detached with snapfast to the time the plex is reattached to the original volume with snapback.
Advantages: These commands save time by using an existing eligible plex and reduce the time required for the returning plex to resynchronize to the original volume.
Disadvantages: These commands remove one plex to create the temporary volume; they might leave the primary volume unmirrored until the snapback operation occurs.
For a detailed description of the Fast Plex Attach feature and information on how to use it, see Section 5.4.2.1 and Section 5.4.2.2.
volassist snapstart and volassist snapshot
These commands attach a new plex to a volume for the express purpose of creating a temporary volume for backups. The snapstart keyword uses available space in the disk group to create and attach a new plex to the volume and performs a full synchronization of the new plex to the volume's contents. The snapshot keyword uses the new plex to create a temporary volume with the name you specify. The temporary volume is the one you use to perform your backups; this leaves the original volume running and available for use.
After performing the backup, you can optionally detach the plex from the temporary volume and reattach it to the original volume. This performs a full resynchronization.
Advantages: These commands do not reduce the number of mirrors in a volume; they use a new plex created specifically for the backup. You can use this method to mirror a nonmirrored volume (except RAID5), if sufficient disk space is available.
Disadvantages: These commands use additional disk space (equal to the size of the volume), and take time for the new plex to fully synchronize to the original volume and (optionally) for the reattached plex to resynchronize to the original volume.
For more information, see Section 5.4.2.3.
volplex det, volmake, volume start, voledit, and volplex att (low-level commands)
These commands use an existing eligible plex from a mirrored volume to create the backup volume.
Advantages: These commands give you control over which plex to use and save time by using an existing eligible plex.
Disadvantages: These commands are considered low-level commands, introducing more chance of error. They remove one plex to create the temporary volume, which might leave the original volume unmirrored. If you reattach the plex to the original volume, it takes time for the plex to resynchronize.
For more information, see Section 5.4.2.4.
To back up a RAID 5 volume or a volume that uses a single concatenated or striped plex that you cannot or choose not to mirror, see Section 5.4.2.5.
5.4.2.1 Overview of the Fast Plex Attach Feature
For mirrored LSM volumes, you can use the Fast Plex Attach feature to make a temporary copy of the volume data available for backup. You use the temporary volume to perform your backups, leaving the original volume running and available for use.
You can use the Fast Plex Attach feature on any mirrored volume on a standalone system or a cluster, including the rootvol, cluster_rootvol, and other volumes for encapsulated standalone system partitions and clusterwide file system domains. The Fast Plex Attach feature cannot be used on mirrored volumes used as swap space.
The volassist command provides two keywords (snapfast and snapback) to create and remove a backup volume using the Fast Plex Attach feature.
Figure 5-1 shows a three-way mirrored volume before a Fast Plex Attach operation.
Figure 5-1: Volume Before Fast Plex Attach
A complete Fast Plex Attach operation goes through the following phases:
When you run the volassist snapfast command, LSM examines the volume (known as the primary volume) to ensure it has at least two complete, read-write plexes and arbitrarily selects one plex as a candidate for Fast Plex Attach (FPA) support. This plex becomes the migrant plex. If any plex has a state of SNAPDONE (the result of a prior snapstart operation), LSM uses it as the migrant plex.
If the volume has only one plex, the command fails. If the volume has only two plexes, the command exits and displays a message stating that you must use the force (-f) option, because the operation will leave the volume with only one plex and therefore unmirrored.
LSM creates an FPA subdisk and attaches it to a separate FPA plex (plex-05) for the primary volume (Figure 5-2).
Figure 5-2: Process of the volassist snapfast Command: Phase 1
LSM creates an FPA subdisk for the migrant plex and attaches this to the migrant plex as its FPA log. LSM then detaches the migrant plex from the primary volume and creates a secondary volume (Figure 5-3).
Figure 5-3: Process of the volassist snapfast Command: Phase 2
LSM uses available disk space in the disk group for both FPA logs, using disks marked as hot-spares only if no other suitable space is available. LSM will not use space on disks marked as reserved or volatile. (For a description of these attributes, see voledit(8) and voldisk(8).)
As writes occur to both volumes, their respective FPA logs keep track of the regions that changed (Figure 5-4). The FPA log subdisk attached to the migrant plex keeps track of changes to the secondary volume (such as the I/O that occurs when you mount the secondary volume so you can back it up).
Figure 5-4: Writes Occurring to Primary and Secondary LSM Volumes
Back up the secondary volume.
When the backup is complete, reattach the migrant plex to the primary volume with the volassist snapback command.
LSM removes the FPA log subdisk from the secondary volume and merges it with the FPA log subdisk on the primary volume (Figure 5-5).
Figure 5-5: Process of volassist snapback Command: Phase 1
LSM reattaches the migrant plex to the primary volume (and removes the secondary volume if it has no other data plexes) and starts the process of resynchronizing the migrant plex to the primary volume according to the regions marked in the merged FPA log (Figure 5-6).
Figure 5-6: Process of volassist snapback Command: Phase 2
Any writes that occurred to the secondary volume are ignored; the corresponding regions of the primary volume are written to the returning migrant plex. Also, only the regions of the primary volume that changed in the interim are written to the returning migrant plex, instead of resynchronizing the entire volume. This can greatly reduce the time it takes for a volume to resynchronize and thereby reduce the performance impact.
Depending on how long the migrant plex was away and how active the primary volume was during its absence, resynchronizing only the regions marked in each volume's logs is likely to be faster than resynchronizing the entire volume.
5.4.2.2 Creating a Backup Volume Using the Fast Plex Attach Feature (volassist snapfast and snapback)
You can use the Fast Plex Attach feature to create a temporary backup volume from one plex of a mirrored volume. This is sometimes called splitting a mirror, but is more accurately described as detaching a mirror. The volume must have at least two complete, enabled plexes before you begin. If the volume has only two plexes, the volume will not be mirrored during the time one plex is detached. You must use the force (-f) option to detach a plex in this case.
Note
When you use the Fast Plex Attach feature to create a backup volume for rootvol, which has the special usage type of root, LSM creates the backup volume with the usage type of gen. Only the rootvol volume can have a usage type of root. In output from the volprint command, you might notice the different usage type for the backup volume for rootvol. The usage type does not affect backup operations.
The Fast Plex Attach feature does not let you specify a particular plex to use as the migrant plex. To control which plex is used, you can:
Run the volassist snapstart command. When complete, the volume will have a new plex with a state of SNAPDONE. You can then run the volassist snapfast command and LSM will use the plex marked SNAPDONE as the migrant plex. There must be available space in the disk group equal to the size of the volume to use the volassist snapstart command.
Use the following commands to detach a specific plex from the primary volume, enable Fast Plex Attach logging on both the primary volume and the migrant plex, and create the secondary volume from that plex:
# volplex det -o fpa plex
# volplex att plex secondary_volume
Back up the secondary volume as described in the following procedure. When finished, you can use the volassist snapback command as shown in step 4.
To back up a volume using the Fast Plex Attach feature:
Verify that the volume you want to back up has more than two complete, enabled plexes available; for example:
# volprint -vht 3wayvol
Disk group: rootdg

V  NAME        USETYPE  KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME        VOLUME   KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID MODE
SD NAME        PLEX     DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE   MODE

v  3wayvol     fsgen    ENABLED  ACTIVE   1024    SELECT     -
pl 3wayvol-01  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk0-01     3wayvol-01 dsk0   0        1024    0          dsk0     ENA
pl 3wayvol-02  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk2-01     3wayvol-02 dsk2   0        1024    0          dsk2     ENA
pl 3wayvol-03  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk7-02     3wayvol-03 dsk7   128      1024    0          dsk7     ENA
pl 3wayvol-04  3wayvol  ENABLED  ACTIVE   LOGONLY CONCAT     -        RW
sd dsk1-01     3wayvol-04 dsk1   0        65      LOG        dsk1     ENA
Enter the following command to enable Fast Plex Attach logging on the primary volume, detach one plex using the force (-f) option if necessary, and create the secondary volume with the name of your choice:
# volassist [-f] snapfast primary_vol secondary_vol
For example:
# volassist -f snapfast 3wayvol 3wayvol_bk
LSM creates a secondary volume from one plex of the primary volume. The volumes look like the following:
v  3wayvol     fsgen    ENABLED  ACTIVE   1024    SELECT     -
pl 3wayvol-01  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk0-01     3wayvol-01 dsk0   0        1024    0          dsk0     ENA
pl 3wayvol-02  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk2-01     3wayvol-02 dsk2   0        1024    0          dsk2     ENA
pl 3wayvol-04  3wayvol  ENABLED  ACTIVE   LOGONLY CONCAT     -        RW
sd dsk1-01     3wayvol-04 dsk1   0        65      LOG        dsk1     ENA
pl 3wayvol-05  3wayvol  ENABLED  ACTIVE   FPAONLY CONCAT     -        RW
sd dsk4-05     3wayvol-05 dsk4   524546   65      FPA        dsk4     ENA

v  3wayvol_bk  fsgen    ENABLED  ACTIVE   1024    ROUND      -
pl 3wayvol-03  3wayvol_bk ENABLED ACTIVE  1024    CONCAT     -        RW
sd dsk0-03     3wayvol-03 dsk0   1152     65      FPA        dsk0     ENA
sd dsk7-02     3wayvol-03 dsk7   128      1024    0          dsk7     ENA
Back up the secondary volume using your preferred backup method. For more information, see Section 5.4.2.6.
When the backup is complete, reattach the migrant plex to the primary volume; for example:
# volassist snapback 3wayvol_bk 3wayvol
If the secondary volume has no other plexes, it is removed completely. The FPA log plex remains attached to the primary volume to support future snapfast operations, as shown:
# volprint
v  3wayvol     fsgen    ENABLED  ACTIVE   1024    SELECT     -
pl 3wayvol-01  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk0-01     3wayvol-01 dsk0   0        1024    0          dsk0     ENA
pl 3wayvol-02  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk2-01     3wayvol-02 dsk2   0        1024    0          dsk2     ENA
pl 3wayvol-03  3wayvol  ENABLED  ACTIVE   1024    CONCAT     -        RW
sd dsk7-02     3wayvol-03 dsk7   128      1024    0          dsk7     ENA
pl 3wayvol-04  3wayvol  ENABLED  ACTIVE   LOGONLY CONCAT     -        RW
sd dsk1-01     3wayvol-04 dsk1   0        65      LOG        dsk1     ENA
pl 3wayvol-05  3wayvol  ENABLED  ACTIVE   FPAONLY CONCAT     -        RW
sd dsk4-05     3wayvol-05 dsk4   524546   65      FPA        dsk4     ENA
5.4.2.3 Creating a Backup Volume by Attaching a New Plex (volassist snapstart and snapshot)
The following procedure describes how to add a plex to a volume (mirror the volume) and then detach the new plex to create a separate volume for backups. Detaching a plex is sometimes called splitting a mirror; in this case, you create the new mirror to detach.
This backup method uses available disk space in the disk group for the new plex, equal to the size of the volume, and requires extra time for the new plex to be fully synchronized to the volume before you can detach it to make the backup volume. However, this backup method eliminates the need to detach a plex from a volume and therefore preserves redundancy for a mirrored volume.
If you do not have sufficient disk space or do not want to mirror the volume, see Section 5.4.2.5.
To back up an LSM volume by adding and then detaching a new plex:
Display the size of the LSM volume and the disks it uses:
# volprint -v [-g disk_group] volume
Ensure there is enough free space in the disk group to add a plex to the LSM volume:
# voldg [-g disk_group] free
The amount of free space must be at least equal to the size of the volume and must be on disks that are not used in the volume you want to back up.
Add a new plex to the volume, optionally specifying the disks to use:
# volassist snapstart volume [disk...]
This step initiates a full synchronization, which might take several minutes or longer, depending on the size of the volume.
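For example, to attach a snapshot plex to a hypothetical volume vol3 using space on disks dsk10 and dsk11:
# volassist snapstart vol3 dsk10 dsk11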
Create a temporary volume from the new plex. (The snapshot keyword uses the plex created by the snapstart command in the previous step to create the new volume.)
# volassist snapshot volume temp_volume
The following example creates a temporary volume named vol3_backup for a volume named vol3:
# volassist snapshot vol3 vol3_backup
Start the temporary volume:
# volume start temp_volume
Back up the temporary volume using your preferred backup method. For more information, see Section 5.4.2.6.
When the backup is complete, do one of the following:
If you no longer need the backup volume, stop and remove the volume:
# volume stop temp_volume
# voledit -r rm temp_volume
If you want to reattach the plex to the original volume:
Stop the backup volume:
# volume stop temp_volume
Dissociate the plex from the backup volume (requires the force [-f] option):
# voledit -f dis plex
Remove the now empty backup volume:
# voledit -o rm temp_volume
Attach the backup plex to the original volume:
# volplex att plex original_volume
This initiates a full resynchronization of the volume's contents to the plex.
5.4.2.4 Creating a Backup Volume by Detaching an Existing Plex
This procedure is recommended only for administrators who have experience with using the low-level LSM commands and who have particular reasons for using this method instead of those described in the previous sections.
If used on a volume with only two enabled plexes, this procedure leaves the original volume unmirrored while one of the plexes is detached.
To back up an LSM volume from one of its existing plexes using the low-level commands:
Dissociate one of the volume's complete, enabled plexes, which leaves the plex with an image of the LSM volume at the time of dissociation:
# volplex dis plex
For example:
# volplex dis data-02
Create a temporary LSM volume using the dissociated plex:
# volmake -U fsgen vol temp_volume plex=plex
For example:
# volmake -U fsgen vol data_temp plex=data-02
Start the temporary volume:
# volume start temp_volume
For example:
# volume start data_temp
Back up the temporary volume using your preferred backup method. For more information, see Section 5.4.2.6.
When the backup is complete, stop and remove the temporary volume:
# volume stop temp_volume
# voledit -r rm temp_volume
Reattach the dissociated plex to the original volume:
# volplex att volume plex
LSM automatically resynchronizes the plexes when you reattach the dissociated plex. This operation might take a long time, depending on the size of the volume. Running this process in the background returns control of the system to you immediately instead of waiting until the resynchronization is complete.
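For example, to reattach a hypothetical plex data-02 to its original volume data and run the resynchronization in the background:
# volplex att data data-02 &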
5.4.2.5 Backing Up a Nonredundant or RAID 5 Volume
If the volume uses a RAID 5 plex layout or if you cannot add a mirror to a volume with a single striped or concatenated data plex, you must either stop all applications from using the volume while the backup is in process or allow the backup to occur while the volume is in use.
If the volume remains in use during the backup, the volume data might change before the backup completes, and therefore the backup data will not be an exact copy of the volume's contents. The following procedure stops the volume to eliminate the risk of data corruption in the backup.
To back up a nonredundant or RAID 5 volume:
If applicable, select a convenient time and inform users to save files and refrain from using the volume, the application, or file system that uses the volume while you back it up.
Stop the volume:
# volume stop volume
For example:
# volume stop r5_vol
Back up the volume using your preferred backup method. For more information, see Section 5.4.2.6.
When the backup is complete, restart the volume:
# volume start volume
For example:
# volume start r5_vol
If applicable, inform users that the volume is available.
5.4.2.6 Backup Methods
You can use any of the following methods to back up the data in an LSM volume:
If the volume contains a UFS file system, you can use the dump command; for example:
# dump -0u /dev/rvol/rootdg/r5_vol
For more information, see dump(8).
If the volume contains raw data such as for a database application, use the application's built-in backup utility. Be careful to select the temporary volume name as the object to back up.
If the application has no backup utility, you can use the dd command to make an image copy of the volume's contents; for example:
# dd if=/dev/rvol/rootdg/3wayvol_bk of=/dev/tape/tape0_d0
For more information, including options for specifying input and output block sizes, see dd(8).
5.4.2.7 Special Case: Backing Up LSM Volumes in an AdvFS Domain
Because AdvFS domains can use storage on several devices, including several LSM volumes, it is critical for backups to capture the state of the domain, filesets, and the AdvFS metadata at the same point in time.
Caution
Before you begin, read the information on backing up and restoring data in the AdvFS Administration manual.
The following procedure involves temporarily freezing the domain to ensure the metadata is in a consistent state until after you create the backup LSM volumes. To minimize the time the domain must remain frozen, review Section 5.4.2.2, Section 5.4.2.3, Section 5.4.2.4, and Section 5.4.2.5, as appropriate, to determine what you must do to create each backup volume. You can perform some tasks ahead of time, such as finding free space in a disk group and adding a mirror to a volume.
To back up an AdvFS domain that uses several LSM mirrored volumes:
Freeze the domain. (The default freeze period is 60 seconds.) Freezing any mount point in the domain freezes the entire domain.
# /usr/sbin/freezefs /mount_point
For each LSM volume in the domain, create a backup volume using the appropriate procedure.
If the freeze time has not already elapsed, thaw the domain:
# /usr/sbin/thawfs /mount_point
Create a new domain directory and link the newly created LSM backup volumes to the temporary domain; for example:
# mkdir /etc/fdmns/my_dom_BK # ln -s /dev/vol/rootdg/vol_1_backup /etc/fdmns/my_dom_BK # ln -s /dev/vol/rootdg/vol_2_backup /etc/fdmns/my_dom_BK
Do not use the mkfdmn command to create the domain on the LSM volumes; this initializes the backup LSM volumes and destroys the existing data in the volumes.
Create a temporary backup directory at the root directory level:
# mkdir /backup
Display the filesets in the domain you are backing up:
# showfsets /etc/fdmns/my_dom
my_files
        Id           : 3caa0e34.000ef531.1.8001
        Files        :      0,  SLim=      0,  HLim=      0
        Blocks (512) :      0,  SLim=      0,  HLim=      0
        Quota Status : user=off   group=off
        Object Safety: off
        Fragging     : on
        DMAPI        : off
Dual-mount the filesets in the new domain to the temporary backup mount point:
# mount -o dual temp_domain#parent_fileset /backup
For example:
# mount -o dual my_dom_BK#my_files /backup
Perform the backup on the temporary domain:
# vdump [options] [backup_device] /backup
When the backup is complete, unmount the temporary domain, and stop and remove the LSM volumes:
# umount /backup
# volume stop backup_volume...
# voledit -fr rm backup_volume...
The following example shows the entire process:
The original domain is called data_dmn with a fileset called data_files, mounted on /data. The domain uses two mirrored volumes called vol_1 and vol_2. The backup LSM volumes are called vol_1_backup and vol_2_backup. The backup domain is called data_dmn_bk, mounted on /backup.
# volprint -vht vol_1 vol_2
Disk group: rootdg

V  NAME      USETYPE  KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME   KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE   MODE

v  vol_1     fsgen    ENABLED  ACTIVE   262144  SELECT     -
pl vol_1-01  vol_1    ENABLED  ACTIVE   262144  CONCAT     -        RW
sd dsk0-01   vol_1-01 dsk0     0        262144  0          dsk0     ENA
pl vol_1-02  vol_1    ENABLED  ACTIVE   262144  CONCAT     -        RW
sd dsk2-01   vol_1-02 dsk2     0        262144  0          dsk2     ENA
pl vol_1-03  vol_1    ENABLED  ACTIVE   LOGONLY CONCAT     -        RW
sd dsk1-01   vol_1-03 dsk1     0        65      LOG        dsk1     ENA

v  vol_2     fsgen    ENABLED  ACTIVE   262144  SELECT     -
pl vol_2-01  vol_2    ENABLED  ACTIVE   262144  CONCAT     -        RW
sd dsk3-01   vol_2-01 dsk3     0        262144  0          dsk3     ENA
pl vol_2-02  vol_2    ENABLED  ACTIVE   262144  CONCAT     -        RW
sd dsk5-01   vol_2-02 dsk5     0        262144  0          dsk5     ENA
pl vol_2-03  vol_2    ENABLED  ACTIVE   LOGONLY CONCAT     -        RW
sd dsk1-02   vol_2-03 dsk1     65       65      LOG        dsk1     ENA

# /usr/sbin/freezefs /data
# volassist -f snapfast vol_1 vol_1_backup
# volassist -f snapfast vol_2 vol_2_backup
# /usr/sbin/thawfs /data
# cd /etc/fdmns
# mkdir data_dmn_bk
# ln -s /dev/vol/rootdg/vol_1_backup /etc/fdmns/data_dmn_bk
# ln -s /dev/vol/rootdg/vol_2_backup /etc/fdmns/data_dmn_bk
# mkdir /backup
# showfsets data_dmn
data_files
        Id           : 3cadbee2.000aca16.1.8001
        Files        :      5,  SLim=      0,  HLim=      0
        Blocks (512) :  29340,  SLim=      0,  HLim=      0
        Quota Status : user=off   group=off
        Object Safety: off
        Fragging     : on
        DMAPI        : off
# mount -o dual data_dmn_bk#data_files /backup
# vdump [options] [backup_device] /backup
5.4.3 Restoring LSM Volumes from Backup
The way you restore an LSM volume depends on what the volume is used for, whether the volume is configured and active, and the method you used to back up volume data.
If you used the vdump command to back up the volume (used by UFS file systems or AdvFS domains), use the vrestore command to restore the data. If you used the rvdump command, use the rvrestore command to restore it.
If the volume is used for an application such as a database, see that application's documentation for the recommended method for restoring backed-up data.
To restore a volume for a UFS file system from a backup that you created with the dump command:
# restore -Yf backup_volume
Note
Both the original volume and the backup volume must be mounted.
You can restore a volume and its contents even if the volume no longer exists, if you have a backup copy of the configuration database. (See Section 5.3.1.)
To restore a volume that no longer exists:
Recreate the volume:
# volrestore [-g disk_group] -v volume
This recreates the structure of the volume on the same disks it was using at the time you saved the configuration with the volsave command. You cannot assume the data on the disks is valid at this point.
Recreate the file system:
# newfs /dev/rvol/disk_group/volume
Mount the file system:
# mount /dev/vol/disk_group/volume /mount_point
Change directory to the recreated file system:
# cd /mount_point
Restore the volume data:
# restore -Yrf backup_volume
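For example, the entire procedure might look like the following, assuming a hypothetical volume vol01 in the dg1 disk group and a backup volume named vol01_backup:
# volrestore -g dg1 -v vol01
# newfs /dev/rvol/dg1/vol01
# mount /dev/vol/dg1/vol01 /mnt
# cd /mnt
# restore -Yrf /dev/rvol/dg1/vol01_backup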
For more information, see restore(8), rrestore(8), and vrestore(8).
5.4.4 Starting LSM Volumes
LSM automatically starts all startable volumes when the system boots. You can manually start an LSM volume that:
You manually stopped
Belongs to a disk group that you manually imported
Stopped because of a disk failure or other problem that you have since resolved
To start an LSM volume:
# volume [-g disk_group] start volume
To start all volumes in a disk group (for example, after importing the disk group):
# volume -g disk_group startall
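For example, after importing a hypothetical disk group named dg1:
# volume -g dg1 startall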
5.4.5 Stopping LSM Volumes
LSM automatically stops LSM volumes when the system shuts down. When you no longer need an LSM volume, you can stop it and then remove it. You cannot stop an LSM volume if a file system is using it.
To stop an LSM volume:
If applicable, stop a file system from using the LSM volume.
For AdvFS, dissociate the volume from the domain:
# rmvol LSM_volume domain
Data on the volume is automatically migrated to other volumes in the domain, if available. For more information, see the AdvFS Administration manual and rmvol(8).
For UFS, unmount the file system:
# umount /dev/rvol/volume
Stop the LSM volume:
# volume [-g disk_group] stop volume
For example, to stop an LSM volume named vol01 in the dg1 disk group:
# volume -g dg1 stop vol01
To stop all volumes in a disk group:
# volume -g disk_group stopall
5.4.6 Removing LSM Volumes
Removing an LSM volume destroys the data in that volume. Remove an LSM volume only if you are sure that you do not need the data in the LSM volume or the data is backed up elsewhere. When an LSM volume is removed, the space it occupied is returned to the free space pool.
Note
To remove a volume that was created by encapsulating an AdvFS domain, see Section 5.4.6.1.
To remove a swap volume for a cluster member, see Section 7.6.
The following procedure also unencapsulates UFS file systems.
To remove an LSM volume:
If applicable, stop a file system from using the LSM volume.
If the volume is part of an AdvFS domain, dissociate the volume from the domain:
# rmvol LSM_volume domain
Data on the volume is automatically migrated to other volumes in the domain. For more information, see the AdvFS Administration manual and rmvol(8).
If the volume is used by a UFS file system, unmount the file system:
# umount /dev/rvol/volume
Edit the necessary system files as follows:
If the volume was configured as secondary swap (for a standalone system), remove references to the LSM volume from the vm:swapdevice entry in the sysconfigtab file.
If the swap space was configured using the /etc/fstab file, update this file to change the swap entries back to disk partitions instead of LSM volumes.
These changes are effective the next time the system restarts. For more information, see the System Administration manual and swapon(8).
Stop the LSM volume:
# volume [-g disk_group] stop volume
Remove the LSM volume:
# voledit -r rm volume
This step removes the plexes and subdisks and the volume itself.
If the volume contained an encapsulated UFS file system, edit the /etc/fstab file to change the volume name to the disk name. For example, change /dev/vol/rootdg/vol-dsk4g to /dev/dsk4g.
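For example, the sequence for a hypothetical volume vol01 in the dg1 disk group that was used by a UFS file system might look like the following:
# umount /dev/rvol/dg1/vol01
# volume -g dg1 stop vol01
# voledit -g dg1 -r rm vol01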
5.4.6.1 Unencapsulating AdvFS Domains
You can stop using LSM volumes for an AdvFS domain and revert to using the physical disks or disk partitions directly. This is called unencapsulating a domain.
To unencapsulate the storage in an AdvFS domain, you can use the script that LSM created when you originally encapsulated the domain. The script contains both LSM commands and general operating system commands and performs all the steps necessary to remove the LSM volumes, remove the disks from LSM control, and update the links in the /etc/fdmns directory.
One script for each disk or storage device in the domain is saved in a subdirectory named after the disk access name in the following format:
/etc/vol/reconfig.d/disk.d/dsknp.encapdone/recover.sh
Make sure you run the correct script to unencapsulate the storage in the appropriate domain.
Note
Unencapsulating storage in an AdvFS domain requires that you unmount the filesets.
To unencapsulate an AdvFS domain:
Identify the name of the LSM volume for the domain (which is typically derived from the disk name):
# showfdmn domain
For example:
Id                 Date Created                LogPgs  Version  Domain Name
3a65b2a9.0004cb3f  Wed Jan 17 09:56:41 2001       512        4  dom_1

Vol  512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
 1L   8380080  8371248      0%     on    256    256  /dev/vol/tempdg/vol-dsk26c
Display and unmount all the filesets in the domain.
For example, to unmount the filesets in the dom_1 domain:
# mount
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
var_domain#var on /var type advfs (rw)
mhs:/work on /work type nfs (v3, rw, udp, hard, intr)
dom_1#junk on /junk type advfs (rw)
dom_1#stuff on /stuff type advfs (rw)
# umount /junk /stuff
Stop the LSM volume:
# volume stop volume
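For example, for the dom_1 domain shown previously, whose volume is vol-dsk26c in the tempdg disk group:
# volume -g tempdg stop vol-dsk26c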
Find the name of the appropriate unencapsulation script, if one exists:
# ls /etc/vol/reconfig.d/disk.d/
dsk23c.encapdone  dsk24c.encapdone  dsk26c.encapdone  dsk27g.encapdone
Run the appropriate unencapsulation script.
For example, to run the script for the dom_1 domain on disk dsk26c:
# sh /etc/vol/reconfig.d/disk.d/dsk26c.encapdone/recover.sh
If the script is not available, do the following:
Change directory to the domain directory:
# cd /etc/fdmns/domain
Remove the link to the volume:
# rm disk_group.volume
Replace the link to the disk device file:
# ln -s /dev/disk/dsknp
Remove the LSM volume:
# voledit [-g disk_group] -r rm volume
Remove the disk media name from the disk group:
# voldg -g disk_group rmdisk dm_name
Remove the disk access name from LSM:
# voldisk rm da_name
Remount the filesets to the domain:
# mount dom_1#junk /junk
# mount dom_1#stuff /stuff
The domain is available for use.
I/O to the domain goes through the disk device path instead of the LSM volume. You can confirm this by running the showfdmn command again:
# showfdmn dom_1
Id                 Date Created                LogPgs  Version  Domain Name
3a65b2a9.0004cb3f  Wed Jan 17 09:56:41 2001       512        4  dom_1

Vol  512-Blks     Free  % Used  Cmode  Rblks  Wblks  Vol Name
 1L   8380080  8371248      0%     on    256    256  /dev/disk/dsk26c
5.4.7 Disabling the Fast Plex Attach Feature on LSM Volumes
If you no longer want to use FPA logging on a volume, you can turn the feature off for that volume. Turning off FPA logging does not remove the FPA log from the primary volume. You can turn the FPA feature back on later if you choose.
If the volume is actively logging to the FPA log plex (if a secondary volume exists), turning off the FPA feature stops logging. If you then return the migrant plex to the primary volume (with the volassist snapback command), the plex undergoes complete resynchronization.
To turn off the FPA feature for a volume:
# volume set fpa=off volume
To remove an FPA log plex, see Section 5.5.6.
5.4.8 Renaming LSM Volumes
You can rename an LSM volume.
The new LSM volume name must be unique
within the disk group.
If the LSM volume supports a file system or is part
of an AdvFS domain, you must also update the
/etc/fstab
file and the
/etc/fdmns
directory entry.
To rename an LSM volume:
# voledit rename old_volume new_volume
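For example, to rename a hypothetical volume in the rootdg disk group from vol01 to webvol:
# voledit rename vol01 webvol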
Note
If you do not update the relevant files in the /etc directory before the system is restarted, subsequent commands that use a volume's previous name will fail.
5.4.9 Growing LSM Volumes
You can increase the size of a volume (grow a volume) by specifying either an amount to grow by or a size to grow to. For example, you can increase the size of the primary swap space volume. The size of any log plexes remains unchanged.
Note
After you grow a volume that is used by an AdvFS file system, use the mount -o extend command to update the domain to include the additional space in the LSM volume. For more information on increasing the size of a domain, see the AdvFS Administration manual and mount(8).
If the volume is used by a file system other than AdvFS, you must perform additional steps specific to the file system type for the file system to take advantage of the increased space. For more information, see the System Administration manual, extendfs(8), and mount(8).
If an application other than a file system uses the volume, make any necessary application modifications after the grow operation is complete.
5.4.9.1 Growing LSM Volumes by a Specific Amount
Use the -f option to grow a primary or secondary volume that is actively logging to an FPA log plex.
Note
Growing a primary or secondary volume with an active FPA log disables the FPA log. When you reattach a migrant plex to a primary volume in this case, regardless of which volume grew, a full resynchronization will occur as though no FPA log ever existed.
You can use the -b option to perform the operation in the background. This is helpful if the specified growby length is substantial and the volume uses mirror plexes or a RAID5 plex, because the volume undergoes resynchronization as a result of the grow operation.
To grow a volume by a specific amount:
# volassist [-g disk_group] [-f] [-b] growby volume length_change
For example, to grow a volume by 100K bytes:
# volassist -g dg1 growby dataVol 100k
The volume looks similar to the following, before and after the growby operation:
# volprint -g dg1 -vht dataVol
V  NAME        USETYPE     KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME        VOLUME      KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME        PLEX        DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE    MODE

v  dataVol     fsgen       ENABLED  ACTIVE   400     SELECT     dataVol-01
pl dataVol-01  dataVol     ENABLED  ACTIVE   512     STRIPE     2/128     RW
sd dsk4-01     dataVol-01  dsk4     0        256     0/0        dsk4      ENA
sd dsk5-01     dataVol-01  dsk5     0        256     1/0        dsk5      ENA
# volassist -g dg1 growby dataVol 100k
# volprint -vht -g dg1 dataVol
V  NAME        USETYPE     KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME        VOLUME      KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME        PLEX        DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE    MODE

v  dataVol     fsgen       ENABLED  ACTIVE   600     SELECT     dataVol-01
pl dataVol-01  dataVol     ENABLED  ACTIVE   768     STRIPE     2/128     RW
sd dsk4-01     dataVol-01  dsk4     0        384     0/0        dsk4      ENA
sd dsk5-01     dataVol-01  dsk5     0        384     1/0        dsk5      ENA
In this case, LSM was able to grow the subdisks using contiguous space on the same disks. If the subdisks in a volume map to the end of the public region on the disks, LSM uses available space in the disk group to create and associate new subdisks to the volume's plex or plexes.
5.4.9.2 Growing LSM Volumes to a Specific Size
Use the -f option to grow a primary or secondary volume that is actively logging to an FPA log plex.
Note
Growing a primary or secondary volume with an active FPA log disables the FPA log. When you reattach a migrant plex to a primary volume in this case, regardless of which volume grew, a full resynchronization will occur as though no FPA log ever existed.
You can use the -b option to perform the operation in the background. This is helpful if the specified growto length is substantial and the volume uses mirror plexes or a RAID5 plex, because the volume undergoes resynchronization as a result of the grow operation.
To grow a volume to a specific size:
# volassist [-g disk_group] [-f] [-b] growto volume new_length
For example, to grow a volume from 1MB to 2MB:
# volassist -g dg1 growto vol_A 2m
The volume looks similar to the following, before and after the growto operation:
# volprint -vht -g dg1 vol_A
V  NAME      USETYPE   KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME    KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME      PLEX      DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE    MODE

v  vol_A     fsgen     ENABLED  ACTIVE   2048    SELECT     vol_A-01
pl vol_A-01  vol_A     ENABLED  ACTIVE   2048    STRIPE     2/128     RW
sd dsk6-01   vol_A-01  dsk6     0        1024    0/0        dsk6      ENA
sd dsk7-01   vol_A-01  dsk7     0        1024    1/0        dsk7      ENA
# volassist -g dg1 growto vol_A 2m
# volprint -vht -g dg1 vol_A
V  NAME      USETYPE   KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME    KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME      PLEX      DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE    MODE

v  vol_A     fsgen     ENABLED  ACTIVE   4096    SELECT     vol_A-01
pl vol_A-01  vol_A     ENABLED  ACTIVE   4096    STRIPE     2/128     RW
sd dsk6-01   vol_A-01  dsk6     0        2048    0/0        dsk6      ENA
sd dsk7-01   vol_A-01  dsk7     0        2048    1/0        dsk7      ENA
In this case, LSM was able to grow the subdisks using contiguous space on the same disks. If the subdisks in a volume already map to the end of the public region on the disks, LSM uses available space in the disk group to create and associate new subdisks to the volume's plex or plexes. Alternatively, you can specify which disks LSM can use to create the new subdisks.
The following example shows what happens when you specify disks for LSM to use:
# volassist -g dg1 growby dataVol 100k dsk9 dsk10
# volprint -vht -g dg1 dataVol
V  NAME        USETYPE     KSTATE   STATE    LENGTH  READPOL    PREFPLEX
PL NAME        VOLUME      KSTATE   STATE    LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME        PLEX        DISK     DISKOFFS LENGTH  [COL/]OFF  DEVICE    MODE

v  dataVol     fsgen       ENABLED  ACTIVE   800     SELECT     dataVol-01
pl dataVol-01  dataVol     ENABLED  ACTIVE   1024    STRIPE     2/128     RW
sd dsk4-01     dataVol-01  dsk4     0        384     0/0        dsk4      ENA
sd dsk9-01     dataVol-01  dsk9     0        128     0/384      dsk9      ENA
sd dsk5-01     dataVol-01  dsk5     0        384     1/0        dsk5      ENA
sd dsk10-01    dataVol-01  dsk10    0        128     1/384      dsk10     ENA
Notice that LSM added the subdisk it created on dsk9 to the end of the first column (column 0) and the subdisk it created on dsk10 to the second column (column 1).
5.4.10 Shrinking LSM Volumes
You can decrease the size of a volume by specifying either an amount to shrink by or a size to shrink to. The size of any log plexes remains unchanged.
Cautions
If the volume is used for an AdvFS file system, do not decrease the space in the domain by shrinking an underlying LSM volume. Instead, remove a volume from the domain (in AdvFS, a volume can be a disk, disk partition, or an LSM volume). For more information on removing volumes from a domain, see the AdvFS Administration manual.
If the volume is used for a file system other than AdvFS, you must perform additional steps specific to the file system type before shrinking the volume, so that the file system can recognize and safely adjust to the decreased space.
There is no direct way to shrink a UFS file system other than backing up the data, destroying the original file system, creating a new file system of the smaller size, and restoring the data in the new file system.
For more information, see the System Administration manual.
If an application other than a file system uses the volume, make any necessary application modifications before shrinking the LSM volume.
Note
Shrinking a (primary or secondary) volume with an active FPA log disables the FPA log. When you reattach a migrant plex to a primary volume in this case, regardless of which volume shrank, a full resynchronization will occur as though no FPA log ever existed.
5.4.10.1 Shrinking LSM Volumes by a Specific Amount
To shrink a volume by a specific amount:
# volassist [-g disk_group] -f shrinkby volume length_change
For example, to shrink a volume by 100K bytes:
# volassist -g dg1 -f shrinkby dataVol 100k
5.4.10.2 Shrinking LSM Volumes to a Specific Size
To shrink a volume to a specific size:
# volassist [-g disk_group] -f shrinkto volume new_length
For example, to shrink a volume from 2MB to 1MB:
# volassist -g dg1 -f shrinkto vol_A 1m
5.4.11 Changing LSM Volume Permission, User, and Group Attributes
By default, the device special files for LSM volumes are created with read and write permissions granted only to the owner. Databases or other applications that perform raw I/O might require device special files to have other settings for the permission, user, and group attributes.
Note
Use LSM commands instead of the chmod, chown, or chgrp commands to change the permission, user, and group attributes for LSM volumes. The LSM commands ensure that settings for these attributes are stored in the LSM database, which keeps track of all settings for LSM objects.
To change Tru64 UNIX user, group, and permission attributes:
# voledit [-g disk_group] set \
user=username group=groupname mode=permission volume
The following example changes the user, group, and permission attributes for an LSM volume named vol01 in the rootdg disk group:
# voledit set user=new_user group=admin mode=0600 vol01
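To confirm the new settings, you can list the volume's device special files (a quick check, assuming the volume is in rootdg):
# ls -l /dev/vol/rootdg/vol01 /dev/rvol/rootdg/vol01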
5.5 Managing Plexes
The following sections describe how to use LSM commands to manage plexes.
5.5.1 Displaying Plex Information
To display general information for all plexes:
# volprint -pt
Disk group: rootdg

PL NAME           VOLUME       KSTATE   STATE    LENGTH   LAYOUT  NCOL/WID  MODE
pl ka1-01         ka1          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl ka2-01         ka2          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl ka3-01         ka3          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl ka4-01         ka4          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl rootvol-01     rootvol      ENABLED  ACTIVE   524288   CONCAT  -         RW
pl swapvol-01     swapvol      ENABLED  ACTIVE   520192   CONCAT  -         RW
pl tst-01         tst          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl tst-02         tst          ENABLED  ACTIVE   2097152  CONCAT  -         RW
pl tst-03         tst          ENABLED  ACTIVE   LOGONLY  CONCAT  -         RW
pl vol-dsk25g-01  vol-dsk25g   ENABLED  ACTIVE   2296428  CONCAT  -         RW
pl vol-dsk25h-01  vol-dsk25h   ENABLED  ACTIVE   765476   CONCAT  -         RW
To display detailed information about a specific plex:
# volprint -lp plex
Disk group: rootdg

Plex:   tst-01
info:   len=2097152
type:   layout=CONCAT
state:  state=ACTIVE kernel=ENABLED io=read-write
assoc:  vol=tst sd=dsk0-01
flags:  complete
5.5.2 Adding a Data Plex (Mirroring LSM Volumes)
You can add a data plex to a volume to mirror the data in the volume. You cannot use a disk that is already used by the volume to create the mirror. A volume can have up to 32 plexes, which can be any combination of data and log plexes.
Adding a plex is one way to move the LSM volume to disks with better performance, if you specify the disks for LSM to use. You can also add a plex temporarily, so you can repair or replace disks in the original plex.
When you add a plex to a volume, the volume data is copied to the new plex. This process can take several minutes to several hours depending on the size of the volume.
Caution
To mirror the volumes for the boot disk and primary swap space (rootvol and swapvol) on a standalone system, you must use the volrootmir command (Section 3.4.1). The volrootmir command performs special operations that ensure the system can boot from either mirror of the root file system volumes.
To add a data plex:
# volassist mirror volume [disk]
Note
Adding a data plex does not add a DRL plex to the volume. Volumes with mirror plexes should have a DRL plex, except volumes for swap space. To add a DRL plex to a volume, see Section 5.5.3.
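For example, to mirror a hypothetical volume named vol01 onto disk dsk3 (omit the disk name to let LSM choose):
# volassist mirror vol01 dsk3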
5.5.3 Adding a Log Plex
You can add a log plex (DRL or RAID5 log) to a volume that has mirrored data plexes or a RAID5 data plex. LSM automatically creates a log of a size appropriate to the size of the volume (65 blocks per gigabyte of volume size, by default).
Note
A DRL plex is not supported on rootvol (on a standalone system), cluster_rootvol (in a cluster), or any swap volumes (standalone systems or clusters).
To improve performance and avoid the risk of losing both data and the log if the disk fails, create the log plex on a disk that is not in use by one of the volume's data plexes and, if possible, on a disk that does not already support the log for other volumes.
To add a log plex to a volume using any available space in the disk group:
# volassist [-g disk_group] addlog volume [disk]
To add a log to a volume by specifying a disk:
Make sure the disk is not in use by the same volume:
# volprint -vht volume
For example:
# volprint -vht genvol
Disk group: rootdg

TY NAME       ASSOC      KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
v  genvol     gen        ENABLED  204800  -       ACTIVE  -       -
pl genvol-01  genvol     ENABLED  204800  -       ACTIVE  -       -
sd dsk19-02   genvol-01  ENABLED  204800  0       -       -       -
pl genvol-02  genvol     ENABLED  204800  -       ACTIVE  -       -
sd dsk5-01    genvol-02  ENABLED  204800  0       -       -       -
In the previous output, the volume genvol uses space for its data plexes on disks dsk19 and dsk5.
As an optional but recommended step, make sure the disk does not already support log plexes for other volumes:
# volprint -ht | grep disk
For example:
# volprint -ht | grep dsk5
sd dsk5-03  fsvol-03   dsk5  322912  245760  0    dsk5  ENA
sd dsk5-01  genvol-02  dsk5  118112  204800  0    dsk5  ENA
sd dsk5-02  vol_r5-01  dsk5  32768   85344   4/0  dsk5  ENA
In the previous output, space on dsk5 is used by volumes fsvol, genvol, and vol_r5, but not for their logs. (However, dsk5 is already heavily used and would not be a good candidate for the new log plex.)
Add the log to the volume, specifying an eligible disk:
# volassist [-g disk_group] addlog volume disk
For example, to add a log plex to volume genvol on dsk11, which is not used by any other volume in the rootdg disk group:
# volassist addlog genvol dsk11
5.5.4 Adding Fast Plex Attach Logs
You can add one or more Fast Plex Attach (FPA) logs to a mirrored volume to provide redundancy for the FPA logs, just as you can add multiple dirty-region logs (DRLs) to a volume.
FPA logging is supported on any mirrored volume on a standalone system or a cluster, including the rootvol and cluster_rootvol volumes, but excluding volumes used for swap space.
Note
You cannot add an FPA log to a volume while a migrant plex is detached from the volume (attached to a secondary volume).
If the volume has a DRL log, the FPA log length will be the same as the DRL log length. If the volume has no DRL log, when you add the first FPA log to a mirrored volume, you can specify the length of the log with the loglen=length attribute, the number of FPA logs with the nfpalog=count attribute, and which disks it can or cannot use. To exclude storage, use the ! prefix (or \! in the C shell).
To add an FPA log plex to a volume, optionally specifying the number of FPA log plexes or the disk or disks to use:
# volassist addfpa volume [nfpalog=count] [disk...]
For example, to add two FPA log plexes on disks dsk5 and dsk6 to a volume named dvol_1:
# volassist addfpa dvol_1 nfpalog=2 dsk5 dsk6
The volume looks like the following:
Disk group: rootdg

TY NAME       ASSOC      KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  dvol_1     fsgen      ENABLED  245760   -       ACTIVE  -       -
pl dvol_1-01  dvol_1     ENABLED  245760   -       ACTIVE  -       -
sd dsk1-01    dvol_1-01  ENABLED  245760   0       -       -       -
pl dvol_1-02  dvol_1     ENABLED  245760   -       ACTIVE  -       -
sd dsk2-01    dvol_1-02  ENABLED  245760   0       -       -       -
pl dvol_1-03  dvol_1     ENABLED  LOGONLY  -       ACTIVE  -       -
sd dsk1-02    dvol_1-03  ENABLED  65       LOG     -       -       -
pl dvol_1-04  dvol_1     ENABLED  FPAONLY  -       ACTIVE  -       -
sd dsk5-01    dvol_1-04  ENABLED  65       FPA     -       -       -
pl dvol_1-05  dvol_1     ENABLED  FPAONLY  -       ACTIVE  -       -
sd dsk6-01    dvol_1-05  ENABLED  65       FPA     -       -       -
When you create a backup volume using the volassist snapfast command (Section 5.4.2.2), LSM tracks the changes to the primary volume in all the FPA logs associated with the volume.
5.5.5 Detaching a Plex
Detaching a plex preserves the plex's association to a volume. You can detach a plex to perform temporary operations with it, with the intent of reattaching the plex to the same volume later.
Caution
If you detach all but one data plex from a mirrored volume, the volume's data is no longer redundant.
Note the following:
If you detach the last log plex from a mirrored volume and the system fails, LSM resynchronizes the contents of all the plexes when the system restarts. The volume remains usable during this operation, but its performance might be significantly degraded.
If you detach the last RAID5 log plex from a RAID5 volume and the system fails, LSM recalculates the parity for the entire volume; this involves reading back all the volume data, regenerating the parity for each stripe, and rewriting each stripe in the plex. The volume remains usable during this operation, but its performance might be significantly degraded.
If you detach the last FPA log plex from a primary volume, FPA logging is disabled. When you return the migrant plex to the primary volume, a full resynchronization occurs, as if FPA were never enabled. To disable FPA logging on a volume, see Section 5.4.7.
To detach a plex from a volume:
# volplex [-f] det plex
Use the force (-f) option to detach a volume's last complete, enabled data plex or the FPA log plex from a primary volume.
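For example, to detach a hypothetical plex named vol01-02:
# volplex det vol01-02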
5.5.6 Dissociating a Plex
Dissociating a plex removes its association to a volume. You can dissociate a plex from a volume to unmirror a volume (remove all but one data plex) or reduce the number of mirrors or logs in a volume, usually with the intent of completely removing the plex and reusing its disk space for another purpose. To support this, you can dissociate and recursively remove a plex and its components in one command. Alternatively, you can dissociate a plex and use it to create another volume; for example, to back up a volume.
Note
The recommended method for backing up a mirrored volume is with the volassist snapfast command (to use FPA logging) or the volassist snapshot command. For more information, see Section 5.4.2.
Note the following:
If you dissociate the last log plex from a mirrored volume and the system fails, LSM resynchronizes the contents of all the plexes when the system restarts. The volume remains usable during this operation, but its performance might be significantly degraded.
If you dissociate the last RAID5 log plex from a RAID5 volume and the system fails, LSM recalculates the parity for the entire volume; this involves reading back all the volume data, regenerating the parity for each stripe, and rewriting each stripe in the plex. The volume remains usable during this operation, but its performance might be significantly degraded.
If you dissociate the last FPA log plex from a primary volume, FPA logging is disabled. When you return the migrant plex to the primary volume, a full resynchronization occurs, as if FPA were never enabled. To disable FPA logging on a volume, see Section 5.4.7.
Caution
If you dissociate all but one data plex from a mirrored volume, the volume's data is no longer redundant.
To remove the last data plex from a volume (remove a volume completely), see Section 5.4.6.
To dissociate a plex from a volume:
# volplex [-f] dis plex
Use the force (-f) option to dissociate a volume's last complete, enabled data plex or the FPA log plex from a primary volume.
To dissociate and recursively remove a plex from a volume:
# volplex [-f] -o rm dis plex
Recursively dissociating a plex removes both the plex and its subdisks.
The disks remain under LSM control.
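For example, to dissociate and remove a hypothetical plex named vol01-02 together with its subdisks:
# volplex -o rm dis vol01-02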
5.5.7 Reattaching Plexes
If you detached or dissociated (but did not recursively remove) a data plex or log plex from a volume, you can reattach it to the volume. You can attach a detached plex only to its original volume.
To reattach a detached data or log plex to a volume:
# volrecover volume
To reattach a dissociated data or log plex to a volume:
# volplex att volume plex
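For example, assuming a hypothetical volume vol01 with a detached plex and a dissociated plex named vol01-02:
# volrecover vol01
# volplex att vol01 vol01-02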
5.5.8 Changing the Plex Layout of LSM Volumes
For volumes that use one or more concatenated or striped plexes, you can change the plex layout from concatenated to striped or vice versa. For example, you can change from a concatenated plex to a striped plex to improve performance.
The steps involved include adding a plex to the volume and then dissociating the original plex or plexes from the volume.
5.5.8.1 Changing the Plex Layout from Concatenated to Striped
To change the plex layout of a volume from concatenated to striped:
Display the size of the volume whose plex layout you want to change:
# volprint [-g disk_group] -ht volume
Disk group: dg1

V  NAME     USETYPE  KSTATE   STATE   LENGTH  READPOL    PREFPLEX
PL NAME     VOLUME   KSTATE   STATE   LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME     PLEX     DISK     DISKOFFS LENGTH [COL/]OFF  DEVICE    MODE

v  volC     fsgen    ENABLED  ACTIVE  204800  SELECT     -
pl volC-01  volC     ENABLED  ACTIVE  204800  CONCAT     -         RW
sd dsk2-01  volC-01  dsk2     0       204800  0          dsk2      ENA
In this example, the volume volC has one concatenated data plex of 204800 sectors (100 MB).
Verify there is enough space in the same disk group to mirror the volume:
# voldg [-g disk_group] free
Add a new plex with the characteristics you want to the volume.
For example, to convert the volume volC to a striped volume, add a striped plex to the volume, specifying the number of columns and, optionally, the disks you want LSM to use:
# volassist -g dg1 mirror volC layout=stripe \
ncolumn=2 [disk...]
The volume now looks similar to the following, with the original concatenated plex and the new striped plex:
# volprint -g dg1 -ht volC
Disk group: dg1

V  NAME     USETYPE  KSTATE   STATE   LENGTH  READPOL    PREFPLEX
PL NAME     VOLUME   KSTATE   STATE   LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME     PLEX     DISK     DISKOFFS LENGTH [COL/]OFF  DEVICE    MODE

v  volC     fsgen    ENABLED  ACTIVE  204800  SELECT     volC-02
pl volC-01  volC     ENABLED  ACTIVE  204800  CONCAT     -         RW
sd dsk2-01  volC-01  dsk2     0       204800  0          dsk2      ENA
pl volC-02  volC     ENABLED  ACTIVE  204800  STRIPE     2/128     RW
sd dsk3-01  volC-02  dsk3     0       102400  0/0        dsk3      ENA
sd dsk5-01  volC-02  dsk5     0       102400  1/0        dsk5      ENA
Remove and dissociate the original plex from the volume:
# volplex [-g disk_group] -o rm dis old_plex
For example, to remove the original concatenated plex volC-01 from volC:
# volplex -g dg1 -o rm dis volC-01
If the volume has only one plex and you add another plex with the volassist mirror command, the new plex will have the same layout as the current plex (subject to the free disk space in the disk group and other restrictions).
If the volume still has plexes with the original layout type, repeat the process of adding new plexes with the layout you want and removing the original plexes until all plexes use the new layout.
You can also add a Dirty Region Log (Section 5.5.3) to the volume if it does not have one.
5.5.8.2 Changing the Plex Layout from Striped to Concatenated
To change the plex layout of a volume from striped to concatenated:
Display the size of the volume whose plex layout you want to change:
# volprint [-g disk_group] -ht volume
Disk group: dg1

V  NAME      USETYPE  KSTATE   STATE   LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME   KSTATE   STATE   LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH [COL/]OFF  DEVICE    MODE

v  volS      fsgen    ENABLED  ACTIVE  204800  SELECT     volS-01
pl volS-01   volS     ENABLED  ACTIVE  204800  STRIPE     4/128     RW
sd dsk10-01  volS-01  dsk10    0       51200   0/0        dsk10     ENA
sd dsk11-01  volS-01  dsk11    0       51200   1/0        dsk11     ENA
sd dsk12-01  volS-01  dsk12    0       51200   2/0        dsk12     ENA
sd dsk14-01  volS-01  dsk14    0       51200   3/0        dsk14     ENA
In this example, the volume volS has one striped data plex of 204800 sectors (100 MB).
Verify there is enough space in the same disk group to mirror the volume:
# voldg [-g disk_group] free
Add a new plex with the characteristics you want to the volume.
For example, to convert the volume volS to a concatenated volume, add a concatenated mirror to the volume, optionally specifying the disks you want LSM to use:
# volassist -g dg1 mirror volS layout=nostripe [disk...]
The volume now looks similar to the following, with the original striped plex and the new concatenated plex:
# volprint -g dg1 -ht volS
Disk group: dg1

V  NAME      USETYPE  KSTATE   STATE   LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME   KSTATE   STATE   LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME      PLEX     DISK     DISKOFFS LENGTH [COL/]OFF  DEVICE    MODE

v  volS      fsgen    ENABLED  ACTIVE  204800  SELECT     volS-01
pl volS-01   volS     ENABLED  ACTIVE  204800  STRIPE     4/128     RW
sd dsk10-01  volS-01  dsk10    0       51200   0/0        dsk10     ENA
sd dsk11-01  volS-01  dsk11    0       51200   1/0        dsk11     ENA
sd dsk12-01  volS-01  dsk12    0       51200   2/0        dsk12     ENA
sd dsk14-01  volS-01  dsk14    0       51200   3/0        dsk14     ENA
pl volS-02   volS     ENABLED  ACTIVE  204800  CONCAT     -         RW
sd dsk19-01  volS-02  dsk19    0       204800  0          dsk19     ENA
Remove and dissociate the original plex from the volume:
# volplex [-g disk_group] -o rm dis old_plex
For example, to remove the original striped plex volS-01 from volS:
# volplex -g dg1 -o rm dis volS-01
If the volume has only one plex and you add another plex with the volassist mirror command, the new plex will have the same layout as the current plex (subject to the free disk space in the disk group and other restrictions).
If the volume still has plexes with the original layout type, repeat the process of adding new plexes with the layout you want and removing the original plexes until all plexes use the new layout.
You can also add a Dirty Region Log (Section 5.5.3) to the volume if it does not have one.
5.6 Managing Subdisks
The following sections describe how to use LSM commands to manage subdisks.
5.6.1 Displaying Subdisk Information
To display general information for all subdisks:
# volprint -st
Disk group: rootdg

SD NAME     PLEX        DISK  DISKOFFS  LENGTH  [COL/]OFF  DEVICE  MODE
sd dsk2-01  vol_mir-01  dsk2  0         256     0          dsk2    ENA
sd dsk3-02  vol_mir-03  dsk3  0         65      LOG        dsk3    ENA
sd dsk3-01  vol_mir-02  dsk3  65        256     0          dsk3    ENA
sd dsk4-01  p1          dsk4  17        500     0          dsk4    ENA
sd dsk4-02  p2          dsk4  518       1000    0          dsk4    ENA
To display detailed information about a specific subdisk:
# volprint -l subdisk
For example:
# volprint -l dsk12-01
Disk group: rootdg

Subdisk:  dsk12-01
info:     disk=dsk12 offset=0 len=2560
assoc:    vol=vol5 plex=vol5-02 (offset=0)
flags:    enabled
device:   device=dsk12 path=/dev/disk/dsk12g diskdev=82/838
5.6.2 Joining Subdisks
You can join two or more subdisks to form a single, larger subdisk. Subdisks can be joined only if they belong to the same plex and occupy adjacent regions of the same disk. For a volume with striped plexes, the subdisks must be in the same column. The joined subdisk can have a new subdisk name or retain the name of one of the subdisks being joined.
To join subdisks:
# volsd join subdisk1 subdisk2 new_subdisk
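For example, assuming hypothetical subdisks dsk8-01 and dsk8-02 that occupy adjacent regions of dsk8 in the same plex, the following joins them and retains the first subdisk's name:
# volsd join dsk8-01 dsk8-02 dsk8-01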
5.6.3 Splitting Subdisks
You can divide a subdisk into two smaller subdisks. After the subdisk is split, you can move the data in the smaller subdisks to different disks. This is useful for reorganizing volumes or for improving performance. The new, smaller subdisks occupy adjacent regions within the space on the disk that the original subdisk occupied.
You must specify a size for the first subdisk; the second subdisk consists of the rest of the space in the original subdisk.
If the subdisk to be split is associated with a plex, both of the resultant subdisks are associated with the same plex. You cannot split a log subdisk.
To split a subdisk and assign each subdisk a new name:
# volsd -s size split original_subdisk \
new_subdisk1 new_subdisk2
To split a subdisk and retain the original name for the first subdisk and assign a new name to the second subdisk:
# volsd -s size split original_subdisk new_subdisk
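For example, assuming a hypothetical 204800-sector subdisk named dsk8-01, the following splits it into two 102400-sector subdisks, keeping the original name for the first and naming the second dsk8-02:
# volsd -s 102400 split dsk8-01 dsk8-02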
5.6.4 Moving Subdisks to a Different Disk
You can move the data in subdisks to a different disk to improve performance. The disk space occupied by the data in the original subdisk is returned to the free space pool.
Ensure that the following conditions are met before you move data in a subdisk:
Both source and destination subdisks must be the same size.
The source subdisk must be part of an active plex on an active volume.
The destination subdisk must not be associated with any other plex.
To move data from one subdisk to another:
Display the size of the subdisk you want to move:
# volprint subdisk
For example:
# volprint -l dsk20-01
Disk group: dg1

Subdisk:  dsk20-01
info:     disk=dsk20 offset=0 len=204800
assoc:    vol=volS plex=volS-01 (offset=0)
flags:    enabled
device:   device=dsk20 path=/dev/disk/dsk20g diskdev=81/1350
Create a new subdisk of the same size on a different disk:
# volmake [-g disk_group] sd subdisk_name disk len=length
For example:
# volmake -g dg1 sd dsk11-01 dsk11 len=204800
Move the data in the old subdisk to the new subdisk:
# volsd mv source_subdisk target_subdisk
For example:
# volsd -g dg1 mv dsk20-01 dsk11-01
This leaves the old subdisk unassociated with any volume. If it is not needed, you can remove the subdisk (Section 5.6.5). Removing the subdisk returns that space to the free pool.
5.6.5 Removing Subdisks
You can remove a subdisk that is not associated with or needed by an LSM volume. Removing a subdisk returns the disk space to the free space pool in the disk group. To remove a subdisk, you must dissociate the subdisk from a plex or volume, then remove it.
To remove a subdisk:
Display information about the subdisk to identify any volume or plex associations:
# volprint -l subdisk
If the subdisk is associated with a volume, the following information is displayed:
Disk group: rootdg

Subdisk:  dsk9-01
info:     disk=dsk9 offset=0 len=2048
assoc:    vol=newVol plex=myplex (column=1 offset=0)
flags:    enabled
device:   device=dsk9 path=/dev/disk/dsk9g diskdev=82/646
If the subdisk has no associations to any plex or volume, the following information is displayed:
Disk group: dg1

Subdisk:  dsk20-01
info:     disk=dsk20 offset=0 len=204800
assoc:    vol=(dissoc) plex=(dissoc)
flags:    enabled
device:   device=dsk20 path=/dev/disk/dsk20g diskdev=81/1350
Do one of the following to remove the subdisk:
If the subdisk is associated with a volume:
# volsd [-g disk_group] -o rm dis subdisk
If the subdisk is not part of a volume and has no associations:
# voledit [-g disk_group] rm subdisk
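For example, based on the two previous displays, dsk9-01 is associated with a volume in the rootdg disk group, and dsk20-01 in the dg1 disk group has no associations:
# volsd -o rm dis dsk9-01
# voledit -g dg1 rm dsk20-01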