The following topics are covered in this chapter:
Upgrading a system or cluster with an LSM configuration from Tru64 UNIX Version 4.0 to Version 5.0 or higher (Section 7.1)
Adding a system with LSM to a cluster (Section 7.2)
Moving disk groups between systems (Section 7.3)
Unencapsulating the boot disk (standalone system) (Section 7.4)
Migrating AdvFS domains from LSM volumes to physical storage (Section 7.5)
Unencapsulating cluster members' swap devices (Section 7.6)
Uninstalling the LSM software (Section 7.7)
7.1 Upgrading an LSM Configuration
If you are currently using LSM on a system running Tru64 UNIX Version 4.0 and you want to preserve your current LSM configuration for use with Tru64 UNIX Version 5.0 or higher, you must:
Increase the size of any block-change logs (BCLs) to at least two blocks per gigabyte of volume size for a standalone system or at least 65 blocks per gigabyte of volume size for a TruCluster Server environment (Section 7.1.1).
Back up the current LSM configuration (Section 7.1.2).
Optionally, deport any disk groups that you do not want to upgrade (Section 7.1.3).
Upgrade the LSM software (Section 7.1.4).
Manually convert any Version 4.0 disk groups that you deported before the upgrade (Section 7.1.5).
Optimize the restored LSM configuration databases (Section 7.1.6).
7.1.1 Increasing the Size of BCLs
The block-change logging (BCL) feature supported in LSM in Tru64 UNIX Version 4.0 was replaced with the dirty-region logging (DRL) feature in Version 5.0.
When you perform an upgrade installation, BCLs are automatically converted to DRLs if the BCL subdisk is at least two blocks. If the BCL subdisk is one block, logging is disabled after the upgrade installation.
Note
The conversion of BCLs to DRLs is not reversible.
Before you upgrade, increase the size of the BCLs to at least two blocks per gigabyte of volume size for standalone systems or at least 65 blocks per gigabyte of volume size for a TruCluster environment. If this is not possible, then after the upgrade you can add a new log to those volumes using volassist addlog, which creates a DRL of the appropriate size by default.
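For example, if a one-block BCL cannot be enlarged before the upgrade, the post-upgrade alternative looks like the following; the disk group and volume names here are hypothetical:
# volassist -g datadg addlog vol01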
For information on increasing the size of BCLs, see the LSM documentation
for your current operating system version.
7.1.2 Backing Up the LSM Configuration
Backing up the LSM configuration creates a file that describes all the LSM objects in all disk groups. In case of a catastrophic failure, LSM can use this file to restore the LSM configuration.
Caution
The following procedure backs up only the configuration, not the volume data. You might also want to back up the volume data before performing the upgrade.
To back up the LSM configuration:
Enter the following command:
# volsave [-d dir]
LSM configuration being saved to /usr/var/lsm/db/LSM.20020312143345.hostname
volsave does not save configuration for volumes used for
root, swap, /usr or /var.
LSM configuration for following system disks not saved:
dsk3 dsk0a dsk2a dsk0b dsk2b dsk0g dsk0g
LSM Configuration saved successfully to /usr/var/lsm/db/LSM.20020312143345.hostname
By default, LSM configuration information is saved to a time-stamped file called a description set in the /usr/var/lsm/db directory. Make a note of the location and name of the file. You will need this information to restore the LSM configuration after you upgrade the Tru64 UNIX operating system software.
Optionally, confirm that the LSM configuration was saved:
# ls /usr/var/lsm/db/LSM.date.hostname
dg1.d      newdg.d    volboot
header     rootdg.d   voldisk.list
Save the LSM configuration to tape or other removable media.
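For example, to copy the saved description sets to a tape device (the device name here is hypothetical, and tar is only one possible archive tool):
# tar -cvf /dev/tape/tape0_d0 /usr/var/lsm/db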
7.1.3 Deporting Disk Groups (Optional)
The internal metadata format of LSM in Tru64 UNIX Version 5.0 and higher is not compatible with the metadata format of LSM in Tru64 UNIX Version 4.0. If an older metadata format is detected during the upgrade procedure, LSM automatically upgrades the old format to the new format. If you do not want certain disk groups to be upgraded, deport them before you upgrade LSM.
You cannot deport the rootdg disk group; rootdg must be converted to the new format to allow use of the LSM configuration on the upgraded system. After rootdg is converted, it cannot be used again on a system running Version 4.0.
To deport a disk group:
# voldg deport disk_group
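For example, to deport two disk groups (hypothetical names) that you do not want converted during the upgrade:
# voldg deport olddg1
# voldg deport olddg2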
If you later import a deported disk group, LSM upgrades the metadata format.
7.1.4 Upgrading the LSM Software
LSM comprises three software subsets, which are located on the CD-ROM containing the base operating system software for the Tru64 UNIX product kit.
Depending on the operating system versions you are upgrading from and to, you might have to perform a full installation instead of an update installation or a succession of update installations. (For a description of the supported update paths, see the Installation Guide.)
During an update installation, you do not need to specify the LSM subsets if they are already installed on the system you are upgrading. The upgrade installation process automatically upgrades all subsets installed on the system.
If LSM is not already installed on the system you are upgrading from Tru64 UNIX Version 4.0x to Version 5.0x, you cannot install it during an update installation. To install LSM, you must do a full operating system installation.
During a full installation, you have the option to install the base system's root (/), /usr, and /var file systems and swap space to LSM volumes. If you will either use this system to create a cluster or add this system to a cluster, skip this option. The base system's root LSM volumes are not used in a cluster.
Caution
Be careful that you do not install the operating system on one of the disks currently part of your LSM configuration; the installation process completely overwrites anything on the disk and you will lose your configuration.
After the update or full installation, the rootdg disk group is converted and ready to use. Any disk groups that remained connected to the system (and that were not deported) are also converted and available.
7.1.5 Manually Converting Version 4.0 Disk Groups
If you deported disk groups before upgrading a system or cluster from Version 4.0 to Version 5.0 or higher, you can manually import and convert those disk groups.
Disk groups that are connected to a system before it restarts, or before cluster creation, are automatically imported. Their metadata format is updated, and the vollogcnvt utility converts BCLs to DRLs where possible. (For more information, see vollogcnvt(8).)
The following procedure applies only to disk groups that you deported before upgrading the operating system and have decided to import and convert.
This procedure:
Converts the disk group's internal metadata format to the format used in Version 5.0 and higher.
Converts BCL logs to DRL logs for all volumes with a BCL of at least two blocks.
Notifies you about volumes whose BCL logs could not be converted to DRLs.
The volumes are usable but are not logging. To enable logging, you must manually remove the BCL subdisk and add a new DRL log.
To manually import and convert disk groups:
Physically connect the storage to the system or cluster.
In a cluster, connect the storage so that it is accessible by all cluster members.
Run the hwmgr command to update the system or cluster with the new disk information. For more information, see hwmgr(8).
Import and convert the disk group.
# voldg -o convert_old import disk_group
The disk group is imported and the following information is displayed for volumes that use BCLs:
lsm:voldg:WARNING:Logging disabled on volume. Need to convert to DRL.
lsm:voldg:WARNING:Run the vollogcnvt command to automatically convert logging.
Convert any BCLs to DRLs for each disk group:
# vollogcnvt -g disk_group
If a BCL cannot be converted to a DRL and you want to restore logging for the volume:
Identify the disabled BCL subdisk:
# volprint [-g disk_group] volume
Remove the BCL subdisk:
# volsd [-g disk_group] -o rm dis subdisk
Add a new log to the volume:
# volassist [-g disk_group] addlog volume
Start the volumes in each newly imported disk group:
# volrecover -g disk_group
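As a minimal end-to-end sketch, assuming a hypothetical deported disk group named datadg whose BCLs can all be converted, the sequence is:
# voldg -o convert_old import datadg
# vollogcnvt -g datadg
# volrecover -g datadg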
7.1.6 Optimizing Restored LSM Configuration Databases (Optional)
If you restored an LSM configuration on a system that you upgraded from Tru64 UNIX Version 4.0 to Tru64 UNIX Version 5.0 or higher, you can modify the configuration databases to allow LSM to automatically manage their number and placement.
Note
This procedure is an optimization and is not required.
On systems running Tru64 UNIX Version 4.0 and using LSM, you had to explicitly configure between four and eight disks per disk group to have enabled databases. In Version 5.0 and higher, by default all LSM disks are configured to contain copies of the database, and LSM automatically maintains the appropriate number of enabled copies. The distinction between an enabled and disabled copy is as follows:
Disabled The disk's private region is configured to contain a copy of the configuration database, but this copy might be dormant (inactive). LSM enables a copy as needed; for example, when a disk with an enabled copy is removed or fails.
Enabled The disk's private region is configured to contain a copy of the configuration database, and this copy is active. All LSM configuration changes are recorded in each enabled copy of the configuration database as they occur.
Configure the private regions on all your LSM disks to contain one copy of the configuration database, unless you have a specific reason for not doing so, such as:
The disk is old or slow.
The disk is on a bus that is heavily used.
The private region is too small (less than 4096 blocks) to contain a copy of the configuration database (such as disks that have been migrated from earlier releases of LSM).
There is some other significant reason why the disk should not contain a copy.
Enabling the configuration database does not use additional space on the disk; it merely sets the number of enabled copies in the private region to 1.
To set the number of configuration database copies to 1:
# voldisk moddb disk nconfig=1
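For example, assuming hypothetical disk names, a short shell loop applies the setting to several LSM disks in turn:
# for d in dsk5 dsk6 dsk7
> do
>   voldisk moddb $d nconfig=1
> done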
For disk groups containing three or fewer disks, each disk should have two copies of the configuration database to provide sufficient redundancy. This is especially important for systems with a small rootdg disk group and one or more larger secondary disk groups.
For more information on modifying the LSM configuration databases, see Section 5.3.3.
7.2 Adding a System with LSM to a Cluster
You can add a standalone system with LSM volumes to an existing cluster and incorporate its LSM volumes in the cluster, whether or not the cluster is also using LSM. Alternatively, you can move only the disk groups from one system to another or from a system to a cluster.
Note
If the standalone system is not running at least Tru64 UNIX Version 5.0, see Section 7.1 to upgrade the system and its LSM configuration before adding the system to a cluster.
Before you begin, decide what you want to do with the standalone system's rootdg disk group. There can be only one rootdg disk group in an LSM configuration.
If LSM is running on the cluster and you want to use the disks in the standalone system's rootdg disk group, you must either rename the former rootdg as you import it to the cluster or add the LSM disks to different disk groups.
If LSM is not running on the cluster and you do not want to reuse the rootdg on the cluster, then you need to find one or more unused disks to create the rootdg for the cluster. For information on identifying unused disks, see Section 2.3.
Review the following information and recommendations, and take any necessary actions:
There is no relationship between the base system LSM volumes for the root (/), /usr, and /var file systems and the cluster's cluster_root, cluster_usr, and cluster_var domains.
Likewise, there is no relationship between the primary swap volume (swapvol) on a standalone system and the private swap space for each member. There is no primary swap space in a cluster; each member has its own swap space.
The base system volumes are not used, and the file systems using those volumes are not available until explicitly mounted. If you halt the member and boot the base operating system again, these file systems are available.
All other LSM volumes in imported disk groups are available to all cluster members, as long as the storage is accessible by all members. (They should be shared.)
Before adding the system to the cluster you can optionally unmirror any mirrored system volumes to make that disk space available for other uses in the cluster.
You do not need to completely unencapsulate the root file system (remove the boot partitions from LSM volumes entirely); you can remove just the mirrors and return that disk space to LSM's free space pool, then create the cluster.
However, if the standalone system will not be used again as such, you can delete all the base system volumes and reinitialize the disks for LSM as sliced or simple disks before adding the system to a cluster.
If any of the standalone system's LSM disks are internal, they will be private to that member. If that member crashes, the cluster loses access to those volumes. If you want that data to be available to the rest of the cluster, move the data on those disks to storage that is shared across the cluster.
Likewise, consider making any external storage connected to the standalone system available to the whole cluster, or use those disks only for data that is member specific.
For performance reasons, the LSM configuration on the standalone system might use nondefault values and attributes.
For example, if mirrored volumes on the standalone system have log subdisks that are less than 65 blocks per gigabyte of volume size, remove the old logs and add new ones, which will be sized appropriately by default, to support migration to a cluster environment. Otherwise, logging is disabled on these volumes, but the volumes themselves are usable.
To add a standalone system using LSM to a cluster not running LSM:
If applicable, reconfigure the log subdisks on all mirrored volumes to use the default DRL size.
Identify mirrored volumes with nondefault-size log plexes:
# volprint -pht | grep -p LOGONLY
Information similar to the following is displayed. In this example, the log plex vol1-03 is only 2 blocks long, but the log plex vol2-03 is 65 blocks:
pl vol1-03     vol1      ENABLED  ACTIVE   LOGONLY  CONCAT  -       RW
sd dsk27a-01   vol1-03   dsk27a   0        2        LOG     dsk27a  ENA
pl vol2-03     vol2      ENABLED  ACTIVE   LOGONLY  CONCAT  -       RW
sd dsk27a-02   vol2-03   dsk27a   2        65       LOG     dsk27a  ENA
Delete the nondefault-size DRL plex from its volume:
# volplex [-g disk_group] -o rm dis log_plex
For example:
# volplex -o rm dis vol1-03
Add a new DRL plex to the volume, which will automatically be sized correctly:
# volassist addlog volume
For example:
# volassist addlog vol1
Stop all volumes in each disk group:
# volume -g disk_group stopall
Deport each disk group except rootdg:
# voldg deport disk_group
Display the disk group ID for rootdg:
# voldg list rootdg | grep id
dgid: 1007697459.1026.hostname
Make a note of the disk group ID. You will need this information to import the rootdg disk group on the cluster.
Halt the system and add it to the cluster. Make sure all its storage is connected to the cluster (preferably as shared storage).
This step involves using the clu_add_member command and possibly other hardware-specific or cluster-specific operations that are not covered here.
Run the hwmgr command to update the cluster with the new disk information. For more information, see hwmgr(8).
Initialize LSM using one of the following methods:
To reuse the standalone system's rootdg as the rootdg for the cluster (a consolidated command sketch follows this procedure):
Set up the LSM device special files:
# volinstall
Start LSM in disabled mode:
# vold -m disable
Initialize the LSM daemons with the source system's host name:
# voldctl init hostname
Make sure the source system's rootdg storage is attached to the cluster, preferably as shared storage.
Initialize the source system's rootdg disk group, making it the rootdg disk group for the cluster:
# voldg init rootdg
This allows the rootdg disk group to be autoimported, but it still has the source system's host name.
Restart LSM in enabled mode:
# vold -k
This automatically imports all the other disk groups and, if necessary, converts their metadata to the latest format and sets them to be shared.
Reset the host name in the /etc/vol/volboot file (and consequently on all disks in the new rootdg disk group) to the cluster alias name:
# voldctl hostid cluster_alias
To create a new rootdg for the cluster:
Specify at least two disks or disk partitions; for example:
# volsetup dsk19 dsk20
Import and rename the old rootdg using its disk group ID:
# voldg -o shared -n newname import id=rootdg_dgid
Import the other disk groups and set them to be shared:
# voldg -o shared import disk_group
Synchronize LSM throughout the cluster by entering the following command on all members except the member where you performed step 8:
# volsetup -s
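The following is a minimal sketch of the reuse-rootdg path in the LSM initialization step, run on one cluster member; hostname is the source system's host name and cluster_alias is the cluster alias, both placeholders:
# volinstall
# vold -m disable
# voldctl init hostname
# voldg init rootdg
# vold -k
# voldctl hostid cluster_alias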
7.3 Moving Disk Groups Between Systems
You can move an LSM disk group between standalone systems, between clusters, from a standalone system to a cluster, and vice versa, and retain the LSM objects and data on those disks, as long as either of the following is true:
The disk group was deported with the voldg command.
The system originally using the disk group was cleanly shut down. (For rootdg, this must be true because you cannot deport rootdg.)
A disk group that requires recovery (for example, due to a system crash) cannot be moved between systems. The disk group must first be recovered in the environment where it was last used.
Moving a disk group between systems causes the new host system to assign new disk access names to the disks. For LSM nopriv disks (created when you encapsulate disks or partitions), the association between the original disk access name and its disk media name might be lost or might be reassociated incorrectly. To prevent this, you must manually reassociate the disk media names with the new disk access names. For LSM sliced and simple disks, LSM manages this reassociation.
If possible, before moving the disk group, migrate the data from nopriv disks to sliced or simple disks, which have a private region and will be reassociated automatically. For more information on moving data to a different disk, see Section 5.1.5. If you cannot move the data to sliced or simple disks, see Section 7.3.3.
You can change the disk group's name or host ID when you move it to the new host; for example, to reduce the chance for confusion if the new host has a disk group with a similar name. You must change the disk group's name if the new host has a disk group with the same name.
You can change the disk group's host ID to that of the receiving system as you deport it from the original system. This allows the system receiving the disk group to import it automatically when it starts. If the new host is already running, the disk group's host ID is changed when you import the disk group on the new host.
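For example, to deport a disk group and assign it the new host's ID as you do so (datadg and hostb are hypothetical names):
# voldg -h hostb deport datadg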
7.3.1 Moving the rootdg Disk Group to Another System
You can move the rootdg disk group from one standalone system to another with the following restrictions:
If the system's root disk and swap space are encapsulated to LSM volumes, remove them from LSM control. The root file system cannot be reused on another system.
For information on removing the system volumes, see Section 7.4.
If other system-specific file systems use LSM volumes, also remove them from LSM control.
There can be no duplication of file systems on one system. Only file systems or applications that are not critical to the system's operation, or that do not exist on the target system, can be moved between systems.
For more information on unencapsulating AdvFS domains or UFS file systems, see Section 5.4.6.1.
If rootdg contains any internal system disks, remove those disks from rootdg (and, if necessary, from LSM control).
You cannot deport rootdg; to move it, either shut down the system or stop running LSM on the system. This involves stopping all volumes (and stopping access from any file systems and applications that use those volumes) and stopping the LSM daemons. After the rootdg disk group is removed from a system, the system can no longer run LSM (unless you create a new rootdg).
If you do not plan to run LSM again on the system, edit the /etc/inittab file to remove the LSM startup routines:
lsmr:s:sysinit:/sbin/lsmbstartup -b </dev/console >/dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup </dev/console >/dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n </dev/console >/dev/console 2>&1 ##LSM
Optionally, recursively delete the /dev/vol/ and /etc/vol/ directories.
7.3.2 Moving Disk Groups to Another System
To move a disk group other than rootdg to another system (a consolidated example follows this procedure):
Stop all activity on the volumes in the disk group and unmount any file systems.
Deport the disk group from the originating system:
To deport the disk group with no changes:
# voldg deport disk_group
To deport the disk group and assign it a new host ID or a new name:
# voldg [-n new_name] [-h new_hostID] deport disk_group
Physically move the disks to the new host system.
Enter the following command on the new host system to scan for the disks:
# hwmgr scan scsi
The hwmgr command returns the prompt before it completes the scan. You need to confirm that the system has discovered the new disk before continuing, such as by entering the hwmgr show scsi command until you see the new device.
Make LSM aware of the newly added disks:
# voldctl enable
Import the disk group to the new host:
If the disk group contains nopriv disks whose disk media names no longer correspond to their original disk access names, you might need to use the force (-f) option.
If the disk group is moving from a standalone system to a cluster, use the -o shared option.
If the disk group is moving from a cluster to a standalone system, use the -o private option.
If the disk group has the same name as another disk group on the system, rename it as you import it.
# voldg [-f] [-o shared|private] [-n new_name] import \
  disk_group
If applicable, associate the disk media names for the nopriv disks to their new disk access names:
# voldg -g disk_group -k adddisk \
  disk_media_name=disk_access_name...
Recover and start all startable volumes in the imported disk group. The following command performs any necessary recovery operations as a background task after starting the volumes:
# volrecover -g disk_group -sb
Optionally, identify any detached plexes.
# volinfo -p
If the output lists any volumes as Unstartable, see Section 6.5.2.2 for information on how to proceed.
If necessary, start the remaining Startable volumes:
# volume -g disk_group start volume1 volume2...
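The following is a minimal consolidated example for a hypothetical disk group named datadg moved between two standalone systems, with no nopriv disks involved.
On the originating system:
# voldg deport datadg
On the new system, after physically moving the disks:
# hwmgr scan scsi
# voldctl enable
# voldg import datadg
# volrecover -g datadg -sb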
7.3.3 Moving Disk Groups with nopriv Disks to Another System
When LSM disks are moved to a different system or added to a cluster, the operating system assigns them new device names (LUNs) that are not likely to be the same as their previous device names. LSM bases the disk access name on the device name and maintains an association between the disk access name and the disk media name, which can be anything you assign, such as big_disk. If you move a disk group to another system, LSM uses this association (stored in the configuration database) to remap the disk media names to the new device names (disk access names) for sliced and simple disks, but not for nopriv disks.
Moving a disk group with multiple nopriv disks involves a lengthy, careful process that includes identifying the disks while they are connected to the original system or cluster and making a detailed record of the nopriv disks with enough information to help you properly identify them in the new environment.
You might create a list of the disk access name, disk media name, and disk group name for each nopriv disk and physically label each disk (with a sticker or adhesive tape, for example) with this same information. Then you can move the disks to the new environment and use commands such as hwmgr flash light to physically locate the disks. When you determine their new disk access names, you can import the disk group and then associate the old disk media names to the new disk access names for the nopriv disks.
Because nopriv disks require additional effort to manage, we strongly advise that you use them only to place data under LSM control (through encapsulation) and then immediately move those volumes to sliced or simple disks.
Before you begin, you might want to review the syntax for hwmgr flash light by displaying its online help:
# hwmgr -h flash light
Usage: hwmgr flash light
[ -dsf <device-special-filename> ]
[ -bus <scsi-bus> -target <scsi-target> -lun <scsi-lun> ]
[ -seconds <number-of-seconds> ] (default is 30 seconds)
[ -nopause ] (do not pause between flashes)
The "flash light" operation works only on SCSI disks.
You can use the -seconds option with the -nopause option to cause the disk's light to remain on constantly for the length of time you specify. Without the -nopause option, the light flashes on and off for the specified duration. In a busy environment, you might not be able to tell whether a light is flashing because of your command or because of I/O.
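For example, to keep a disk's light on steadily for one minute, assuming dsk23 is the identifier shown in the DEVICE FILE column for the suspect device:
# hwmgr flash light -dsf dsk23 -seconds 60 -nopause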
If there is only one nopriv disk in the disk group, there is only one device to reassociate. As long as you are not connecting other devices to the new host at the same time, you might not need this information. For two or more nopriv disks, having precise identification beforehand is crucial.
To move a disk group with multiple nopriv disks to a different system or cluster:
On the original host, use the following commands to identify all the nopriv disks in the disk group by their current disk access name and disk media name and a unique identifier (such as the disk's SCSI world-wide identifier) that will not change or can be tracked when the disk is connected to the new system. Create a list (or a printable file) containing this information.
List the disks in the disk group:
# voldisk -g disk_group list
DEVICE    TYPE      DISK             GROUP     STATUS
dsk21     sliced    dsk21            datadg    online
dsk22     sliced    dsk22            datadg    online
dsk23c    nopriv    dsk23c-4.2BSD    datadg    online
dsk24c    nopriv    dsk24c-database  datadg    online
dsk26c    nopriv    dsk26c-AdvFS     datadg    online
dsk27g    nopriv    dsk27g-raw       datadg    online
Find the hardware IDs (HWIDs) of the disks:
# hwmgr show scsi
SCSI DEVICE DEVICE DRIVER NUM DEVICE FIRST
HWID: DEVICEID HOSTNAME TYPE SUBTYPE OWNER PATH FILE VALID PATH
-------------------------------------------------------------------------
.
.
.
88: 22 lsmtemp disk none 2 1 dsk21 [5/3/0]
89: 23 lsmtemp disk none 2 1 dsk22 [5/4/0]
90: 24 lsmtemp disk none 2 1 dsk23 [5/5/0]
91: 25 lsmtemp disk none 2 1 dsk24 [5/6/0]
92: 26 lsmtemp disk none 2 1 dsk25 [6/1/0]
93: 27 lsmtemp disk none 2 1 dsk26 [6/3/0]
94: 28 lsmtemp disk none 2 1 dsk27 [6/5/0]
Use the HWID value for each nopriv disk to find its world-wide ID (WWID):
# hwmgr show scsi -full -id HWID
For example:
# hwmgr show scsi -full -id 90
SCSI DEVICE DEVICE DRIVER NUM DEVICE FIRST
HWID: DEVICEID HOSTNAME TYPE SUBTYPE OWNER PATH FILE VALID PATH
-------------------------------------------------------------------------
90: 24 lsmtemp disk none 2 1 dsk23 [5/5/0]
WWID:04100024:"DEC RZ1CF-CF (C) DEC 50060037"
BUS TARGET LUN PATH STATE
------------------------------
5 5 0 valid
Physically label each nopriv disk with its disk access name, disk media name, and WWID.
Deport the disk group on the original host:
# voldg deport disk_group
Physically connect the disk group to the new environment.
Keep track of the before-and-after bus locations of each nopriv disk as you move it between systems. Then, when you scan for the disks on the new host, you will know which new disk access name associated with the new bus location belongs to which disk media name. You can move each disk individually and use the hwmgr command to scan for it each time to be sure.
On the new system or cluster, enter the following command to discover and assign device names to the newly attached storage:
# hwmgr scan scsi
Import the disk group using the force (-f) option, which forces LSM to import the disk group despite not being able to import the nopriv disks.
# voldg -f [-o shared|private] import disk_group
If the disk group is moving from a standalone system to a cluster, mark it for use in a cluster with the -o shared option.
If the disk group is moving from a cluster to a standalone system, mark it as private with the -o private option.
Make a note of the disks that LSM reports were not found.
Display the disks in the imported disk group:
# voldisk -g disk_group list
The output shows only sliced and simple disks. The nopriv disks are still not imported.
Compare the disk access names with the output of the following command:
# hwmgr show scsi
The new device names that appear in the output from step 9 but not in the output from step 8 are probably the nopriv disks. The device special file name for each device name appears in the DEVICE FILE column; use that identifier in step 10.
For each suspect device name, run the following command:
# hwmgr flash light -dsf device_special_filename \
  -seconds duration -nopause
Find the disk with the constantly-on light.
If the disk is one of the labeled nopriv disks that came from the other system, write down the disk media name and correlate it to the new device name. For example, write the new device name next to the old disk media name on your list from step 1.
Add each nopriv disk to the disk group, associating its disk media name with its new device (disk access) name:
# voldg -g disk_group -k adddisk media_name=device_name
Start and, if necessary, recover the volumes on the nopriv disks:
# volume -g disk_group startall
# volrecover -g disk_group -sb
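For example, if the disk that had the disk media name dsk23c-4.2BSD on the original host is discovered as dsk31c on the new host (a hypothetical new device name), the re-association and recovery might look like this:
# voldg -g datadg -k adddisk dsk23c-4.2BSD=dsk31c
# volrecover -g datadg -sb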
7.4 Unencapsulating the Boot Disk (Standalone System)
If you encapsulated the root file systems (/, /usr, and /var) and the primary swap partition on a standalone system (Section 3.4.1) and later decide you want to stop using LSM volumes and return to using physical disk partitions, you can do so by unencapsulating the boot disk and primary swap space. This process involves restarting the system.
Note
To stop using LSM volumes for the clusterwide root, /usr, and /var file system domains, use the volunmigrate command. For more information, see Section 7.5 and volunmigrate(8).
The unencapsulation process changes the following files:
If the root file system is AdvFS, the links in the /etc/fdmns/* directory for domains associated with the boot disk are changed to point to disk partitions instead of LSM volumes.
If the root file system is UFS, the /etc/fstab file is changed to use disk partitions instead of LSM volumes.
In the /etc/sysconfigtab file, the swapdevice entry is changed to use the original swap partition instead of the LSM swapvol volume, and the lsm_rootdev_is_volume entry is set to 0.
To unencapsulate the system partitions:
If the system volumes (root, swap, /usr, and /var) are mirrored, do the following. If not, go to step 2.
Display detailed volume information for the boot disk volumes:
# volprint -g rootdg -vht
V  NAME           USETYPE     KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME           VOLUME      KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME           PLEX        DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE

v  rootvol        root        ENABLED  ACTIVE   524288   ROUND     -
pl rootvol-02     rootvol     ENABLED  ACTIVE   524288   CONCAT    -         RW
sd root02-02p     rootvol-02  root02   0        16       0         dsk16a    ENA
sd root02-02      rootvol-02  root02   16       524272   16        dsk16a    ENA
pl rootvol-01     rootvol     ENABLED  ACTIVE   524288   CONCAT    -         RW
sd root01-01p     rootvol-01  root01   0        16       0         dsk14a    ENA
sd root01-01      rootvol-01  root01   16       524272   16        dsk14a    ENA

v  swapvol        swap        ENABLED  ACTIVE   520192   ROUND     -
pl swapvol-02     swapvol     ENABLED  ACTIVE   520192   CONCAT    -         RW
sd swap02-02      swapvol-02  swap02   0        520192   0         dsk16b    ENA
pl swapvol-01     swapvol     ENABLED  ACTIVE   520192   CONCAT    -         RW
sd swap01-01      swapvol-01  swap01   0        520192   0         dsk14b    ENA

v  vol-dsk14g     fsgen       ENABLED  ACTIVE   2296428  SELECT    -
pl vol-dsk14g-02  vol-dsk14g  ENABLED  ACTIVE   2296428  CONCAT    -         RW
sd dsk16g-01      vol-dsk14g-02  dsk16g-AdvFS  0  2296428  0  dsk16g  ENA
pl vol-dsk14g-01  vol-dsk14g  ENABLED  ACTIVE   2296428  CONCAT    -         RW
sd dsk14g-01      vol-dsk14g-01  dsk14g-AdvFS  0  2296428  0  dsk14g  ENA

v  vol-dsk14h     fsgen       ENABLED  ACTIVE   765476   SELECT    -
pl vol-dsk14h-02  vol-dsk14h  ENABLED  ACTIVE   765476   CONCAT    -         RW
sd dsk16h-01      vol-dsk14h-02  dsk16h-AdvFS  0  765476   0  dsk16h  ENA
pl vol-dsk14h-01  vol-dsk14h  ENABLED  ACTIVE   765476   CONCAT    -         RW
sd dsk14h-01      vol-dsk14h-01  dsk14h-AdvFS  0  765476   0  dsk14h  ENA
Examine the output and decide which plexes you want to remove based on which disk each plex uses. Typically, the plexes with the -01 suffix are those using the original disk or disk partition and therefore are the ones you want to unencapsulate.
Note
In the previous example, the rootvol volume contains subdisks labeled root01-01p and root02-02p. These are phantom subdisks, and each is 16 sectors long. They provide write-protection for block 0, which prevents accidental destruction of the boot block and disk label. These subdisks are removed in the course of this procedure.
If the root file system and the primary swap space originally used different disks, the plexes you want to unencapsulate can be on different disks; for example, the rootvol-01 plex can be on dsk14 but the swapvol-01 plex can be on dsk16.
Remove all plexes except the one using the disk you want to unencapsulate. The remaining plex must be on that disk, which is the disk the system partitions will use after the unencapsulation completes.
# volplex -o rm dis plex-nn
For example, to remove secondary plexes for the volumes
rootvol,
swapvol, and
vol-dsk0g:
# volplex -o rm dis rootvol-02
# volplex -o rm dis swapvol-02
# volplex -o rm dis vol-dsk14g-02
# volplex -o rm dis vol-dsk14h-02
Change the boot disk environment variable to point to the physical boot disk; in this case, the disk for plex rootvol-01:
# consvar -s bootdef_dev boot_disk
For example:
# consvar -s bootdef_dev dsk14
set bootdef_dev = dsk14
Unencapsulate the boot disk and primary swap disk (if different).
# volunroot -a -A
This command also removes the LSM private region from the system disks and prompts you to restart the system.
Information similar to the following is displayed. Enter now at the prompt.
This operation will convert the following file systems on the
system/swap disk dsk14 from LSM volumes to regular disk partitions:
Replace volume rootvol with dsk14a.
Replace volume swapvol with dsk14b.
Replace volume vol-dsk14g with dsk14g.
Replace volume vol-dsk14h with dsk14h.
Remove configuration database on dsk14f.
This operation will require a system reboot. If you choose to
continue with this operation, your system files will be updated
to discontinue the use of the above listed LSM volumes.
/sbin/volreconfig should be present in /etc/inittab to remove
the named volumes during system reboot.
Would you like to either quit and defer volunroot until later
or commence system shutdown now? Enter either 'quit' or time to be
used with the shutdown(8) command (e.g., quit, now, 1, 5): [quit] now
When the system restarts, the root file system and primary swap space use the original, unencapsulated disks or disk partitions.
If the system volumes were mirrored, the LSM disks that the mirror plexes used remain under LSM control as members of the rootdg disk group. To reuse these LSM disks within LSM or for other purposes:
Display the LSM disks in the rootdg disk group:
# voldisk -g rootdg list
DEVICE TYPE DISK GROUP STATUS
.
.
.
dsk16a    nopriv    root02          rootdg    online
dsk16b    nopriv    swap02          rootdg    online
dsk16f    simple    dsk16f          rootdg    online
dsk16g    nopriv    dsk16g-AdvFS    rootdg    online
dsk16h    nopriv    dsk16h-AdvFS    rootdg    online
.
.
.
In this case, the LSM disks for the system volume mirror plexes have the disk media names root02, swap02, dsk16g-AdvFS, and dsk16h-AdvFS. All these LSM disks are on the same physical disk, dsk16. The private region for dsk16 has the disk media name dsk16f.
Remove these LSM disks from the rootdg disk group using their disk media names; for example:
# voldg rmdisk root02 swap02 dsk16g-AdvFS dsk16h-AdvFS dsk16f
Remove the disks from LSM control using their disk access names (in the DEVICE column); for example:
# voldisk rm dsk16a dsk16b dsk16f dsk16g dsk16h
The physical disk (in this case, dsk16) is no longer under LSM control, and its disk label shows all partitions marked unused.
7.5 Migrating AdvFS Domains from LSM Volumes to Physical Storage
You can stop using LSM volumes for AdvFS domains and return to using physical disks or disk partitions with the volunmigrate command. This command works on both standalone systems and clusters. The domains remain mounted and in use during this process; no reboot is required.
You must specify one or more disk partitions that are not under LSM control, ideally on a shared bus, for the domain to use after the migration. These partitions must be large enough to accommodate the domain plus at least 10 percent additional space for file system overhead. The volunmigrate command examines the partitions that you specify to ensure they meet both qualifications and returns an error if either or both is not met. For more information, see volunmigrate(8).
To migrate an AdvFS domain from an LSM volume to physical storage:
Display the size of the domain volume:
# volprint -vt domain_vol
Find one or more disk partitions on a shared bus that are not under LSM control and are large enough to accommodate the domain plus file system overhead of at least 10 percent:
# hwmgr view devices -cluster
Migrate the domain, specifying the target disk partitions:
# volunmigrate domain_name dsknp [dsknp...]
After migration, the domain uses the specified disks; the LSM volume no longer exists.
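For example, to migrate a hypothetical domain named acct_dmn to two partitions (dsk18c and dsk19c, also hypothetical) that are not under LSM control:
# volunmigrate acct_dmn dsk18c dsk19c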
7.6 Unencapsulating a Cluster Member's Swap Devices
You can remove a cluster member's swap devices from LSM volumes and resume using physical disk partitions. This process is called unencapsulation and requires that you reboot the member.
When you originally encapsulated the swap device, LSM created two separate LSM disks: a nopriv disk for the swap partition itself, and a simple disk for LSM private data on another partition of the disk. The unencapsulation process removes only the nopriv disk.
To unencapsulate a member's swap devices:
Display the names of LSM volumes in the rootdg disk group. (All swap volumes must belong to rootdg.)
# volprint -g rootdg -vht
TY NAME              ASSOC              KSTATE   LENGTH    ...
v  hughie-swap01     swap               ENABLED  16777216  ...
pl hughie-swap01-01  hughie-swap01      ENABLED  16777216  ...
sd dsk4b-01          hughie-swap01-01   ENABLED  16777216  ...
In the output (edited for brevity), look for the following:
The name of the member's swap volume, in the form nodename-swapnn; for example, hughie-swap01.
The disk partition (subdisk) used by the swap volume, in the form dsknp; for example, dsk4b.
Edit the /cluster/members/member{n}/boot_partition/etc/sysconfigtab file for the member to remove the /dev/vol/rootdg/nodename-swapnn entry from the swapdevice= line.
Reboot the member:
# shutdown -r now
When the member starts again, it no longer uses the LSM swap volume.
Log back in to the same member.
Remove the swap volume:
# voledit -rf rm nodename-swapnn
Find the LSM simple disk associated with the encapsulated swap device; for example:
# voldisk -g rootdg list | grep dsk4
dsk4b     nopriv    dsk4b     rootdg    online
dsk4f     simple    dsk4f     rootdg    online
Remove the LSM simple disk and the nopriv disk from the rootdg disk group and from LSM control; for example:
# voldg -g rootdg rmdisk dsk4b dsk4f
# voldisk rm dsk4b dsk4f
Set the cluster member to swap on the original disk partition (the former nopriv disk); for example:
# swapon /dev/disk/dsk4b
Edit the /etc/sysconfigtab file as follows:
Add the /dev/disk/dsknp entry to the swapdevice= line so that the line reads:
swapdevice=/dev/disk/dsknp
For example:
swapdevice=/dev/disk/dsk4b
If you removed the last LSM swap device for this member, set the value for lsm_root_dev_is_volume= to 0.
The cluster member uses the specified disk partition for its swap device, and the LSM swap volume no longer exists.
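As a minimal sketch of the post-reboot cleanup in this procedure, using the example names shown above (swap volume hughie-swap01 and disks dsk4b and dsk4f):
# voledit -rf rm hughie-swap01
# voldg -g rootdg rmdisk dsk4b dsk4f
# voldisk rm dsk4b dsk4f
# swapon /dev/disk/dsk4b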
7.7 Uninstalling the LSM Software
This section describes how to completely remove the LSM software from a standalone system or a cluster. This process involves:
Backing up user data
Unencapsulating disks or data
Removing LSM objects and the software subsets
Reconfiguring the kernel and restarting the system or cluster member
Caution
Uninstalling LSM causes any current data in LSM volumes to be lost. Before proceeding, back up any needed data.
To uninstall the LSM software:
Reconfigure any system-specific file systems and swap space, so they no longer use an LSM volume.
On a standalone system, unencapsulate the root file systems and primary swap partition (Section 7.4).
If additional (secondary) swap space uses LSM volumes, remove those volumes (Section 5.4.6).
In a cluster, migrate all AdvFS domains that use LSM volumes, including cluster_root, cluster_usr, and cluster_var, from the LSM volumes to disk partitions (Section 7.5).
Unencapsulate all cluster members' swap devices (Section 7.6).
Unmount any other file systems that are using LSM volumes, so all LSM volumes can be closed.
Update the /etc/fstab file if necessary, so that it no longer mounts any file systems on an LSM volume.
Stop applications that are using raw LSM volumes and reconfigure them, so that they no longer use LSM volumes.
Identify the disks that are currently configured under LSM:
# voldisk list
Restart LSM in disabled mode (in a cluster, on only one member):
# vold -k -r reset -d
This command fails if any volumes are open.
Stop all LSM volume and I/O daemons (in a cluster, on every member):
# voliod -f set 0
# voldctl stop
Update the disk labels for the disks under LSM control (in the output from step 3).
For each LSM sliced disk, apply a default disk label to the entire disk:
# disklabel -rw dskn
For each LSM simple disk, change the partition's fstype field to unused:
# disklabel -s dsknP unused
For each LSM nopriv disk, change the partition's fstype field to either unused or the appropriate value, depending on whether the partition still contains valid data.
For example:
To change the fstype field for partition dsk2h, which contains no valid data:
# disklabel -s dsk2h unused
To change the fstype field for partition dsk2g, which contains a valid UFS file system:
# disklabel -s dsk2g 4.2BSD
Remove the LSM directories:
# rm -r /etc/vol /dev/vol /dev/rvol /etc/vol/volboot
Delete the following LSM entries in the /etc/inittab file (in a cluster, for every member):
lsmr:s:sysinit:/sbin/lsmbstartup -b </dev/console >/dev/console 2>&1 ##LSM
lsm:23:wait:/sbin/lsmbstartup </dev/console >/dev/console 2>&1 ##LSM
vol:23:wait:/sbin/vol-reconfig -n </dev/console >/dev/console 2>&1 ##LSM
Display the installed LSM subsets:
# setld -i | grep LSM
Delete the installed LSM subsets:
# setld -d OSFLSMBASEnnn OSFLSMBINnnn OSFLSMCLSMTOOLSnnn
In the /sys/conf/hostname file (in a cluster, for every member), change the value of the pseudo-device lsm entry from 1 to 0. In a cluster, the hostname is the member name, not the cluster alias.
You can make this change either before or while running the doconfig command; for example:
# doconfig -c hostname
Copy the new kernel to the root (/) directory (in a cluster, on every member):
# cp /sys/hostname/vmunix /
Restart the system or cluster member.
For information on the appropriate way to restart each member, see the Cluster Administration manual.
When the system restarts, or after every cluster member restarts, LSM will no longer be installed.