7    Special Cases

This chapter describes how to upgrade an LSM configuration (Section 7.1), add a system that uses LSM to a cluster (Section 7.2), move disk groups between systems (Section 7.3), unencapsulate the boot disk on a standalone system (Section 7.4), migrate AdvFS domains from LSM volumes to physical storage (Section 7.5), unencapsulate a cluster member's swap devices (Section 7.6), and uninstall the LSM software (Section 7.7).

7.1    Upgrading an LSM Configuration

If you are currently using LSM on a system running Tru64 UNIX Version 4.0 and you want to preserve your current LSM configuration for use with Tru64 UNIX Version 5.0 or higher, you must:

  1. Increase the size of any block-change logs (BCLs) to at least two blocks per gigabyte of volume size for a standalone system or at least 65 blocks per gigabyte of volume size for a TruCluster Server environment (Section 7.1.1).

  2. Back up the current LSM configuration (Section 7.1.2).

  3. Optionally, deport any disk groups that you do not want to upgrade (Section 7.1.3).

  4. Upgrade the LSM software (Section 7.1.4).

  5. Manually convert any Version 4.0 disk groups that you deported before the upgrade (Section 7.1.5).

  6. Optimize the restored LSM configuration databases (Section 7.1.6).

7.1.1    Increasing the Size of BCLs

The block-change logging (BCL) feature supported in LSM in Tru64 UNIX Version 4.0 was replaced with the dirty-region logging (DRL) feature in Version 5.0.

When you perform an upgrade installation, BCLs are automatically converted to DRLs if the BCL subdisk is at least two blocks. If the BCL subdisk is one block, logging is disabled after the upgrade installation.

Note

The conversion of BCLs to DRLs is not reversible.

Before you upgrade, increase the size of the BCLs to at least two blocks per gigabyte of volume size for standalone systems or at least 65 blocks per gigabyte of volume size for a TruCluster environment. If this is not possible, then after the upgrade, you can add a new log to those volumes using volassist addlog, which creates a DRL of the appropriate size by default.
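
For example, after the upgrade you might add a correctly sized DRL to a hypothetical mirrored volume named vol01 in the disk group dg1:

# volassist -g dg1 addlog vol01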

For information on increasing the size of BCLs, see the LSM documentation for your current operating system version.

7.1.2    Backing Up the LSM Configuration

Backing up the LSM configuration creates a set of files (a description set) that describes all the LSM objects in all disk groups. In case of a catastrophic failure, LSM can use these files to restore the LSM configuration.

Caution

The following procedure backs up only the configuration, not the volume data. You might also want to back up the volume data before performing the upgrade.

To back up the LSM configuration:

  1. Enter the following command:

    # volsave [-d dir]
    

     LSM configuration being saved to /usr/var/lsm/db/LSM.20020312143345.hostname
     
     volsave does not save configuration for volumes used for
                     root, swap, /usr or /var.
                     LSM configuration for following system disks not saved:
                    dsk3 dsk0a dsk2a dsk0b dsk2b dsk0g dsk0g
     
     LSM Configuration saved successfully to /usr/var/lsm/db/LSM.20020312143345.hostname
     
    

    By default, LSM configuration information is saved to a time-stamped directory, called a description set, in the /usr/var/lsm/db directory. Make a note of the location and name of the description set. You will need this information to restore the LSM configuration after you upgrade the Tru64 UNIX operating system software.

  2. Optionally, confirm that the LSM configuration was saved:

    # ls /usr/var/lsm/db/LSM.date.hostname
    dg1.d         newdg.d       volboot
    header        rootdg.d      voldisk.list
    

  3. Save the LSM configuration to tape or other removable media.

7.1.3    Deporting Disk Groups (Optional)

The internal metadata format of LSM in Tru64 UNIX Version 5.0 and higher is not compatible with the metadata format of LSM in Tru64 UNIX Version 4.0. If an older metadata format is detected during the upgrade procedure, LSM automatically upgrades the old format to the new format. If you do not want certain disk groups to be upgraded, deport them before you upgrade LSM.

You cannot deport the rootdg disk group; rootdg must be converted to the new format to allow use of the LSM configuration on the upgraded system. After rootdg is converted, it cannot be used again on a system running Version 4.0.

To deport a disk group:

# voldg deport disk_group

If you later import a deported disk group, LSM upgrades the metadata format.

7.1.4    Upgrading the LSM Software

LSM comprises three software subsets, which are located on the CD-ROM containing the base operating system software for the Tru64 UNIX product kit.

Depending on the operating system versions you are upgrading from and to, you might have to perform a full installation instead of an update installation or a succession of update installations. (For a description of the supported update paths, see the Installation Guide.)

After the update or full installation, the rootdg disk group is converted and ready to use. Any disk groups that remained connected to the system (and that were not deported) are also converted and available.

7.1.5    Manually Converting Version 4.0 Disk Groups

If you deported disk groups before upgrading a system or cluster from Version 4.0 to Version 5.0 or higher, you can manually import and convert those disk groups.

Disk groups that are connected to a system before it restarts, or before cluster creation, are automatically imported. Their metadata format is updated, and the vollogcnvt utility converts BCLs to DRLs where possible. (For more information, see vollogcnvt(8).)

The following procedure applies only to disk groups that you deported before upgrading the operating system and that you now want to import and convert.

To manually import and convert disk groups:

  1. Physically connect the storage to the system or cluster.

    In a cluster, connect the storage so that it is accessible by all cluster members.

  2. Run the hwmgr command to update the system or cluster with the new disk information. For more information, see hwmgr(8).
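
    For example, to scan for newly connected SCSI disks:

    # hwmgr scan scsi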

  3. Import and convert the disk group.

    # voldg -o convert_old import disk_group
    

    The disk group is imported and the following information is displayed for volumes that use BCLs:

    lsm:voldg:WARNING:Logging disabled on volume. Need to convert to DRL.
    lsm:voldg:WARNING:Run the vollogcnvt command to automatically convert logging.
     
    

  4. Convert any BCLs to DRLs for each disk group:

    # vollogcnvt -g disk_group
    

  5. If a BCL cannot be converted to a DRL and you want to restore logging for the volume:

    1. Identify the disabled BCL subdisk:

      # volprint [-g disk_group] volume
      

    2. Remove the BCL subdisk:

      # volsd [-g disk_group] -o rm dis subdisk
      

    3. Add a new log to the volume:

      # volassist [-g disk_group] addlog volume
      

  6. Start the volumes in each newly imported disk group:

    # volrecover -g disk_group
    

7.1.6    Optimizing Restored LSM Configuration Databases (Optional)

If you restored an LSM configuration on a system that you upgraded from Tru64 UNIX Version 4.0 to Tru64 UNIX Version 5.0 or higher, you can modify the configuration databases to allow LSM to automatically manage their number and placement.

Note

This procedure is an optimization and is not required.

On systems running Tru64 UNIX Version 4.0 and using LSM, you had to explicitly configure between four and eight disks per disk group to have enabled configuration databases. In Version 5.0 and higher, by default all LSM disks are configured to contain copies of the database, and LSM automatically maintains the appropriate number of enabled copies. An enabled copy is kept up to date as the configuration changes; a disabled copy reserves space in the disk's private region but is not updated unless LSM enables it.

Configure the private regions on all your LSM disks to contain one copy of the configuration database unless you have a specific reason not to do so.

Enabling the configuration database does not use additional space on the disk; it merely sets the number of enabled copies in the private region to 1.

To set the number of configuration database copies to 1:

# voldisk moddb disk nconfig=1

For disk groups containing three or fewer disks, each disk should have two copies of the configuration database to provide sufficient redundancy. This is especially important for systems with a small rootdg disk group and one or more larger secondary disk groups.
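
For example, to configure two enabled copies of the configuration database on a hypothetical disk named dsk5:

# voldisk moddb dsk5 nconfig=2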

For more information on modifying the LSM configuration databases, see Section 5.3.3.

7.2    Adding a System with LSM to a Cluster

You can add a standalone system with LSM volumes to an existing cluster and incorporate its LSM volumes in the cluster, whether or not the cluster is also using LSM. Alternatively, you can move only the disk groups from one system to another or from a system to a cluster.

Note

If the standalone system is not running at least Tru64 UNIX Version 5.0, see Section 7.1 to upgrade the system and its LSM configuration before adding the system to a cluster.

Before you begin, decide what you want to do with the standalone system's rootdg disk group. There can be only one rootdg disk group in an LSM configuration.

Take any necessary actions on the rootdg disk group before you add the system to the cluster.

To add a standalone system using LSM to a cluster not running LSM:

  1. If applicable, reconfigure the log subdisks on all mirrored volumes to use the default DRL size.

    1. Identify mirrored volumes with nondefault-size log plexes:

      #  volprint -pht | grep -p LOGONLY
      

      Information similar to the following is displayed. In this example, the log plex vol1-03 is only 2 blocks long, but the log plex vol2-03 is 65 blocks:

      pl vol1-03     vol1       ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
      sd dsk27a-01   vol1-03    dsk27a   0        2        LOG       dsk27a   ENA
       
      pl vol2-03     vol2       ENABLED  ACTIVE   LOGONLY  CONCAT    -        RW
      sd dsk27a-02   vol2-03    dsk27a   2        65       LOG       dsk27a   ENA
       
      

    2. Delete the nondefault-size DRL plex from its volume:

      # volplex [-g disk_group] -o rm dis log_plex
      

      For example:

      # volplex -o rm dis vol1-03
      

    3. Add a new DRL plex to the volume, which will automatically be sized correctly:

      # volassist addlog volume
      

      For example:

      # volassist addlog vol1
      

  2. Stop all volumes in each disk group:

    # volume -g disk_group stopall
    

  3. Deport each disk group except rootdg:

    # voldg deport disk_group
    

  4. Display the disk group ID for rootdg:

    # voldg list rootdg | grep id
    dgid:      1007697459.1026.hostname
    

  5. Make a note of the disk group ID. You will need this information to import the rootdg disk group on the cluster.

  6. Halt the system and add it to the cluster. Make sure all its storage is connected to the cluster (preferably as shared storage).

    This step involves using the clu_add_member command and possibly other hardware-specific or cluster-specific operations that are not covered here.

  7. Run the hwmgr command to update the cluster with the new disk information. For more information, see hwmgr(8).

  8. Initialize LSM using one of the following methods:

  9. Synchronize LSM throughout the cluster by entering the following command on all members except the member where you performed step 8:

    # volsetup -s
    

7.3    Moving Disk Groups Between Systems

You can move an LSM disk group between standalone systems, between clusters, or between a standalone system and a cluster (in either direction) and retain the LSM objects and data on those disks.

Moving a disk group between systems causes the new host system to assign new disk access names to the disks. For LSM nopriv disks (created when you encapsulate disks or partitions), the association between the original disk access name and its disk media name might be lost or might be reassociated incorrectly. To prevent this, you must manually reassociate the disk media names with the new disk access names. For LSM sliced and simple disks, LSM manages this reassociation.

If possible, before moving the disk group, migrate the data from nopriv disks to sliced or simple disks, which have a private region and will be reassociated automatically. For more information on moving data to a different disk, see Section 5.1.5.

If you cannot move the data to sliced or simple disks, see Section 7.3.3.

You can change the disk group's name or host ID when you move it to the new host; for example, to reduce the chance for confusion if the new host has a disk group with a similar name. You must change the disk group's name if the new host has a disk group with the same name.
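
For example, to import a hypothetical disk group named datadg under the new name datadg2 on the receiving host:

# voldg -n datadg2 import datadg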

You can change the disk group's host ID to that of the receiving system as you deport it from the original system. This allows the system receiving the disk group to import it automatically when it starts. If the new host is already running, the disk group's host ID is changed when you import the disk group on the new host.

7.3.1    Moving the rootdg Disk Group to Another System

You can move the rootdg disk group from one standalone system to another with the following restrictions:

  1. If the system's root disk and swap space are encapsulated to LSM volumes, remove them from LSM control. The root file system cannot be reused on another system.

    For information on removing the system volumes, see Section 7.4.

  2. If other system-specific file systems use LSM volumes, also remove them from LSM control.

    There can be no duplication of file systems on one system. Only file systems or applications that are not critical to the system's operation, or that do not exist on the target system, can be moved between systems.

    For more information on unencapsulating AdvFS domains or UFS file systems, see Section 5.4.6.1.

  3. If rootdg contains any internal system disks, remove those disks from rootdg (and, if necessary, from LSM control).

  4. You cannot deport rootdg; to move it, either shut down the system, or stop running LSM on the system. This involves stopping all volumes (and stopping access from any file systems and applications that use those volumes) and stopping the LSM daemons. After the rootdg disk group is removed from a system, the system can no longer run LSM (unless you create a new rootdg).

  5. If you do not plan to run LSM again on the system, edit the /etc/inittab file to remove the LSM startup routines:

    lsmr:s:sysinit:/sbin/lsmbstartup -b </dev/console >/dev/console 2>&1 ##LSM 
    lsm:23:wait:/sbin/lsmbstartup </dev/console >/dev/console 2>&1 ##LSM
    vol:23:wait:/sbin/vol-reconfig -n </dev/console >/dev/console 2>&1 ##LSM
     
    

  6. Optionally, recursively delete the /dev/vol/ and /etc/vol/ directories.
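
    For example:

    # rm -r /dev/vol /etc/vol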

7.3.2    Moving Disk Groups to Another System

To move a disk group other than rootdg to another system:

  1. Stop all activity on the volumes in the disk group and unmount any file systems.
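
    For example, for a hypothetical disk group named datadg with a volume mounted on /mnt/data, you might enter:

    # umount /mnt/data
    # volume -g datadg stopall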

  2. Deport the disk group from the originating system:

    # voldg deport disk_group
    
  3. Physically move the disks to the new host system.

  4. Enter the following command on the new host system to scan for the disks:

    # hwmgr scan scsi
    

    The hwmgr command returns the prompt before it completes the scan. Confirm that the system has discovered the new disks before you continue; for example, enter the hwmgr show scsi command until you see the new devices.

  5. Make LSM aware of the newly added disks:

    # voldctl enable
    

  6. Import the disk group to the new host:

    # voldg [-f] [-o shared|private] [-n new_name] import \
    disk_group
    

  7. If applicable, associate the disk media names for the nopriv disks to their new disk access names:

    # voldg -g disk_group -k adddisk \
    disk_media_name=disk_access_name...
    

  8. Recover and start all startable volumes in the imported disk group. The following command performs any necessary recovery operations as a background task after starting the volumes:

    # volrecover -g disk_group -sb
    

  9. Optionally, identify any detached plexes.

    # volinfo -p
    

    If the output lists any volumes as Unstartable, see Section 6.5.2.2 for information on how to proceed.

  10. If necessary, start the remaining Startable volumes:

    # volume -g disk_group start volume1 volume2...
    

7.3.3    Moving Disk Groups with nopriv Disks to Another System

When LSM disks are moved to a different system or added to a cluster, the operating system assigns them new device names (LUNs) that are not likely to be the same as their previous device names. LSM bases the disk access name on the device name and maintains an association between the disk access name and the disk media name, which can be anything you assign, such as big_disk. If you move a disk group to another system, LSM uses this association (stored in the configuration database) to remap the disk media names to the new device names (disk access names) for sliced and simple disks, but not for nopriv disks.

Moving a disk group with multiple nopriv disks is a lengthy, careful process. While the disks are still connected to the original system or cluster, identify each nopriv disk and make a detailed record with enough information to identify it correctly in the new environment.

You might create a list of the disk access name, disk media name, and disk group name for each nopriv disk and physically label each disk (with a sticker or adhesive tape, for example) with this same information. Then you can move the disks to the new environment and use commands such as hwmgr flash light to physically locate the disks. When you determine their new disk access names, you can import the disk group and then associate the old disk media names to the new disk access names for the nopriv disks.

Because nopriv disks require additional effort to manage, we strongly advise that you use them only to place data under LSM control (through encapsulation) and then immediately move those volumes to sliced or simple disks.

Before you begin, you might want to review the syntax for hwmgr flash by displaying its online help:

# hwmgr -h flash light

Usage: hwmgr flash light
        [ -dsf <device-special-filename>                        ]
        [ -bus <scsi-bus> -target <scsi-target> -lun <scsi-lun> ]
        [ -seconds <number-of-seconds> ] (default is 30 seconds)
        [ -nopause ] (do not pause between flashes)
        The "flash light" operation works only on SCSI disks.

You can use the -seconds option with the -nopause option to cause the disk's light to remain on constantly for the length of time you specify. Without the -nopause option, the light flashes on and off for the specified duration. In a busy environment, you might not be able to tell whether a light is flashing because of your command or because of I/O.
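
For example, to keep a disk's light on steadily for 60 seconds, assuming the hwmgr show scsi output lists the disk's device file as dsk23, you might enter:

# hwmgr flash light -dsf dsk23 -seconds 60 -nopause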

If there is only one nopriv disk in the disk group, there is only one device to reassociate. As long as you are not connecting other devices to the new host at the same time, you might not need this information. For two or more nopriv disks, having precise identification beforehand is crucial.

To move a disk group with multiple nopriv disks to a different system or cluster:

  1. On the original host, use the following commands to identify all the nopriv disks in the disk group by their current disk access name and disk media name and a unique identifier (such as the disk's SCSI world-wide identifier) that will not change or can be tracked when the disk is connected to the new system. Create a list (or a printable file) containing this information.

    1. List the disks in the disk group:

      # voldisk -g disk_group list
      

      DEVICE       TYPE      DISK            GROUP        STATUS
      dsk21        sliced    dsk21           datadg       online
      dsk22        sliced    dsk22           datadg       online
      dsk23c       nopriv    dsk23c-4.2BSD   datadg       online
      dsk24c       nopriv    dsk24c-database datadg       online
      dsk26c       nopriv    dsk26c-AdvFS    datadg       online
      dsk27g       nopriv    dsk27g-raw      datadg       online
      

    2. Find the hardware IDs (HWIDs) of the disks:

      # hwmgr show scsi
      

              SCSI                DEVICE    DEVICE  DRIVER NUM  DEVICE FIRST
       HWID:  DEVICEID HOSTNAME   TYPE      SUBTYPE OWNER  PATH FILE   VALID PATH
      -------------------------------------------------------------------------
       
      .
      .
      .
       88:  22       lsmtemp    disk      none    2      1    dsk21  [5/3/0]
       89:  23       lsmtemp    disk      none    2      1    dsk22  [5/4/0]
       90:  24       lsmtemp    disk      none    2      1    dsk23  [5/5/0]
       91:  25       lsmtemp    disk      none    2      1    dsk24  [5/6/0]
       92:  26       lsmtemp    disk      none    2      1    dsk25  [6/1/0]
       93:  27       lsmtemp    disk      none    2      1    dsk26  [6/3/0]
       94:  28       lsmtemp    disk      none    2      1    dsk27  [6/5/0]

    3. Use the HWID value for each nopriv disk to find its world-wide ID (WWID):

      # hwmgr show scsi -full -id HWID
      

      For example:

      # hwmgr show scsi -full -id 90
      

              SCSI                DEVICE    DEVICE  DRIVER NUM  DEVICE FIRST
       HWID:  DEVICEID HOSTNAME   TYPE      SUBTYPE OWNER  PATH FILE   VALID PATH
      -------------------------------------------------------------------------
         90:  24       lsmtemp    disk      none    2      1    dsk23  [5/5/0]
       
            WWID:04100024:"DEC     RZ1CF-CF (C) DEC    50060037"
       
       
            BUS   TARGET  LUN   PATH STATE
            ------------------------------
            5     5       0     valid
      

  2. Physically label each nopriv disk with its disk access name, disk media name, and WWID.

  3. Deport the disk group on the original host:

    # voldg deport disk_group 
     
    

  4. Physically connect the disk group to the new environment.

    Keep track of the before-and-after bus locations of each nopriv disk as you move it between systems. Then, when you scan for the disks on the new host, you will know which new disk access name (at the new bus location) belongs to which disk media name. To be sure, you can move each disk individually and use the hwmgr command to scan for it each time.

  5. On the new system or cluster, enter the following command to discover and assign device names to the newly attached storage:

    # hwmgr scan scsi
    

  6. Import the disk group using the force (-f) option, which causes LSM to import the disk group even though it cannot import the nopriv disks.

    # voldg -f [-o shared|private] import disk_group
    

  7. Make a note of the disks that LSM reports were not found.

  8. Display the disks in the imported disk group:

    # voldisk -g disk_group list
    

    The output shows only sliced and simple disks. The nopriv disks are still not imported.

  9. Compare the disk access names with the output of the following command:

    # hwmgr show scsi
    

    The new device names that appear in the output from step 9 but not in the output from step 8 are probably the nopriv disks.

    The device special file name for each device name appears in the DEVICE FILE column; use that identifier in step 10.

  10. For each suspect device name, run the following command:

    # hwmgr flash light -dsf device_special_filename \
    -seconds duration -nopause
    

  11. Find the disk with the constantly-on light.

    If the disk is one of the labeled nopriv disks that came from the other system, write down the disk media name and correlate it to the new device name. For example, write the new device name next to the old disk media name on your list from step 1.

  12. Add each nopriv disk to the disk group, associating its disk media name with its new device (disk access) name:

    # voldg -g disk_group -k adddisk media_name=device_name
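
    For example, if the nopriv disk that previously had the disk media name dsk23c-4.2BSD now appears under the hypothetical new device name dsk45c:

    # voldg -g datadg -k adddisk dsk23c-4.2BSD=dsk45c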
    

  13. Start, and if necessary, recover the volumes on the nopriv disks:

    # voldg -g disk_group startall
    # volrecover -g disk_group -sb
    

7.4    Unencapsulating the Boot Disk (Standalone System)

If you encapsulated the root file systems (/, /usr, and /var) and the primary swap partition on a standalone system (Section 3.4.1) and later decide you want to stop using LSM volumes and return to using physical disk partitions, you can do so by unencapsulating the boot disk and primary swap space. This process involves restarting the system.

Note

To stop using LSM volumes for the clusterwide root, /usr, and /var file system domains, use the volunmigrate command. For more information, see Section 7.5 and volunmigrate(8).

The unencapsulation process updates the relevant system files so that they refer to the physical disk partitions instead of LSM volumes.

To unencapsulate the system partitions:

  1. If the system volumes (root, swap, /usr, and /var) are mirrored, do the following. If not, go to step 2.

    1. Display detailed volume information for the boot disk volumes:

      # volprint -g rootdg -vht
      

      V  NAME          USETYPE       KSTATE   STATE    LENGTH   READPOL   PREFPLEX
      PL NAME          VOLUME        KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
      SD NAME          PLEX          DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
       
      v  rootvol       root          ENABLED  ACTIVE   524288   ROUND     -
      pl rootvol-02    rootvol       ENABLED  ACTIVE   524288   CONCAT    -        RW
      sd root02-02p    rootvol-02    root02   0        16       0         dsk16a   ENA
      sd root02-02     rootvol-02    root02   16       524272   16        dsk16a   ENA
      pl rootvol-01    rootvol       ENABLED  ACTIVE   524288   CONCAT    -        RW
      sd root01-01p    rootvol-01    root01   0        16       0         dsk14a   ENA
      sd root01-01     rootvol-01    root01   16       524272   16        dsk14a   ENA
       
      v  swapvol       swap          ENABLED  ACTIVE   520192   ROUND     -
      pl swapvol-02    swapvol       ENABLED  ACTIVE   520192   CONCAT    -        RW
      sd swap02-02     swapvol-02    swap02   0        520192   0         dsk16b   ENA
      pl swapvol-01    swapvol       ENABLED  ACTIVE   520192   CONCAT    -        RW
      sd swap01-01     swapvol-01    swap01   0        520192   0         dsk14b   ENA
       
      v  vol-dsk14g    fsgen         ENABLED  ACTIVE   2296428  SELECT    -
      pl vol-dsk14g-02 vol-dsk14g    ENABLED  ACTIVE   2296428  CONCAT    -        RW
      sd dsk16g-01     vol-dsk14g-02 dsk16g-AdvFS 0    2296428  0         dsk16g   ENA
      pl vol-dsk14g-01 vol-dsk14g    ENABLED  ACTIVE   2296428  CONCAT    -        RW
      sd dsk14g-01     vol-dsk14g-01 dsk14g-AdvFS 0    2296428  0         dsk14g   ENA
       
      v  vol-dsk14h    fsgen         ENABLED  ACTIVE   765476   SELECT    -
      pl vol-dsk14h-02 vol-dsk14h    ENABLED  ACTIVE   765476   CONCAT    -        RW
      sd dsk16h-01     vol-dsk14h-02 dsk16h-AdvFS 0    765476   0         dsk16h   ENA
      pl vol-dsk14h-01 vol-dsk14h    ENABLED  ACTIVE   765476   CONCAT    -        RW
      sd dsk14h-01     vol-dsk14h-01 dsk14h-AdvFS 0    765476   0         dsk14h   ENA
       
      

      Examine the output and decide which plexes you want to remove based on which disk each plex uses. Typically, the plexes with the -01 suffix are those using the original disk or disk partition and therefore are the ones you want to unencapsulate.

      Note

      In the previous example, the rootvol volume contains subdisks labeled root01-01p and root02-02p. These are phantom subdisks, and each is 16 sectors long. They provide write-protection for block 0, which prevents accidental destruction of the boot block and disk label. These subdisks are removed in the course of this procedure.

      If the root file system and the primary swap space originally used different disks, the plexes you want to unencapsulate can be on different disks; for example, the rootvol-01 plex can be on dsk14 but the swapvol-01 plex can be on dsk16.

    2. Remove all plexes except the one on the disk that you want to unencapsulate. This is the disk that the system partitions will use after the unencapsulation completes.

      # volplex -o rm dis plex-nn
      

      For example, to remove secondary plexes for the volumes rootvol, swapvol, and vol-dsk0g:

      # volplex -o rm dis rootvol-02
      # volplex -o rm dis swapvol-02
      # volplex -o rm dis vol-dsk14g-02
      # volplex -o rm dis vol-dsk14h-02
      

  2. Change the boot disk environment variable to point to the physical boot disk; in this case, the disk for plex rootvol-01:

    # consvar -s bootdef_dev boot_disk
    

    For example:

    # consvar -s bootdef_dev dsk14
    set bootdef_dev = dsk14
    

  3. Unencapsulate the boot disk and primary swap disk (if different).

    # volunroot -a -A
    

    This command also removes the LSM private region from the system disks and prompts you to restart the system.

    Information similar to the following is displayed. Enter now at the prompt.

    This operation will convert the following file systems on the
    system/swap disk dsk14 from LSM volumes to regular disk partitions:
     
            Replace volume rootvol with dsk14a.
            Replace volume swapvol with dsk14b.
            Replace volume vol-dsk14g with dsk14g.
            Replace volume vol-dsk14h with dsk14h.
            Remove configuration database on dsk14f.
     
    This operation will require a system reboot.  If you choose to
    continue with this operation, your system files will be updated
    to discontinue the use of the above listed LSM volumes.
    /sbin/volreconfig should be present in /etc/inittab to remove
    the named volumes during system reboot.
     
     
    Would you like to either quit and defer volunroot until later
    or commence system shutdown now? Enter either 'quit' or time to be
    used with the shutdown(8) command (e.g., quit, now, 1, 5): [quit] now
    

    When the system restarts, the root file system and primary swap space use the original, unencapsulated disks or disk partitions.

If the system volumes were mirrored, the LSM disks that the mirror plexes used remain under LSM control as members of the rootdg disk group.

To reuse these LSM disks within LSM or for other purposes:

  1. Display the LSM disks in the rootdg disk group:

    # voldisk -g rootdg list
    

    DEVICE       TYPE      DISK         GROUP        STATUS
     
    .
    .
    .
    dsk16a       nopriv    root02       rootdg       online
    dsk16b       nopriv    swap02       rootdg       online
    dsk16f       simple    dsk16f       rootdg       online
    dsk16g       nopriv    dsk16g-AdvFS rootdg       online
    dsk16h       nopriv    dsk16h-AdvFS rootdg       online
    .
    .
    .

    In this case, the LSM disks for the system volume mirror plexes have the disk media names root02, swap02, dsk16g-AdvFS, and dsk16h-AdvFS. All these LSM disks are on the same physical disk dsk16. The private region for dsk16 has the disk media name dsk16f.

  2. Remove these LSM disks from the rootdg disk group using their disk media names; for example:

    # voldg rmdisk root02 swap02 dsk16g-AdvFS dsk16h-AdvFS dsk16f
    

  3. Remove the disks from LSM control using their disk access names (in the DEVICE column); for example:

    # voldisk rm dsk16a dsk16b dsk16f dsk16g dsk16h
    

    The physical disk (in this case, dsk16) is no longer under LSM control and its disk label shows all partitions marked unused.

7.5    Migrating AdvFS Domains from LSM Volumes to Physical Storage

You can stop using LSM volumes for AdvFS domains and return to using physical disks or disk partitions with the volunmigrate command. This command works on both standalone systems and clusters. The domains remain mounted and in use during this process; no reboot is required.

You must specify one or more disk partitions that are not under LSM control, ideally on a shared bus, for the domain to use after the migration. These partitions must be large enough to accommodate the domain plus at least 10 percent additional space for file system overhead. The volunmigrate command examines the partitions that you specify to ensure that they meet both requirements and returns an error if they do not. For more information, see volunmigrate(8).
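
For example, to migrate a domain whose LSM volume is 2,296,428 blocks (about 1.1 GB), the target partitions must together provide at least 2,526,071 blocks (2,296,428 plus 10 percent).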

To migrate an AdvFS domain from an LSM volume to physical storage:

  1. Display the size of the domain volume:

    # volprint -vt domain_vol
    

  2. Find one or more disk partitions on a shared bus that are not under LSM control and are large enough to accommodate the domain plus file system overhead of at least 10 percent:

    # hwmgr view devices -cluster
    

  3. Migrate the domain, specifying the target disk partitions:

    # volunmigrate domain_name dsknp [dsknp...]
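
    For example, to migrate a hypothetical domain named data_domain to the partitions dsk10c and dsk11c:

    # volunmigrate data_domain dsk10c dsk11c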
    

After migration, the domain uses the specified disks; the LSM volume no longer exists.

7.6    Unencapsulating a Cluster Member's Swap Devices

You can remove a cluster member's swap devices from LSM volumes and resume using physical disk partitions. This process is called unencapsulation and requires that you reboot the member.

When you originally encapsulated the swap device, LSM created two separate LSM disks: a nopriv disk for the swap partition itself, and a simple disk for LSM private data on another partition of the disk. The unencapsulation process removes only the nopriv disk.

To unencapsulate a member's swap devices:

  1. Display the names of LSM volumes in the rootdg disk group. (All swap volumes must belong to rootdg.)

    # volprint -g rootdg -vht
    

    TY NAME              ASSOC            KSTATE   LENGTH    ...
    v  hughie-swap01     swap             ENABLED  16777216  ...
    pl hughie-swap01-01  hughie-swap01    ENABLED  16777216  ...
    sd dsk4b-01          hughie-swap01-01 ENABLED  16777216  ...
    

    In the output (edited for brevity), look for the name of the member's swap volume (in this example, hughie-swap01) and the disk partition that its subdisk uses (in this example, dsk4b).

  2. Edit the /cluster/members/member{n}/boot_partition/etc/sysconfigtab file for the member to remove the /dev/vol/rootdg/nodename-swapnn entry from the swapdevice= line.
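
    For example, assuming the member's swap device is defined by the swapdevice attribute in the vm: stanza, the entry might look similar to the following before the edit:

    vm:
            swapdevice=/dev/vol/rootdg/hughie-swap01

    Remove the /dev/vol/rootdg/hughie-swap01 value, leaving any other swap devices on that line in place.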

  3. Reboot the member:

    # shutdown -r now
    

    When the member starts again, it no longer uses the LSM swap volume.

  4. Log back in to the same member.

  5. Remove the swap volume:

    # voledit -rf rm nodename-swapnn
    

  6. Find the LSM simple disk associated with the encapsulated swap device; for example:

    # voldisk -g rootdg list | grep dsk4
    dsk4b        nopriv    dsk4b        rootdg       online
    dsk4f        simple    dsk4f        rootdg       online
    

  7. Remove the LSM simple disk and the nopriv disk from the rootdg disk group and from LSM control; for example:

    # voldg -g rootdg rmdisk dsk4b dsk4f
    # voldisk rm dsk4b dsk4f
    

  8. Set the cluster member to swap on the original disk partition (the former nopriv disk); for example:

    # swapon /dev/disk/dsk4b
    

  9. Edit the /etc/sysconfigtab file for the member to add the physical disk partition (in this example, /dev/disk/dsk4b) to the swapdevice= entry, so that the member continues to use that partition for swap when it reboots.

The cluster member uses the specified disk partition for its swap device and the LSM swap volume no longer exists.

7.7    Uninstalling the LSM Software

This section describes how to completely remove the LSM software from a standalone system or a cluster. This process involves reconfiguring all file systems and swap space that use LSM volumes, removing the LSM software subsets, and rebuilding the kernel without LSM support.

Caution

Uninstalling LSM causes any current data in LSM volumes to be lost. Before proceeding, back up any needed data.

To uninstall the LSM software:

  1. Reconfigure any system-specific file systems and swap space, so they no longer use an LSM volume.

  2. Unmount any other file systems that are using LSM volumes, so all LSM volumes can be closed.

    1. Update the /etc/fstab file if necessary, so that it no longer mounts any file systems on an LSM volume.

    2. Stop applications that are using raw LSM volumes and reconfigure them, so that they no longer use LSM volumes.

  3. Identify the disks that are currently configured under LSM:

    # voldisk list
    

  4. Restart LSM in disabled mode (in a cluster, on only one member):

    # vold -k -r reset -d 
    

    This command fails if any volumes are open.

  5. Stop all LSM volume and I/O daemons (in a cluster, on every member):

    # voliod -f set 0
    # voldctl stop
    

  6. Update the disk labels for the disks under LSM control (in the output from step 3).

  7. Remove the LSM directories:

    # rm -r /etc/vol /dev/vol /dev/rvol /etc/vol/volboot
    

  8. Delete the following LSM entries in the /etc/inittab file (in a cluster, for every member):

    lsmr:s:sysinit:/sbin/lsmbstartup -b </dev/console >/dev/console 2>&1 ##LSM
    lsm:23:wait:/sbin/lsmbstartup </dev/console >/dev/console 2>&1 ##LSM
    vol:23:wait:/sbin/vol-reconfig -n </dev/console >/dev/console 2>&1 ##LSM
     
    

  9. Display the installed LSM subsets:

    # setld -i | grep LSM
    

  10. Delete the installed LSM subsets:

    # setld -d OSFLSMBASEnnn OSFLSMBINnnn OSFLSMCLSMTOOLSnnn
    

  11. In the /sys/conf/hostname file (in a cluster, for every member), change the value of the pseudo-device lsm entry from 1 to 0.

    In a cluster, the hostname is the member name, not the cluster alias.

    You can make this change either before or while running the doconfig command; for example:

    # doconfig -c hostname
    

  12. Copy the new kernel to the root (/) directory (in a cluster, on every member):

    # cp /sys/hostname/vmunix /
    

  13. Restart the system or cluster member.

    For information on the appropriate way to restart each member, see the Cluster Administration manual.

    When the system restarts, or after every cluster member restarts, LSM will no longer be installed.