This appendix contains information for sites that are upgrading from TruCluster Software Version 1.5 or Version 1.6 to TruCluster Server Version 5.1B but that either cannot or choose not to use the Option 2 or Option 3 storage mapping and configuration scripts described in Chapter 8.
In general and where possible, we recommend that you use the
procedures and scripts described in
Chapter 8.
However, if your storage topology, system
configurations, or site policy make it impossible to do so, this
appendix describes how to manually gather storage configuration
information in the Available Server Environment (ASE) and how to
manually configure storage on the new Tru64 UNIX system or
single-member cluster.
You are responsible for mapping old-style
(rz*) device names to new-style
(dsk*) device names.
You must create your own upgrade procedure.
Read all of
Chapter 8
and this appendix, decide which upgrade option is a
reasonable starting point for your upgrade, and then modify that option's
procedure.
F.1 Manually Gathering Device and Storage Configuration Information
This section replaces the steps in Option 2 and Option 3 that use the
clu_migrate_save
script to gather information about the current ASE
configuration and storage environment.
First, read Section 8.3.2 and create an up-to-date configuration map for the current ASE.
In
Chapter 8, the
clu_migrate_save
script captures
the current shared storage configuration, including LSM and AdvFS
configuration information.
The
clu_migrate_configure
script reads the
gathered information and configures storage on the new Tru64 UNIX
system or single-member cluster.
However, if you plan to manually
configure storage on the new cluster after connecting the physical
devices, you will not run
clu_migrate_configure
and therefore need to
manually gather storage configuration information for the members of
the ASE.
Note
In addition to manually gathering information, we recommend running
clu_migrate_save
on the current ASE members. Besides gathering storage configuration information, clu_migrate_save
writes each device's special file name (rz*) to the label:
field of that device's disk label. (The script also saves the original device label, as described in Chapter 8; you can restore the original labels after upgrading.) We also recommend running
clu_migrate_configure -x
on the new Tru64 UNIX system or TruCluster Server cluster. The clu_migrate_configure -x
command does not configure storage; it lists the commands it would run if invoked without the -x option and displays a mapping of old-style (rz*)
device names to new-style (dsk*) device names. However, you must have run clu_migrate_save
on the ASE members in order for clu_migrate_configure -x
to provide this mapping.
If you do not use the scripts to help map old-style device names to
new-style device names, exercise extreme care when manually mapping
device names.
You must know the physical location of each
device and be able to use this knowledge, along with utilities like
hwmgr
and
scu, to determine which
dsk*
name is assigned to each device during the
upgrade.
Determining which
dsk*
name the new system or
cluster assigns to a device previously known as
rz*
is not a trivial task; it is the primary reason for providing the
scripts.
Although you do not have to use the migration scripts to
configure storage on the new Tru64 UNIX system or TruCluster Server
cluster, they are highly recommended for making sure that you know
which
dsk*
name each
rz*
device is now known by.
On each member of the ASE:
Run the
sys_check -all
command (Version 114 or
higher) to save system information and create a storage map.
For example:
# /usr/sbin/sys_check -all > file.html
Note
If you do not have the
sys_check
utility on the system, you can get it from the following URL: ftp://ftp.digital.com/pub/DEC/IAS/sys_check
Save both the
/var/ase/config/asecdb
database and
a text copy of the database.
For example:
# cp /var/ase/config/asecdb asecdb.copy
# asemgr -d -C > asecdb.txt
Save information about AdvFS domains and file sets.
For example,
change directory to
/etc/fdmns
and capture the
output of the following commands:
Use the
ls -lR
command to list all domains and
associated devices.
Use the
showfdmn *
command to display information
about file domains and volumes.
Use the
showfsets
command to display fileset
information for each domain.
If you are using LSM, run the
volsave
command to
save the LSM configuration information for all disk groups.
(All ASE
services must be on line before you run the
volsave
command.)
# volsave -d volsave.output
If you are not using the
clu_migrate_save
and
clu_migrate_configure -x
commands
to map device names, you must manually map old-style
(rz*
) device names to their new-style
(dsk*
) counterparts in order to configure storage
on the new system.
To help with the mapping, use the
scu
command to create a list of the old-style
device names and their attributes.
Some suggested attributes are:
vendor, serial number, and bus/target/LUN (not applicable if
ase_fix_config
was used to renumber SCSI buses).
Enter the following
scu
commands to display these
attributes, and save the output to files:
# scu -f device show device
# scu -f device show inq page serial
# scu -f device show nexus
For example:
# scu -f /dev/rrz28g show device | grep -E "Vendor|Product|Firmware"
Vendor Identification: DEC
Product Identification: RZ26L    (C) DEC
Firmware Revision Level: 440C
# scu -f /dev/rrz28g show inq page serial | grep "Product Serial"
Product Serial Number: PCB=420240831056(ZG40831056 ?); \
HDA=0000000042181869
# scu -f /dev/rrz28g show nexus
Device: RZ26L, Bus: 3, Target: 4, Lun: 0, Type: Direct Access
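Of the attributes shown above, the serial number is the most useful for matching a device later, because it survives the upgrade unchanged. The following is a minimal sketch of extracting the serial number from saved output; the function name, the file name rz28g.serial, and the idea of saving one file per device are illustrative conventions, not part of the product:

```shell
# Extract the PCB serial number from saved "scu ... show inq page serial"
# output. The file name and one-file-per-device layout are hypothetical.
extract_serial() {
    # Print only the value after "Product Serial Number:", keeping the
    # PCB=... token (everything up to the first "(" or ";").
    sed -n 's/.*Product Serial Number:[ ]*\([^(;]*\).*/\1/p' "$1"
}

# Demonstrate with the sample scu output shown above.
cat > rz28g.serial <<'EOF'
Product Serial Number: PCB=420240831056(ZG40831056 ?); \
HDA=0000000042181869
EOF
extract_serial rz28g.serial
```

Run this once per saved file and collect the results; the serial numbers become the keys for the device-name mapping described later in this appendix.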
Copy the files that contain saved configuration information to the new Tru64 UNIX system or single-member cluster.
F.2 Manually Configuring Storage on the New Tru64 UNIX System or TruCluster Server Cluster
This section replaces the steps in Option 2 and Option 3 that use the
clu_migrate_configure
script to configure storage on the new Tru64 UNIX
system or single-member cluster.
Note
If you did not run
clu_migrate_save
on the ASE members, you cannot useclu_migrate_configure -x
to display device-name mappings. Before continuing, perform a manual mapping of all old-stylerz*
device names to the new-styledsk*
device names. In the following procedure, substitute the results of your own mapping for those provided byclu_migrate_configure -x
.
If you ran
clu_migrate_save
on the ASE members, on the Tru64 UNIX system
or the single-member cluster, run
clu_migrate_configure -x
:
# /usr/opt/TruCluster/tools/migrate/clu_migrate_configure -x
The
clu_migrate_configure -x
command displays a mapping of old-style device
names to new-style device names.
Use this information when configuring
the storage that was controlled by the ASE.
The following steps provide some guidance when configuring storage:
If you are not using the
clu_migrate_save
and
clu_migrate_configure -x
commands
to map device names, manually map old-style (rz*)
device names to new-style (dsk*) device names.
Use
the
scu
command on the new system to
help map devices to their new device names.
For example:
# scu -f /dev/rdisk/dsk5g show device | grep -E "Vendor|Product|Firmware"
Vendor Identification: DEC
Product Identification: RZ26L    (C) DEC
Firmware Revision Level: 440C
# scu -f /dev/rdisk/dsk5g show inq page serial | grep "Product Serial"
Product Serial Number: PCB=420240831056(ZG40831056 ?); \
HDA=0000000042181869
# scu -f /dev/rdisk/dsk5g show nexus
Device: RZ26L, Bus: 1, Target: 4, Lun: 0, Type: Direct Access
Using the
scu
information that you collected from
the ASE members, create a map of old-style device names to new-style
device names.
The
hwmgr
command is also a useful tool
when manually mapping device names.
The
scu
examples in
Section F.1 show that this device was known as
rz28
in the ASE.
Note that the bus numbers in the two
show nexus
outputs differ.
Because
ase_fix_config
was run in the ASE to renumber SCSI buses, bus numbers are
not the same on both systems and are not a reliable piece of
information for mapping devices.
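Because the serial numbers match across both systems, the mapping table can be built mechanically by joining the old and new scu output on serial number. The following is a minimal sketch, assuming you have first reduced the saved output to two files of "name serial" pairs; the file names are illustrative, the rz28/dsk5 pair comes from the scu examples in this appendix, and the second pair is invented sample data:

```shell
# Hypothetical reduced files: one line per device, "name serial".
# old.serials was gathered on the ASE members, new.serials on the
# new system. The rz29/dsk6 serial is made-up sample data.
cat > old.serials <<'EOF'
rz28 PCB=420240831056
rz29 PCB=420240831999
EOF
cat > new.serials <<'EOF'
dsk5 PCB=420240831056
dsk6 PCB=420240831999
EOF

# Sort each file on the serial-number field, then join on that field
# to pair each old-style name with its new-style name.
sort -k2 old.serials > old.sorted
sort -k2 new.serials > new.sorted
join -1 2 -2 2 -o 1.1,2.1 old.sorted new.sorted > device.map
cat device.map
```

Each line of device.map then reads "rz-name dsk-name", which you can use directly when re-creating LSM disk groups and AdvFS domain links in Section F.2.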
Using the LSM information from the saved
asecdb
database, the output from
sys_check -all
, and the
device mapping table that you created manually or with
clu_migrate_configure -x
,
configure each device and LSM disk group.
Note
The on-disk format of the LSM configuration information has changed. If you need to revert to the ASE from this point on, you will need to restore the LSM information when you import the disk groups on an ASE system.
For every new LSM device, enter:
# voldisk define device
# voldisk online device
For every disk group, enter:
# voldg import disk_group
For every volume, enter:
# volume -g disk_group start volume
For example:
# voldisk define dsk5g
# voldisk online dsk5g
# voldg import toolsdg
lsm:voldg: WARNING: Volume vol01: \
Temporarily renumbered due to conflict
# volume -g toolsdg start vol01
# volume -g toolsdg start vol02
You can ignore the warnings.
To prepare LSM to update names following the next reboot, enter the
lsmupdate_setup
command:
# /sbin/lsm.d/bin/lsmupdate_setup
Using the AdvFS information from the saved
asecdb
database, the output from
sys_check -all
, and the
device mapping table that you created manually or with
clu_migrate_configure -x
,
manually re-create the AdvFS domains that were on the ASE, mapping the
old-style
rz
device names to new-style
dsk
device names and creating the appropriate
/etc/fdmns
entries.
For example:
# mkdir /etc/fdmns/data1_domain
# cd /etc/fdmns/data1_domain
# ln -s /dev/disk/dsk6g
# mkdir /etc/fdmns/tools_dmn
# cd /etc/fdmns/tools_dmn
# ln -s /dev/vol/toolsdg/vol01 toolsdg.vol01
For each domain, enter the
showfsets
domain
command
and verify that the filesets are correct for the domain.
Mount the domains.
For each domain, run the
showfdmn
domain
command.
Add file systems to the
/etc/fstab
file.
Also update any other configuration files that contain
storage information, such as
/etc/exports
.
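Entries for AdvFS filesets use domain#fileset notation in the first field. The following is a minimal sketch of staging such entries; the domain, fileset, and mount-point names are hypothetical (tools_dmn echoes the earlier example), and you should verify the option fields against your own configuration before merging them into /etc/fstab:

```shell
# Append AdvFS fileset entries to a staging file rather than editing
# /etc/fstab directly. Domain, fileset, and mount-point names here
# are hypothetical.
cat >> fstab.new <<'EOF'
data1_domain#data1	/data1	advfs	rw 0 0
tools_dmn#tools	/tools	advfs	rw 0 0
EOF
cat fstab.new
```

After reviewing the staged entries, merge them into /etc/fstab and update any other files, such as /etc/exports, that name the same mount points.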
Reboot the system to configure LSM with the new device names:
# shutdown -r now
Enter the following LSM commands to examine the LSM configuration:
# voldisk list
# volprint -thA