D    Installation Examples

This appendix provides samples of the logs written by the clu_create, clu_add_member, and clu_upgrade commands.

D.1    clu_create Logs

Each time you run clu_create, it writes log messages to /cluster/admin/clu_create.log. Example D-1 shows a sample clu_create log file for a cluster with a Memory Channel interconnect. Example D-2 shows a sample clu_create log file for a cluster with a LAN interconnect.
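
Because the log captures every prompt and response, a quick scan is often enough to confirm that a run completed cleanly. For example, assuming the default log location, you might page through the file or search it for warnings (illustrative commands; any pager or search tool works):

# more /cluster/admin/clu_create.log
# grep -i warning /cluster/admin/clu_create.log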

Example D-1:  Sample clu_create Log File for Memory Channel Interconnect

Do you want to continue creating the cluster? [yes]: [Return]
 
Each cluster has a unique cluster name, which is a hostname
used to identify the entire cluster.
 
Enter a fully-qualified cluster name []:deli.zk3.dec.com
Checking cluster name: deli.zk3.dec.com.
 
You entered 'deli.zk3.dec.com' as your cluster name.
Is this correct? [yes]: [Return]
 
The cluster alias IP address is the IP address associated with the
default cluster alias.  (192.168.168.1 is an example of an IP address.)
 
Enter the cluster alias IP address []: 16.140.160.124
Checking cluster alias IP address: 16.140.160.124
 
You entered '16.140.160.124' as the IP address for the default cluster alias.
Is this correct? [yes]: [Return]
 
The cluster root partition is the disk partition (for example, dsk4b)
that will hold the clusterwide root (/) file system.
 
    Note: The default 'a' partition on most disks is not large
    enough to hold the clusterwide root AdvFS domain.
 
Enter the device name of the cluster root partition []:dsk2g
Checking the cluster root partition: dsk2g.
 
You entered 'dsk2g' as the device name of the cluster root partition.
Is this correct? [yes]: [Return]
 
The cluster usr partition is the disk partition (for example, dsk4g)
that will contain the clusterwide usr (/usr) file system.
 
    Note: The default 'g' partition on most disks is usually
    large enough to hold the clusterwide usr AdvFS domain.
 
Enter the device name of the cluster usr partition []:dsk6c
Checking the cluster usr partition: dsk6c.
 
You entered 'dsk6c' as the device name of the cluster usr partition.
Is this correct? [yes]: [Return]
 
The cluster var device is the disk partition (for example, dsk4h)
that will hold the clusterwide var (/var) file system.
 
    Note: The default 'h' partition on most disks is usually
    large enough to hold the clusterwide var AdvFS domain.
 
Enter the device name of the cluster var partition []:dsk2h
Checking the cluster var partition: dsk2h.
 
You entered 'dsk2h' as the device name of the cluster var partition.
Is this correct? [yes]: [Return]
 
Do you want to define a quorum disk device at this time? [yes]: [Return]
The quorum disk device is the name of the disk (for example, 'dsk5')
that will be used as this cluster's quorum disk.
 
Enter the device name of the quorum disk []:dsk3
Checking the quorum disk device: dsk3.
 
You entered 'dsk3' as the device name of the quorum disk device.
Is this correct? [yes]: [Return]
 
By default the quorum disk is assigned '1' vote(s).
To use this default value, press Return at the prompt.
 
The number of votes for the quorum disk is an integer, usually 0 or 1.
If you select 0 votes then the quorum disk will not contribute votes to the
cluster. If you select 1 vote then the quorum disk must be accessible to
boot and run a single member cluster.
 
Enter the number of votes for the quorum disk [1]: [Return]
Checking number of votes for the quorum disk: 1.
 
You entered '1' as the number of votes for the quorum disk.
Is this correct? [yes]: [Return]
 
The default member ID for the first cluster member is '1'.
To use this default value, press Return at the prompt.
 
A member ID is used to identify each member in a cluster.
Each member must have a unique member ID, which is an integer in
the range 1-63, inclusive.
 
Enter a cluster member ID [1]: [Return]
Checking cluster member ID: 1.
 
You entered '1' as the member ID.
Is this correct? [yes]: [Return]
 
By default the 1st member of a cluster is assigned '1' vote(s).
Checking number of votes for this member: 1.
 
Each member has its own boot disk, which has an associated
device name; for example, 'dsk5'.
 
Enter the device name of the member boot disk []:dsk4
Checking the member boot disk: dsk4.
 
You entered 'dsk4' as the device name of this member's boot disk.
Is this correct? [yes]: [Return]
 
Device 'ics0' is the default virtual cluster interconnect device.
Checking virtual cluster interconnect device: ics0.
 
The virtual cluster interconnect IP name 'swiss-ics0' was formed by
appending '-ics0' to the system's hostname.
To use this default value, press Return at the prompt.
 
Each virtual cluster interconnect interface has a unique IP name (a 
hostname) associated with it.
 
Enter the IP name for the virtual cluster interconnect [swiss-ics0]: [Return]
Checking virtual cluster interconnect IP name: swiss-ics0.
 
You entered 'swiss-ics0' as the IP name for the virtual cluster interconnect.
Is this name correct? [yes]: [Return]
 
The virtual cluster interconnect IP address '10.0.0.1' was created by
replacing the last byte of the default virtual cluster interconnect network
address '10.0.0.0' with the previously chosen member ID '1'.
To use this default value, press Return at the prompt.
 
The virtual cluster interconnect IP address is the IP address
associated with the virtual cluster interconnect IP name.  (192.168.168.1 
is an example of an IP address.)
 
Enter the IP address for the virtual cluster interconnect [10.0.0.1]: [Return]
Checking virtual cluster interconnect IP address: 10.0.0.1.
 
You entered '10.0.0.1' as the IP address for the virtual cluster interconnect.
Is this address correct? [yes]: [Return]
 
What type of cluster interconnect will you be using?
 
    Selection   Type of Interconnect
----------------------------------------------------------------------
         1      Memory Channel
         2      Local Area Network
         3      None of the above
         4      Help
         5      Display all options again
----------------------------------------------------------------------
Enter your choice [1]:4
A cluster must have a dedicated cluster interconnect to which all
members are connected.  The cluster interconnect serves as the primary
communications channel between cluster members. For hardware, the cluster
interconnect can use either Memory Channel or a private LAN. For
more information about cluster interconnect hardware, see the
Cluster Hardware Configuration manual.
 
What type of cluster interconnect will you be using?
 
    Selection   Type of Interconnect
----------------------------------------------------------------------
         1      Memory Channel
         2      Local Area Network
         3      None of the above
         4      Help
         5      Display all options again
----------------------------------------------------------------------
Enter your choice [1]: [Return]
You selected option '1' for the cluster interconnect.
Is that correct? (y/n) [y]: [Return]
 
 
Device 'mc0' is the default physical cluster interconnect interface device.
Checking physical cluster interconnect interface device name(s): mc0.
 
 
You entered the following information:
 
    Cluster name:                                            deli.zk3.dec.com
    Cluster alias IP Address:                                16.140.160.124
    Clusterwide root partition:                              dsk2g
    Clusterwide usr  partition:                              dsk6c
    Clusterwide var  partition:                              dsk2h
    Clusterwide i18n partition:                              Directory-In-/usr
    Quorum disk device:                                      dsk3
    Number of votes assigned to the quorum disk:             1
    First member's member ID:                                1
    Number of votes assigned to this member:                 1
    First member's boot disk:                                dsk4
    First member's virtual cluster interconnect device name: ics0
    First member's virtual cluster interconnect IP name:     swiss-ics0
    First member's virtual cluster interconnect IP address:  10.0.0.1
    First member's physical cluster interconnect devices:    mc0
    First member's NetRAIN device name:                      Not-Applicable
    First member's physical cluster interconnect IP address: Not-Applicable
 
If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.
Do you want to continue to create the cluster? [yes]: [Return]
 
Creating required disk labels.
  Creating disk label on member disk: dsk4.
  Initializing cnx partition on member disk: dsk4h.
  Creating disk label on quorum disk: dsk3.
  Initializing cnx partition on quorum disk: dsk3h.
 
Creating AdvFS domains:
  Creating AdvFS domain 'root1_domain#root' on partition '/dev/disk/dsk4a'.
  Creating AdvFS domain 'cluster_root#root' on partition '/dev/disk/dsk2g'.
  Creating AdvFS domain 'cluster_usr#usr' on partition '/dev/disk/dsk6c'.
  Creating AdvFS domain 'cluster_var#var' on partition '/dev/disk/dsk2h'.
 
Populating clusterwide root, usr, and var file systems:
  Copying root file system to 'cluster_root#root'.
....
  Copying usr file system to 'cluster_usr#usr'.
.........................................
  Copying var file system to 'cluster_var#var'.
...
 
Creating Context Dependent Symbolic Links (CDSLs) for file systems:
  Creating CDSLs in root file system.
  Creating CDSLs in usr  file system.
  Creating CDSLs in var  file system.
  Creating links between clusterwide file systems.
 
Populating member's root file system.
 
Modifying configuration files required for cluster operation:
  Creating /etc/fstab file.
  Configuring cluster alias.
  Updating /etc/hosts - adding IP address '16.140.160.124' and hostname 'deli.zk3.dec.com'.
  Updating member-specific /etc/inittab file with 'cms' entry.
  Updating /etc/hosts - adding IP address '10.0.0.1' and hostname 'swiss-ics0'.
  Updating /etc/rc.config file.
  Updating /etc/sysconfigtab file.
  Retrieving cluster_root major and minor device numbers.
Warning: The following sysconfig variables are being set:
 kmem_debug=0xe
 kmem_audit_count=5000
 lockmode=4
 rt_preempt_opt=1
  Creating cluster device file CDSLs.
  Updating /.rhosts - adding hostname 'deli.zk3.dec.com'.
  Updating /etc/hosts.equiv - adding hostname 'deli.zk3.dec.com'.
  Updating /.rhosts - adding hostname 'swiss-ics0'.
  Updating /etc/hosts.equiv - adding hostname 'swiss-ics0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'sl0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tu0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tun0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tun624'.
  Updating /etc/cfgmgr.auth - adding hostname 'swiss.zk3.dec.com'.
  Finished updating member1-specific area.
 
Building a kernel for this member.
  Saving kernel build configuration.
  The kernel will now be configured using the doconfig program.
 
*** Warning ***
  File in /usr/sys/BINARY found as a file, expected symlink: GENERIC.mod.
 
*** Warning ***
  File in /usr/sys/BINARY found as a file, expected symlink: GENERIC_EXTRAS.mod.
 
*** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
 
Saving /sys/conf/SWISS as /sys/conf/SWISS.bck
 
 
*** PERFORMING KERNEL BUILD ***
	Working....Wed Feb 20 14:37:35 EST 2002
	Working....Wed Feb 20 14:39:40 EST 2002
 
The new kernel is /sys/SWISS/vmunix
  Finished running the doconfig program.
 
  The kernel build was successful and the new kernel
   has been copied to this member's boot disk.
  Restoring kernel build configuration.
 
Updating console variables.
  Setting console variable 'bootdef_dev' to dsk4.
  Setting console variable 'boot_dev' to dsk4.
  Setting console variable 'boot_reset' to ON.
  Saving console variables to non-volatile storage.
 
clu_create: Cluster created successfully.
 
To run this system as a single-member cluster, it must be rebooted.
If you answer yes to the following question, clu_create will reboot the
system for you now. If you answer no, you must manually reboot the
system after clu_create exits.
Would you like clu_create to reboot this system now? [yes]: [Return]
Shutdown at 14:46 (in 0 minutes) [pid 26664]
 

Example D-2:  Sample clu_create Log File for LAN Interconnect

Do you want to continue creating the cluster? [yes]: [Return]
 
*** Info ***
 Memory Channel hardware not found in system; continuing ...
 
Each cluster has a unique cluster name, which is a hostname
used to identify the entire cluster.
 
Enter a fully-qualified cluster name []:deli.zk3.dec.com
Checking cluster name: deli.zk3.dec.com.
 
You entered 'deli.zk3.dec.com' as your cluster name.
Is this correct? [yes]: [Return]
 
The cluster alias IP address is the IP address associated with the
default cluster alias.  (192.168.168.1 is an example of an IP address.)
 
Enter the cluster alias IP address []: 16.140.112.209
Checking cluster alias IP address: 16.140.112.209
 
You entered '16.140.112.209' as the IP address for the default cluster alias.
Is this correct? [yes]: [Return]
 
The cluster root partition is the disk partition (for example, dsk4b)
that will hold the clusterwide root (/) file system.
 
    Note: The default 'a' partition on most disks is not large
    enough to hold the clusterwide root AdvFS domain.
 
Enter the device name of the cluster root partition []:dsk7b
Checking the cluster root partition: dsk7b.
 
You entered 'dsk7b' as the device name of the cluster root partition.
Is this correct? [yes]: [Return]
 
The cluster usr partition is the disk partition (for example, dsk4g)
that will contain the clusterwide usr (/usr) file system.
 
    Note: The default 'g' partition on most disks is usually
    large enough to hold the clusterwide usr AdvFS domain.
 
Enter the device name of the cluster usr partition []:dsk7g
Checking the cluster usr partition: dsk7g.
 
You entered 'dsk7g' as the device name of the cluster usr partition.
Is this correct? [yes]: [Return]
 
The cluster var device is the disk partition (for example, dsk4h)
that will hold the clusterwide var (/var) file system.
 
    Note: The default 'h' partition on most disks is usually
    large enough to hold the clusterwide var AdvFS domain.
 
Enter the device name of the cluster var partition []:dsk7h
Checking the cluster var partition: dsk7h.
 
You entered 'dsk7h' as the device name of the cluster var partition.
Is this correct? [yes]: [Return]
 
Do you want to define a quorum disk device at this time? [yes]: [Return]
The quorum disk device is the name of the disk (for example, 'dsk6')
that will be used as this cluster's quorum disk.
 
Enter the device name of the quorum disk []:dsk6
Checking the quorum disk device: dsk6.
 
You entered 'dsk6' as the device name of the quorum disk device.
Is this correct? [yes]: [Return]
 
By default the quorum disk is assigned '1' vote(s).
To use this default value, press Return at the prompt.
 
The number of votes for the quorum disk is an integer, usually 0 or 1.
If you select 0 votes then the quorum disk will not contribute votes to the
cluster. If you select 1 vote then the quorum disk must be accessible to
boot and run a single member cluster.
 
Enter the number of votes for the quorum disk [1]: [Return]
Checking number of votes for the quorum disk: 1.
 
You entered '1' as the number of votes for the quorum disk.
Is this correct? [yes]: [Return]
 
The default member ID for the first cluster member is '1'.
To use this default value, press Return at the prompt.
 
A member ID is used to identify each member in a cluster.
Each member must have a unique member ID, which is an integer in
the range 1-63, inclusive.
 
Enter a cluster member ID [10]: [Return]
Checking cluster member ID: 10.
 
You entered '10' as the member ID.
Is this correct? [yes]: [Return]
 
By default the 1st member of a cluster is assigned '1' vote(s).
Checking number of votes for this member: 1.
 
Each member has its own boot disk, which has an associated
device name; for example, 'dsk5'.
 
Enter the device name of the member boot disk []:dsk2
Checking the member boot disk: dsk2.
 
You entered 'dsk2' as the device name of this member's boot disk.
Is this correct? [yes]: [Return]
 
Device 'ics0' is the default virtual cluster interconnect device.
Checking virtual cluster interconnect device: ics0.
 
The virtual cluster interconnect IP name 'pepicelli-ics0' was formed by
appending '-ics0' to the system's hostname.
To use this default value, press Return at the prompt.
 
Each virtual cluster interconnect interface has a unique IP name (a 
hostname) associated with it.
 
Enter the IP name for the virtual cluster interconnect [pepicelli-ics0]: [Return]
Checking virtual cluster interconnect IP name: pepicelli-ics0.
 
You entered 'pepicelli-ics0' as the IP name for the virtual cluster interconnect.
Is this name correct? [yes]: [Return]
 
The virtual cluster interconnect IP address '10.0.0.10' was created by
replacing the last byte of the default virtual cluster interconnect network
address '10.0.0.0' with the previously chosen member ID '10'.
To use this default value, press Return at the prompt.
 
The virtual cluster interconnect IP address is the IP address
associated with the virtual cluster interconnect IP name.  (192.168.168.1 
is an example of an IP address.)
 
Enter the IP address for the virtual cluster interconnect [10.0.0.10]: [Return]
Checking virtual cluster interconnect IP address: 10.0.0.10.
 
You entered '10.0.0.10' as the IP address for the virtual cluster interconnect.
Is this address correct? [yes]: [Return]
 
The physical cluster interconnect interface device is the name of the
physical device(s) that will be used for low level cluster node
communications. Examples of the physical cluster interconnect interface
device name are: tu0, ee0, and nr0.
 
Enter the physical cluster interconnect device name(s) []:alt0
Would you like to place this Ethernet device into a NetRAIN set? [yes]:n
Checking physical cluster interconnect interface device name(s): alt0.
 
You entered 'alt0' as your physical cluster interconnect interface
device name(s). Is this correct? [yes]: [Return]
 
The physical cluster interconnect IP name 'member10-icstcp0' was formed by
appending '-icstcp0' to the word 'member' and the member ID.
Checking physical cluster interconnect IP name: member10-icstcp0.
 
The physical cluster interconnect IP address '10.1.0.10' was created by
replacing the last byte of the default cluster interconnect network address
'10.1.0.0' with the previously chosen member ID '10'.
To use this default value, press Return at the prompt.
 
The cluster physical interconnect IP address is the IP address
associated with the physical cluster interconnect IP name. (192.168.168.1
is an example of an IP address.)
 
Enter the IP address for the physical cluster interconnect [10.1.0.10]: [Return]
Checking physical cluster interconnect IP address: 10.1.0.10.
 
You entered '10.1.0.10' as the IP address for the physical cluster interconnect.
Is this address correct? [yes]: [Return]
 
 
You entered the following information:
 
    Cluster name:                                            deli.zk3.dec.com
    Cluster alias IP Address:                                16.140.112.209
    Clusterwide root partition:                              dsk7b
    Clusterwide usr  partition:                              dsk7g
    Clusterwide var  partition:                              dsk7h
    Clusterwide i18n partition:                              Directory-In-/usr
    Quorum disk device:                                      dsk6
    Number of votes assigned to the quorum disk:             1
    First member's member ID:                                10
    Number of votes assigned to this member:                 1
    First member's boot disk:                                dsk2
    First member's virtual cluster interconnect device name: ics0
    First member's virtual cluster interconnect IP name:     pepicelli-ics0
    First member's virtual cluster interconnect IP address:  10.0.0.10
    First member's physical cluster interconnect devices:    alt0
    First member's NetRAIN device name:                      Not-Applicable
    First member's physical cluster interconnect IP address: 10.1.0.10
 
If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.
Do you want to continue to create the cluster? [yes]: [Return]
 
*** Info ***
 Beginning configuration of initial cluster member. 
 
Configuring Disks:                          
Creating required disk labels.
  Creating disk label on member disk: dsk2.
  Initializing cnx partition on member disk: dsk2h.
  Creating disk label on quorum disk: dsk6.
  Initializing cnx partition on quorum disk: dsk6h.
Configuring AdvFs Domains:                  
Creating AdvFS domains:
  Creating AdvFS domain 'root10_domain#root' on partition '/dev/disk/dsk2a'.
  Creating AdvFS domain 'cluster_root#root' on partition '/dev/disk/dsk7b'.
  Creating AdvFS domain 'cluster_usr#usr' on partition '/dev/disk/dsk7g'.
  Creating AdvFS domain 'cluster_var#var' on partition '/dev/disk/dsk7h'.
Creating Clusterwide File System:           
Populating clusterwide root, usr, and var file systems:
  Copying root file system to 'cluster_root#root'.
  Copying usr file system to 'cluster_usr#usr'.
  Copying var file system to 'cluster_var#var'.
Setting Up Member Area:                     
Creating Context Dependent Symbolic Links (CDSLs) for file systems:
  Creating CDSLs in root file system.
  Creating CDSLs in usr  file system.
  Creating CDSLs in var  file system.
  Creating links between clusterwide file systems.
Creating Member's Boot Disk:                
Populating member's root file system.
Setting up cluster configuration files:     
Modifying configuration files required for cluster operation:
  Creating /etc/fstab file.
  Configuring cluster alias.
  Updating /etc/hosts - adding IP address '16.140.112.209' and hostname 'deli.zk3.dec.com'.
  Updating member-specific /etc/inittab file with 'cms' entry.
  Updating /etc/hosts - adding IP address '10.0.0.10' and hostname 'pepicelli-ics0'.
  Updating /etc/hosts - adding IP address '10.1.0.10' and hostname 'member10-icstcp0'.
  Updating /etc/rc.config file.
  Updating /etc/sysconfigtab file.
  Retrieving cluster_root major and minor device numbers.
Warning: The following sysconfig variables are being set:
 kmem_debug=0xe
 kmem_audit_count=5000
 lockmode=4
 rt_preempt_opt=1
  Creating cluster device file CDSLs.
  Updating /.rhosts - adding hostname 'deli.zk3.dec.com'.
  Updating /etc/hosts.equiv - adding hostname 'deli.zk3.dec.com'.
  Updating /.rhosts - adding hostname 'pepicelli-ics0'.
  Updating /etc/hosts.equiv - adding hostname 'pepicelli-ics0'.
  Updating /.rhosts - adding hostname 'member10-icstcp0'.
  Updating /etc/hosts.equiv - adding hostname 'member10-icstcp0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'ee0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'sl0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tu0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tu1'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tu2'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tu3'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tun0'.
  Updating /etc/ifaccess.conf - adding deny entry for 'tun624'.
  Finished updating member10-specific area.
 
Building a kernel for this member.
  Saving kernel build configuration.
  The kernel will now be configured using the doconfig program.
 
*** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
 
Saving /sys/conf/PEPICELLI as /sys/conf/PEPICELLI.bck
 
 
*** PERFORMING KERNEL BUILD ***
	Working....Fri Feb 15 10:51:37 EST 2002
 
The new kernel is /sys/PEPICELLI/vmunix
  Finished running the doconfig program.
 
  The kernel build was successful and the new kernel
   has been copied to this member's boot disk.
  Restoring kernel build configuration.
 
Updating console variables.
  Setting console variable 'bootdef_dev' to dsk2.
  Setting console variable 'boot_dev' to dsk2.
  Setting console variable 'boot_reset' to ON.
  Saving console variables to non-volatile storage.
 
Cluster created successfully.
Cluster log created in /cluster/admin/clu_create.log
 
To run this system as a single-member cluster, it must be rebooted.
If you answer yes to the following question, clu_create will reboot the
system for you now. If you answer no, you must manually reboot the
system after clu_create exits.
Would you like clu_create to reboot this system now? [yes]: [Return]
Shutdown at 10:52 (in 0 minutes) [pid 6011]
 

D.2    clu_add_member Log

Each time you run clu_add_member, it writes log messages to /cluster/admin/clu_add_member.log. Example D-3 shows a sample clu_add_member log file.
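
As a quick check after the command finishes, you can confirm the final status messages at the end of the log; for example, assuming the default log location:

# tail /cluster/admin/clu_add_member.log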

Example D-3:  Sample clu_add_member Log File

Do you want to continue adding this member? [yes]: [Return]
 
Each cluster member has a hostname, which is assigned to the HOSTNAME
variable in /etc/rc.config.
 
Enter the new member's fully qualified hostname []: polishham.zk3.dec.com
Checking member's hostname: polishham.zk3.dec.com
 
You entered 'polishham.zk3.dec.com' as this member's hostname.
Is this name correct? [yes]: [Return]
 
The next available member ID for a cluster member is '2'.
To use this default value, press Return at the prompt.
 
A member ID is used to identify each member in a cluster.
Each member must have a unique member ID, which is an integer in
the range 1-63, inclusive.
 
Enter a cluster member ID [2]: [Return]
Checking cluster member ID: 2
 
You entered '2' as the member ID.
Is this correct? [yes]: [Return]
 
By default, when the current cluster's expected votes are greater than 1,
each added member is assigned 1 vote(s). Otherwise, each added member is
assigned 0 (zero) votes.
To use this default value, press Return at the prompt.
 
The number of votes for a member is an integer, usually 0 or 1.
Enter the number of votes for this member [1]: [Return]
Checking number of votes for this member: 1
 
You entered '1' as the number of votes for this member.
Is this correct? [yes]: [Return]
 
Each member has its own boot disk, which has an associated
device name; for example, 'dsk5'.
 
Enter the device name of the member boot disk []: dsk12
Checking the member boot disk: dsk12
 
You entered 'dsk12' as the device name of this member's boot disk.
Is this correct? [yes]: [Return]
 
Device 'ics0' is the default virtual cluster interconnect device
Checking virtual cluster interconnect device: ics0
 
The virtual cluster interconnect IP name 'polishham-ics0' was formed by
appending '-ics0' to the system's hostname.
To use this default value, press Return at the prompt.
 
Each virtual cluster interconnect interface has a unique IP name (a 
hostname) associated with it.
 
Enter the IP name for the virtual cluster interconnect [polishham-ics0]: [Return]
Checking virtual cluster interconnect IP name: polishham-ics0
 
You entered 'polishham-ics0' as the IP name for the virtual cluster interconnect.
Is this name correct? [yes]: [Return]
 
The virtual cluster interconnect IP address '10.0.0.2' was created by
replacing the last byte of the virtual cluster interconnect network address
'10.0.0.0' with the previously chosen member ID '2'.
To use this default value, press Return at the prompt.
 
The virtual cluster interconnect IP address is the IP address
associated with the virtual cluster interconnect IP name.  (192.168.168.1 
is an example of an IP address.)
 
Enter the IP address for the virtual cluster interconnect [10.0.0.2]: [Return]
Checking virtual cluster interconnect IP address: 10.0.0.2
 
You entered '10.0.0.2' as the IP address for the virtual cluster interconnect.
Is this address correct? [yes]: [Return]
 
Device 'mc0' is the default physical cluster interconnect interface device
To use this default value, press Return at the prompt.
 
The physical cluster interconnect interface device is the name of the
physical device(s) which will be used for low level cluster node
communications. Examples of the physical cluster interconnect interface
device name are: tu0, ee0, and nr0.
 
Enter the physical cluster interconnect device name(s) [mc0]: [Return]
Checking physical cluster interconnect interface device name(s): mc0
 
You entered 'mc0' as your physical cluster interconnect interface
device name(s). Is this correct? [yes]: [Return]
 
Each cluster member must have its own registered TruCluster Server
license. The data required to register a new member is typically located on
the License PAK certificate or it may have been previously placed on your
system as a partial or complete license data file. If you are prepared to
enter this license data at this time, clu_add_member can configure the new
member to use this license data. If you do not have the license data at this
time you can enter this data on the new member when it is up and running.
Do you wish to register the TruCluster Server license for this new member at
this time? [yes]: no
 
You entered the following information:
 
    Member's hostname:                                 polishham.zk3.dec.com
    Member's ID:                                       2
    Number of votes assigned to this member:           1
    Member's boot disk:                                dsk12
    Member's virtual cluster interconnect devices:     ics0
    Member's virtual cluster interconnect IP name:     polishham-ics0
    Member's virtual cluster interconnect IP address:  10.0.0.2
    Member's physical cluster interconnect devices:    mc0
    Member's NetRAIN device name:                      Not-Applicable
    Member's physical cluster interconnect IP address: Not-Applicable
    Member's cluster license:                          Not Entered
 
If you want to change any of the above information, answer 'n' to the
following prompt. You will then be given an opportunity to change your
selections.
Do you want to continue to add this member? [yes]: [Return]
 
Creating required disk labels.
  Creating disk label on member disk : dsk12
  Initializing cnx partition on member disk : dsk12h
 
Creating AdvFS domains:
  Creating AdvFS domain 'root2_domain#root' on partition '/dev/disk/dsk12a'.
 
Creating cluster member-specific files:
  Creating new member's root member-specific files
  Creating new member's usr  member-specific files
  Creating new member's var  member-specific files
  Creating new member's boot member-specific files
 
Modifying configuration files required for new member operation:
  Updating /etc/hosts - adding IP address '10.0.0.2' and hostname 'polishham-ics0'
  Updating /etc/rc.config
  Updating /etc/sysconfigtab
  Updating member-specific /etc/inittab file with 'cms' entry.
  Updating /etc/securettys - adding ptys entry
  Updating /.rhosts - adding hostname 'polishham-ics0'
  Updating /etc/hosts.equiv - adding hostname 'polishham-ics0'
  Updating /etc/cfgmgr.auth - adding hostname 'polishham.zk3.dec.com'
  Configuring cluster alias.
  Configuring Network Time Protocol for new member
  Adding interface 'pepicelli-ics0' as an NTP peer to member 'polishham.zk3.dec.com' 
  Adding interface 'polishham-ics0' as an NTP peer to member 'pepicelli.zk3.dec.com' 
 
Configuring automatic subset configuration and kernel build.
 
clu_add_member: Initial member 2 configuration completed successfully.
From the newly added member's console, perform the following steps to
complete the newly added member's configuration:
 
  1. Set the console variable 'boot_osflags' to 'A'.
  2. Identify the console name of the newly added member's boot device.
 
     >>> show device
 
     The newly added member's boot device has the following properties:
 
     Manufacturer: DEC
     Model: HSG80
     Target: IDENTIFIER=4
     Lun: UNKNOWN
     Serial Number: SCSI-WWID:01000010:6000-1fe1-0006-3f10-0009-0270-0619-0005
 
     Note: The SCSI bus number may differ when viewed from different members.
 
  3. Boot the newly added member using genvmunix:
 
      >>> boot -file genvmunix <new-member-boot-device>
 
     During this initial boot the newly added member will:
 
     o  Configure each installed subset.
 
     o  Attempt to build and install a new kernel. If the system cannot
        build a kernel, it starts a shell where you can attempt to build
        a kernel manually. If the build succeeds, copy the new kernel to
        /vmunix. When you are finished, exit the shell using ^D or 'exit'.
 
     o  The newly added member will attempt to set boot-related console
        variables and continue to boot to multi-user mode.
 
     o  After the newly added member boots, you should set up your system
        default network interface using the appropriate system management
        command.
 

D.3    clu_upgrade Log

Each time you perform a rolling upgrade, clu_upgrade writes log messages to /cluster/admin/clu_upgrade.log. When the rolling upgrade is complete, clu_upgrade moves the log file to the /cluster/admin/clu_upgrade/history/release_version directory. Example D-4 shows a sample clu_upgrade log file for a rolling upgrade of a cluster from Version 5.1A to Version 5.1B. (The log is slightly reformatted for readability.)
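
After the clean stage completes, you can verify that the log was archived by listing the history directory; for example (release_version stands for the actual version-named subdirectory, as described above):

# ls /cluster/admin/clu_upgrade/history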

Example D-4:  Sample clu_upgrade Log File

#############################################################################
clu_upgrade Command: clu_upgrade upgrade setup 
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 08:41:15 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'setup' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
 
What type of rolling upgrade will be performed?
 
    Selection   Type of Upgrade
----------------------------------------------------------------------
         1      An upgrade using the installupdate command
         2      A patch using the dupatch command
         3      A new hardware delivery using the nhd_install command
         4      All of the above
         5      None of the above
         6      Help
         7      Display all options again
----------------------------------------------------------------------
Enter your Choices (for example, 1 2 2-3):1
You selected the following rolling upgrade options: 1
Is that correct? (y/n) [y]: [Return]
 
 
Enter the full pathname of the cluster kit mount \
    point ['???']:/cdrom/TruCluster/kit
A cluster kit has been found in the following location:
 
/cdrom/TruCluster/kit
 
This kit has the following version information:
 
'Tru64 UNIX TruCluster(TM) Server Software T5.1B-4 (Rev 639)'
 
Is this the correct cluster kit for the update being performed? [yes]: [Return]
 
Checking inventory and available disk space.
Marking stage 'setup' as 'started'.
Copying cluster kit '/cdrom/TruCluster/kit' to '/var/adm/update/TruClusterKit/'.
 
Creating tagged files.
 
............................................................................\
............................................................................\
............................................................................\
...............................................................
.............................................
The cluster upgrade 'setup' stage has completed successfully.
Reboot all cluster members except member: '1'
Marking stage 'setup' as 'completed'.
 
The 'setup' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade setup 
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 09:36:36 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Invoked at: Tue Apr 30 10:10:33 EDT 2002
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Exited at: Tue Apr 30 10:10:34 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Invoked at: Tue Apr 30 10:31:40 EDT 2002
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Exited at: Tue Apr 30 10:31:41 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade upgrade preinstall 
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 10:34:02 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'preinstall' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
 
Checking tagged files.
............................................................................\
............................................................................\
............................................................................\
.............
Marking stage 'preinstall' as 'started'.
 
Backing up member-specific data for member: 1
.................
Marking stage 'preinstall' as 'completed'.
The cluster upgrade 'preinstall' stage has completed successfully.
On the lead member, perform the following steps before running
the installupdate command:
 
# shutdown -h now
>>> boot -fl s
 
When the system reaches single-user mode run the following commands:
 
# init s
# bcheckrc
# lmf reset
 
See the Tru64 UNIX Installation Guide for detailed information on using the
installupdate command.
 
The 'preinstall' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade preinstall 
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 11:41:15 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade check install 
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 12:33:47 EDT 2002
-----------------------------------------------------------------------------
Checking install...
The 'install' stage of cluster upgrade is ready to be run.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade check install 
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 12:33:48 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 14:51:51 EDT 2002
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 14:51:51 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 15:12:43 EDT 2002
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 15:12:44 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade upgrade postinstall 
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 15:13:47 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'postinstall' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
Marking stage 'postinstall' as 'started'.
Marking stage 'postinstall' as 'completed'.
 
The 'postinstall' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade postinstall 
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 15:14:00 EDT 2002
 
USER SETTINGS:
--------------
OPTION_KERNEL=y
 
 
*** END UPDATE INSTALLATION (Tue Apr 30 15:46:51 EDT 2002) ***
 
 
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Invoked at: Tue Apr 30 16:13:28 EDT 2002
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Exited at: Tue Apr 30 16:13:29 EDT 2002
 
USER SETTINGS:
--------------
OPTION_KERNEL=y
 
 
*** END UPDATE INSTALLATION (Tue Apr 30 16:45:18 EDT 2002) ***
 
 
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Invoked at: Tue Apr 30 17:08:57 EDT 2002
-----------------------------------------------------------------------------
#############################################################################
clu_upgrade Command: clu_upgrade upgrade roll 
    On Host: pepicelli3.zk3.dec.com Invoked at: Tue Apr 30 16:28:03 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'roll' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
Marking stage 'roll' as 'started'.
 
*** Info ***
This is the last member requiring a roll.
 
Backing up member-specific data for member: 3
.............
 
 
*** START UPDATE INSTALLATION (Tue Apr 30 16:32:50 EDT 2002) ***
    FLAGS: 
 
 
 
Checking for installed supplemental hardware support...
 
Completed check for installed supplemental hardware support
Checking for retired hardware...done.
 
Initializing new version information (OSF)...done
Initializing new version information (TCR)...done
Initializing the list of member specific files for member3...done
 
 
 
Update Installation has detected the following update installable
products on your system:
 
	Tru64 UNIX V5.1A Operating System ( Rev 1885 )
	Tru64 UNIX TruCluster(TM) Server Software V5.1A (Rev 1312)
 
 
These products will be updated to the following versions:
 
	Tru64 UNIX T5.1B-4 Operating System ( Rev 459 )
	Tru64 UNIX TruCluster(TM) Server Software T5.1B-4 (Rev 639)
 
 
It is recommended that you update your system firmware and perform a
complete system backup before proceeding.  A log of this update
installation can be found at /var/adm/smlogs/update.log.
 
 
Do you want to continue the Update Installation?  (y/n) []: y
Do you want to select optional kernel components?  (y/n) [n]: [Return]
Do you want to archive obsolete files?  (y/n) [n]: [Return]
 
*** Checking for conflicting software ***
 
 
The following software may require reinstallation after the Update
Installation is completed:
 
	Advanced Server for UNIX V5.1A ECO3
	Legato NetWorker
 
Do you want to continue the Update Installation?  (y/n) [y]: [Return]
*** Determining installed Operating System software ***
 
	Working....Tue Apr 30 16:36:34 EDT 2002
 
*** Determining installed Tru64 UNIX TruCluster(TM) Server \
    Software V5.1A (Rev 1312) software ***
 
 
*** Determining kernel components ***
 
 
 
*** KERNEL OPTION SELECTION ***
 
    Selection   Kernel Option
--------------------------------------------------------------
	1	System V Devices
	2	NTP V3 Kernel Phase Lock Loop (NTP_TIME)
	3	Kernel Breakpoint Debugger (KDEBUG)
	4	Packetfilter driver (PACKETFILTER)
	5	IP Gateway Screening Facility (GWSCREEN)
	6	IP-in-IP Tunneling (IPTUNNEL)
	7	IP Version 6 (IPV6)
	8	Point-to-Point Protocol (PPP)
	9	STREAMS pckt module (PCKT)
	10	X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
	11	Digital Versatile Disk File System (DVDFS)
	12	ISO 9660 Compact Disc File System (CDFS)
	13	Audit Subsystem
	14	ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
	15	IP Switching over ATM (ATMIFMP)
	16	LAN Emulation over ATM (LANE)
	17	Classical IP over ATM (ATMIP)
--- MORE TO FOLLOW ---
Enter your choices or press <Return>
to display the next screen.
 
Choices (for example, 1 2 4-6):
 	18	ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
	19	Asynchronous Transfer Mode (ATM)
	20	All of the above
	21	None of the above
	22	Help
	23	Display all options again
--------------------------------------------------------------
 
Enter your choices.
 
Choices (for example, 1 2 4-6) [21]:20
You selected the following kernel options:
	System V Devices
	NTP V3 Kernel Phase Lock Loop (NTP_TIME)
	Kernel Breakpoint Debugger (KDEBUG)
	Packetfilter driver (PACKETFILTER)
	IP Gateway Screening Facility (GWSCREEN)
	IP-in-IP Tunneling (IPTUNNEL)
	IP Version 6 (IPV6)
	Point-to-Point Protocol (PPP)
	STREAMS pckt module (PCKT)
	X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
	Digital Versatile Disk File System (DVDFS)
	ISO 9660 Compact Disc File System (CDFS)
	Audit Subsystem
	ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
	IP Switching over ATM (ATMIFMP)
	LAN Emulation over ATM (LANE)
	Classical IP over ATM (ATMIP)
	ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
	Asynchronous Transfer Mode (ATM)
 
Is that correct? (y/n) [y]: [Return]
*** Checking for file type conflicts ***
 
	Working....Tue Apr 30 16:38:16 EDT 2002
 
*** Checking for obsolete files ***
 
 
*** Checking file system space ***
 
Update Installation is now ready to begin modifying the files necessary
to reboot the cluster member off of the new OS. Please check the
/var/adm/smlogs/update.log and /var/adm/smlogs/it.log files for errors
after the installation is complete.
 
 
Do you want to continue the Update Installation? (y/n) [n]: [Return] 
 
*** Starting configuration merges for Update Install ***
 
 
	*** Merging new file ./etc/.new..sysconfigtab into 
  existing ./etc/../cluster/members/member3/boot_partition/etc/sysconfigtab 
 
  Merging /etc/../cluster/members/member3/boot_partition/etc/sysconfigtab
		Merge completed successfully.
 
 The critical files needed for reboot have been moved into place. The
 system will now reboot with the generic kernel for Compaq Computer
 Corporation Tru64 UNIX T5.1B-4 and complete the rolling upgrade for
 this member (member3).
 
clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.1585508
 
The 'roll' stage has completed successfully.  This
member must be rebooted in order to run with the newly installed software.
Do you want to reboot this member at this time? []:yes
You indicated that you want to reboot this member at this time.
Is that correct? [yes]: [Return]
 
The 'roll' stage of the upgrade has completed successfully.
kill: 1573021: no such process
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade roll 
    On Host: pepicelli3.zk3.dec.com Exited at: Tue Apr 30 16:45:52 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade upgrade roll 
    On Host: pepicelli1.zk3.dec.com Invoked at: Tue Apr 30 15:27:32 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'roll' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
 
*** Warning ***
The cluster upgrade command was unable to find or verify the configuration file
used to build this member's kernel. clu_upgrade attempts to make a backup
copy of the configuration file, which it would restore as required during a
clu_upgrade undo command. To use the default configuration file, or to
continue without backing up a configuration file, press Return.
Enter the name of the configuration file for this member [WAVY1]: [Return]
Marking stage 'roll' as 'started'.
 
*** Info ***
The current quorum conditions indicate that beginning a roll of another
member at this time may result in the loss of quorum.
 
Most likely, the limiting factor is the number of currently rolling members
that also contribute member votes.
 
You may attempt to run the clu_upgrade roll command on a member that
is not contributing member votes to the cluster.
 
You may also use the clu_quorum command to change to zero the votes of
members that have not rolled.
 
If this problem persists, use the clu_upgrade verbose command to identify
the members that are currently DOWN or rolling. Use the clu_quorum command
to identify and change the current quorum vote configuration of the
cluster.
 
 
Backing up member-specific data for member: 2
..................
 
 
*** START UPDATE INSTALLATION (Tue Apr 30 15:34:05 EDT 2002) ***
    FLAGS: 
 
 
 
Checking for installed supplemental hardware support...
 
Completed check for installed supplemental hardware support
Checking for retired hardware...done.
 
Initializing new version information (OSF)...done
Initializing new version information (TCR)...done
Initializing the list of member specific files for member2...done
 
 
 
Update Installation has detected the following update installable
products on your system:
 
	Tru64 UNIX V5.1A Operating System ( Rev 1885 )
	Tru64 UNIX TruCluster(TM) Server Software V5.1A (Rev 1312)
 
 
These products will be updated to the following versions:
 
	Tru64 UNIX T5.1B-4 Operating System ( Rev 459 )
	Tru64 UNIX TruCluster(TM) Server Software T5.1B-4 (Rev 639)
 
 
It is recommended that you update your system firmware and perform a
complete system backup before proceeding.  A log of this update
installation can be found at /var/adm/smlogs/update.log.
 
 
Do you want to continue the Update Installation?  (y/n) []: y
Do you want to select optional kernel components?  (y/n) [n]: [Return]
Do you want to archive obsolete files?  (y/n) [n]: [Return]
 
*** Checking for conflicting software ***
 
 
The following software may require reinstallation after the Update
Installation is completed:
 
	Advanced Server for UNIX V5.1A ECO3
	Legato NetWorker
 
Do you want to continue the Update Installation?  (y/n) [y]: [Return] 
*** Determining installed Operating System software ***
 
	Working....Tue Apr 30 15:37:34 EDT 2002
 
*** Determining installed Tru64 UNIX TruCluster(TM) Server \
    Software V5.1A (Rev 1312) software ***
 
 
*** Determining kernel components ***
 
 
 
*** KERNEL OPTION SELECTION ***
 
    Selection   Kernel Option
--------------------------------------------------------------
	1	System V Devices
	2	NTP V3 Kernel Phase Lock Loop (NTP_TIME)
	3	Kernel Breakpoint Debugger (KDEBUG)
	4	Packetfilter driver (PACKETFILTER)
	5	IP Gateway Screening Facility (GWSCREEN)
	6	IP-in-IP Tunneling (IPTUNNEL)
	7	IP Version 6 (IPV6)
	8	Point-to-Point Protocol (PPP)
	9	STREAMS pckt module (PCKT)
	10	X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
	11	Digital Versatile Disk File System (DVDFS)
	12	ISO 9660 Compact Disc File System (CDFS)
	13	Audit Subsystem
	14	ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
	15	IP Switching over ATM (ATMIFMP)
	16	LAN Emulation over ATM (LANE)
	17	Classical IP over ATM (ATMIP)
--- MORE TO FOLLOW ---
Enter your choices or press <Return>
to display the next screen.
 
Choices (for example, 1 2 4-6):
 	18	ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
	19	Asynchronous Transfer Mode (ATM)
	20	All of the above
	21	None of the above
	22	Help
	23	Display all options again
--------------------------------------------------------------
 
Enter your choices.
 
Choices (for example, 1 2 4-6) [21]:20
You selected the following kernel options:
	System V Devices
	NTP V3 Kernel Phase Lock Loop (NTP_TIME)
	Kernel Breakpoint Debugger (KDEBUG)
	Packetfilter driver (PACKETFILTER)
	IP Gateway Screening Facility (GWSCREEN)
	IP-in-IP Tunneling (IPTUNNEL)
	IP Version 6 (IPV6)
	Point-to-Point Protocol (PPP)
	STREAMS pckt module (PCKT)
	X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
	Digital Versatile Disk File System (DVDFS)
	ISO 9660 Compact Disc File System (CDFS)
	Audit Subsystem
	ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
	IP Switching over ATM (ATMIFMP)
	LAN Emulation over ATM (LANE)
	Classical IP over ATM (ATMIP)
	ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
	Asynchronous Transfer Mode (ATM)
 
Is that correct? (y/n) [y]: [Return] 
*** Checking for file type conflicts ***
 
	Working....Tue Apr 30 15:39:23 EDT 2002
 
*** Checking for obsolete files ***
 
 
*** Checking file system space ***
 
Update Installation is now ready to begin modifying the files necessary
to reboot the cluster member off of the new OS. Please check the
/var/adm/smlogs/update.log and /var/adm/smlogs/it.log files for errors
after the installation is complete.
 
 
 
 
Do you want to continue the Update Installation? (y/n) [n]: [Return]
 
*** Starting configuration merges for Update Install ***
 
 
	*** Merging new file ./etc/.new..sysconfigtab into 
    existing ./etc/../cluster/members/member2/boot_partition/etc/sysconfigtab 
 
    Merging /etc/../cluster/members/member2/boot_partition/etc/sysconfigtab
		Merge completed successfully.
 
 The critical files needed for reboot have been moved into place. The
 system will now reboot with the generic kernel for Compaq Computer
 Corporation Tru64 UNIX T5.1B-4 and complete the rolling upgrade for
 this member (member2).
 
clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.1061256
 
The 'roll' stage has completed successfully.  This
member must be rebooted in order to run with the newly installed software.
Do you want to reboot this member at this time? []:y
You indicated that you want to reboot this member at this time.
Is that correct? [yes]: [Return]
 
The 'roll' stage of the upgrade has completed successfully.
kill: 1048734: no such process
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade roll 
    On Host: pepicelli1.zk3.dec.com Exited at: Tue Apr 30 15:48:19 EDT 2002
Marking stage 'roll' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Exited at: Tue Apr 30 17:08:59 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade upgrade switch 
    On Host: pepicelli3.zk3.dec.com Invoked at: Tue Apr 30 17:09:37 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'switch' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
Initiating version switch on cluster members
.Marking stage 'switch' as 'started'.
.Successful switch of the version identifiers
 
Marking stage 'switch' as 'completed'.
The cluster upgrade 'switch' stage has completed successfully.
All cluster members must be rebooted before running the 'clean' command.
 
The 'switch' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade switch 
    On Host: pepicelli3.zk3.dec.com Exited at: Tue Apr 30 17:10:18 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Invoked at: Tue Apr 30 17:20:33 EDT 2002
-----------------------------------------------------------------------------
Marking stage 'switch' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli3.zk3.dec.com Exited at: Tue Apr 30 17:20:34 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Invoked at: Tue Apr 30 17:29:08 EDT 2002
-----------------------------------------------------------------------------
Marking stage 'switch' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli1.zk3.dec.com Exited at: Tue Apr 30 17:29:08 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Invoked at: Tue Apr 30 17:42:44 EDT 2002
-----------------------------------------------------------------------------
Marking stage 'switch' as 'completed'.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade boot  
    On Host: pepicelli2.zk3.dec.com Exited at: Tue Apr 30 17:42:44 EDT 2002
#############################################################################
clu_upgrade Command: clu_upgrade upgrade clean 
    On Host: pepicelli2.zk3.dec.com Invoked at: Wed May  1 05:30:02 EDT 2002
-----------------------------------------------------------------------------
 
This is the cluster upgrade program.
You have indicated that you want to perform the 'clean' stage of the
upgrade.
 
Do you want to continue to upgrade the cluster? [yes]: [Return]
.Marking stage 'clean' as 'started'.
 
Deleting tagged files.
............................................................................\
............................................................................\
............................................................................\
.............Removing back-up and kit files
.........................
 
The Update Administration Utility is typically run after an update
installation to manage the files that are saved during an update installation.
 
Do you want to run the Update Administration Utility at this time? [yes]: [Return]
 
The Update Installation Cleanup utility is used to clean up backup
files created by Update Installation.  Update Installation can create
two types of files: .PreUPD and .PreMRG.  The .PreUPD files are
copies of unprotected customized system files as they existed prior
to running Update Installation.  The .PreMRG files are copies of
protected system files as they existed prior to running Update
Installation.
 
 
Please make a selection from the following menu.
 
        Update Installation Cleanup Main Menu
        ---------------------------------------
        c) Unprotected Customized File Administration (.PreUPD)
        p) Pre-Merge File Administration (.PreMRG)
        x) Exit This Utility
 
        Enter your choice:x
Exiting /usr/sbin/updadmin...
Marking stage 'clean' as 'completed'.
 
The 'clean' stage of the upgrade has completed successfully.
-----------------------------------------------------------------------------
clu_upgrade Command: clu_upgrade upgrade clean 
    On Host: pepicelli2.zk3.dec.com Exited at: Wed May  1 06:11:49 EDT 2002