Document revision date: 15 July 2002

OpenVMS Cluster Systems



3.4.4 Satellites

Satellites are computers without a local system disk. Generally, satellites are consumers of cluster resources, although they can also provide facilities for disk serving, tape serving, and batch processing. If satellites are equipped with local disks, they can enhance performance by using such local disks for paging and swapping.

Satellites are booted remotely from a boot server (or from a MOP server and a disk server) serving the system disk. Section 3.4.5 describes MOP and disk server functions during satellite booting.

Note: An Alpha system disk can be mounted as a data disk on a VAX computer and, with proper MOP setup, can be used to boot Alpha satellites. Similarly, a VAX system disk can be mounted on an Alpha computer and, with the proper MOP setup, can be used to boot VAX satellites.

Reference: Cross-architecture booting is described in Section 10.5.

3.4.5 Satellite Booting

When a satellite requests an operating system load, a MOP server for the appropriate OpenVMS Alpha or OpenVMS VAX operating system sends a bootstrap image to the satellite that allows the satellite to load the rest of the operating system from a disk server and join the cluster. The sequence of actions during booting is described in Table 3-1; an illustrative sketch of the sequence follows the table.

Table 3-1 Satellite Booting Process
Step Action Comments
1 Satellite requests MOP service. This is the original boot request that a satellite sends out across the network. Any node in the OpenVMS Cluster that has MOP service enabled and has the LAN address of the particular satellite node in its database can become the MOP server for the satellite.
2 MOP server loads the Alpha or VAX system. ++The MOP server responds to an Alpha satellite boot request by downline loading the SYS$SYSTEM:APB.EXE program along with the required parameters.

+The MOP server responds to a VAX satellite boot request by downline loading the SYS$SHARE:NISCS_LOAD.EXE program along with the required parameters.

For Alpha and VAX computers, these parameters include:

  • System disk name
  • Root number of the satellite
3 Satellite finds additional parameters located on the system disk and root. The satellite finds OpenVMS Cluster system parameters, such as SCSSYSTEMID, SCSNODE, and NISCS_CONV_BOOT. The satellite also finds the cluster group code and password.
4 Satellite executes the load program. The program establishes an SCS connection to a disk server for the satellite system disk and loads the SYSBOOT.EXE program.


++Alpha specific
+VAX specific
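
The following Python sketch is illustrative only; the function names, device names, and parameter values are invented for this example and do not correspond to any OpenVMS interface. It walks through the sequence in Table 3-1, showing which information is supplied at each step.

    # Illustrative walkthrough of the satellite booting sequence in Table 3-1.
    # All names and values are hypothetical; the code models information flow only.

    def mop_downline_load(architecture, system_disk, root_number):
        """Step 2: the MOP server downline loads the bootstrap image and parameters."""
        bootstrap = ("SYS$SYSTEM:APB.EXE" if architecture == "Alpha"
                     else "SYS$SHARE:NISCS_LOAD.EXE")
        return {"bootstrap": bootstrap, "system_disk": system_disk, "root": root_number}

    def read_system_root(load_info):
        """Step 3: the satellite reads cluster parameters from its system disk root."""
        # Placeholder values; real values come from the satellite's root on the disk.
        return {"SCSNODE": "SATURN", "SCSSYSTEMID": 2260, "NISCS_CONV_BOOT": 0,
                "group_code": 1985, "password": "<scrambled>"}

    def execute_load_program(load_info, params):
        """Step 4: connect to a disk server over SCS and load SYSBOOT.EXE."""
        print("Connecting to disk server for", load_info["system_disk"],
              "root", load_info["root"])
        print("Loading SYSBOOT.EXE as node", params["SCSNODE"])

    # Step 1: the satellite's boot request is answered by a MOP server that has the
    # satellite's LAN address in its database.
    info = mop_downline_load("Alpha", "$1$DGA1:", "SYS10")
    execute_load_program(info, read_system_root(info))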

3.4.6 Examples

Figure 3-3 shows an OpenVMS Cluster system based on a LAN interconnect with a single Alpha server node and a single Alpha system disk.

Note: To include VAX satellites in this configuration, configure a VAX system disk on the Alpha server node following the instructions in Section 10.5.

Figure 3-3 LAN OpenVMS Cluster System with Single Server Node and System Disk


In Figure 3-3, the server node (and its system disk) is a single point of failure. If the server node fails, the satellite nodes cannot access any of the shared disks, including the system disk. Note that some of the satellite nodes have locally connected disks. If you convert one or more of these local disks into system disks, those satellite nodes can boot from their own local system disk.

Figure 3-4 shows an example of an OpenVMS Cluster system that uses LAN and Fibre Channel interconnects.

Figure 3-4 LAN and Fibre Channel OpenVMS Cluster System: Sample Configuration


The LAN connects nodes A and B with nodes C and D into a single OpenVMS Cluster system.

In Figure 3-4, Volume Shadowing for OpenVMS is used to maintain key data storage devices in identical states (shadow sets A and B). Any data on the shadowed disks written at one site will also be written at the other site. However, the benefits of high data availability must be weighed against the performance overhead required to use the MSCP server to serve the shadow set over the cluster interconnect.

Figure 3-5 illustrates how FDDI can be configured with Ethernet from the bridges to the server CPU nodes. This configuration can increase overall throughput. In OpenVMS Cluster systems with heavily utilized Ethernet segments, replacing the Ethernet backbone with a faster LAN can alleviate the performance bottleneck caused by the Ethernet.

Figure 3-5 FDDI in Conjunction with Ethernet in an OpenVMS Cluster System


3.4.7 LAN Bridge Failover Process

The following table describes how the bridge parameter settings can affect the failover process; a rough numeric sketch of this trade-off appears below.
Option Comments
Decreasing the LISTEN_TIME value allows the bridge to detect topology changes more quickly. If you reduce the LISTEN_TIME parameter value, you should also decrease the value for the HELLO_INTERVAL bridge parameter according to the bridge-specific guidelines. However, note that decreasing the value for the HELLO_INTERVAL parameter causes an increase in network traffic.
Decreasing the FORWARDING_DELAY value can cause the bridge to forward packets unnecessarily to the other LAN segment. Unnecessary forwarding can temporarily cause more traffic on both LAN segments until the bridge software determines which LAN address is on each side of the bridge.

Note: If you change a parameter on one LAN bridge, you should change that parameter on all bridges to ensure that selection of a new root bridge does not change the value of the parameter. The actual parameter value the bridge uses is the value specified by the root bridge.
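
As a rough illustration of the trade-off described above, the following Python sketch estimates how the settings interact. The parameter semantics are simplified (it assumes a bridge declares a topology change after LISTEN_TIME elapses without hello messages, and that hello traffic grows in proportion to 1/HELLO_INTERVAL), and the values shown are invented; always follow the bridge-specific guidelines.

    # Simplified model of the LISTEN_TIME / HELLO_INTERVAL / FORWARDING_DELAY trade-off.
    # Assumption (not from this manual): detection takes about LISTEN_TIME, and the
    # bridge begins forwarding again after an additional FORWARDING_DELAY.

    def evaluate(hello_interval, listen_time, forwarding_delay):
        hellos_per_minute = 60.0 / hello_interval       # network traffic from hellos
        failover_time = listen_time + forwarding_delay  # approximate worst case
        return hellos_per_minute, failover_time

    settings = [(2.0, 20.0, 15.0),   # illustrative "default" values
                (1.0, 6.0, 10.0)]    # faster failover at the cost of more traffic

    for hello, listen, forward in settings:
        traffic, failover = evaluate(hello, listen, forward)
        print("HELLO_INTERVAL=%.1fs LISTEN_TIME=%.1fs FORWARDING_DELAY=%.1fs"
              % (hello, listen, forward))
        print("  -> about %.0f hellos/minute, failover in roughly %.0f seconds"
              % (traffic, failover))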

3.5 OpenVMS Cluster Systems Interconnected by MEMORY CHANNEL

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the ability of OpenVMS Clusters to work as a single, virtual system. MEMORY CHANNEL is used for node-to-node cluster communications only. You use it in combination with another interconnect, such as Fibre Channel, SCSI, CI, or DSSI, that is dedicated to storage traffic.

3.5.1 Design

A node requires the following three hardware components to support a MEMORY CHANNEL connection:

  • A PCI-to-MEMORY CHANNEL adapter
  • A link cable
  • A port in a MEMORY CHANNEL hub (for configurations of three or more nodes; in a two-node configuration, the link cable connects the two PCI adapters directly and no hub is required)

3.5.2 Examples

Figure 3-6 shows a two-node MEMORY CHANNEL cluster with shared access to Fibre Channel storage and a LAN interconnect for failover.

Figure 3-6 Two-Node MEMORY CHANNEL OpenVMS Cluster Configuration


A three-node MEMORY CHANNEL cluster connected by a MEMORY CHANNEL hub and also by a LAN interconnect is shown in Figure 3-7. The three nodes share access to the Fibre Channel storage. The LAN interconnect enables failover if the MEMORY CHANNEL interconnect fails.

Figure 3-7 Three-Node MEMORY CHANNEL OpenVMS Cluster Configuration


3.6 Multihost SCSI OpenVMS Cluster Systems

OpenVMS Cluster systems support the Small Computer Systems Interface (SCSI) as a storage interconnect. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.

Beginning with OpenVMS Alpha Version 6.2, multiple Alpha computers can simultaneously access SCSI disks over a SCSI interconnect. Another interconnect, for example, a local area network, is required for host-to-host OpenVMS Cluster communications.

3.6.1 Design

Beginning with OpenVMS Alpha Version 6.2-1H3, OpenVMS Alpha supports up to three nodes on a shared SCSI bus as the storage interconnect. A quorum disk can be used on the SCSI bus to improve the availability of two-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices.

With the introduction of the SCSI hub DWZZH-05, four nodes can be supported in a SCSI multihost OpenVMS Cluster system. To support four nodes, the hub's fair arbitration feature must be enabled.

For a complete description of these configurations, see Guidelines for OpenVMS Cluster Configurations.

3.6.2 Examples

Figure 3-8 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect, a LAN in this example, is used for host-to-host communications.

Figure 3-8 Three-Node OpenVMS Cluster Configuration Using a Shared SCSI Interconnect


3.7 Multihost Fibre Channel OpenVMS Cluster Systems

OpenVMS Cluster systems support Fibre Channel (FC) as a storage interconnect. Fibre Channel is an ANSI-standard network and storage interconnect that offers many advantages over other interconnects, including high-speed transmission and long interconnect distances. A second interconnect is required for node-to-node communications.

3.7.1 Design

OpenVMS Alpha supports the Fibre Channel SAN configurations described in the latest Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide and in the Data Replication Manager (DRM) user documentation. This configuration support includes multiswitch Fibre Channel fabrics, up to 500 meters of multimode fiber, and up to 100 kilometers of single-mode fiber. In addition, DRM configurations provide long-distance intersite links (ISLs) through the use of the Open Systems Gateway and wave division multiplexors. OpenVMS supports sharing of the fabric and the HSG storage with non-OpenVMS systems.

OpenVMS provides support for the number of hosts, switches, and storage controllers specified in the StorageWorks documentation. In general, the number of hosts and storage controllers is limited only by the number of available fabric connections.

Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared Fibre Channel storage devices. Multipath support is available for these configurations.

For a complete description of these configurations, see Guidelines for OpenVMS Cluster Configurations.

3.7.2 Examples

Figure 3-9 shows a multihost configuration with two independent Fibre Channel interconnects connecting the hosts to the storage subsystems. Note that another interconnect is used for node-to-node communications.

Figure 3-9 Four-Node OpenVMS Cluster Configuration Using a Fibre Channel Interconnect



Chapter 4
The OpenVMS Cluster Operating Environment

This chapter describes how to prepare the OpenVMS Cluster operating environment.

4.1 Preparing the Operating Environment

To prepare the cluster operating environment, you perform a number of tasks on the first OpenVMS Cluster node before configuring other computers into the cluster. The following table describes these tasks.
Task Section
Check all hardware connections to computers, interconnects, and devices. Described in the appropriate hardware documentation.
Verify that all microcode and hardware is set to the correct revision levels. Contact your support representative.
Install the OpenVMS operating system. Section 4.2
Install all software licenses, including OpenVMS Cluster licenses. Section 4.3
Install layered products. Section 4.4
Configure and start LANCP or DECnet for satellite booting. Section 4.5

4.2 Installing the OpenVMS Operating System

Only one OpenVMS operating system version can exist on a system disk. Therefore, when installing or upgrading the OpenVMS operating system:

  • Install the OpenVMS Alpha operating system on each Alpha system disk.
  • Install the OpenVMS VAX operating system on each VAX system disk.

4.2.1 System Disks

A system disk is one of the few resources that cannot be shared between Alpha and VAX systems. However, an Alpha system disk can be mounted as a data disk on a VAX computer and, with MOP configured appropriately, can be used to boot Alpha satellites. Similarly, a VAX system disk can be mounted on an Alpha computer and, with the appropriate MOP configuration, can be used to boot VAX satellites.

Reference: Cross-architecture booting is described in Section 10.5.

Once booted, Alpha and VAX processors can share access to data on any disk in the OpenVMS Cluster, including system disks. For example, an Alpha system can mount a VAX system disk as a data disk and a VAX system can mount an Alpha system disk as a data disk.

Note: An OpenVMS Cluster running both implementations of DECnet requires a system disk for DECnet for OpenVMS (Phase IV) and another system disk for DECnet--Plus (Phase V). For more information, see the DECnet--Plus documentation.

4.2.2 Where to Install

You may want to set up common system disks according to these guidelines:
IF you want the cluster to have... THEN perform the installation or upgrade...
One common system disk for all computer members Once on the cluster common system disk.
A combination of one or more common system disks and one or more local (individual) system disks Either:
  • Once for each system disk
or
  • Once on a common system disk and then run the CLUSTER_CONFIG.COM procedure to create duplicate system disks (thus enabling systems to have their own local system disk)
Note: If your cluster includes multiple common system disks, you must later coordinate system files to define the cluster operating environment, as described in Chapter 5.

Reference: See Section 8.5 for information about creating a duplicate system disk.

Example: If your OpenVMS Cluster consists of 10 computers, 4 of which boot from a common Alpha system disk, 2 of which boot from a second common Alpha system disk, 2 of which boot from a common VAX system disk, and 2 of which boot from their own local system disk, you need to perform an installation five times.

4.2.3 Information Required

Table 4-1 lists the questions that the OpenVMS operating system installation procedure prompts you with and describes how certain system parameters are affected by the responses you provide. Note that two of the prompts vary, depending on whether the node is running DECnet. The table also provides an example of an installation procedure taking place on a node named JUPITR.

Important: Be sure you determine answers to the questions before you begin the installation.

Note about versions: Refer to the appropriate OpenVMS Release Notes document for the required version numbers of hardware and firmware. When mixing versions of the operating system in an OpenVMS Cluster, check the release notes for information about compatibility.

Reference: Refer to the appropriate OpenVMS upgrade and installation manual for complete installation instructions.

Table 4-1 Information Required to Perform an Installation
Prompt Response Parameter
Will this node be a cluster member (Y/N)?  
WHEN you respond... AND... THEN the VAXCLUSTER parameter is set to...
N CI and DSSI hardware is not present 0 --- Node will not participate in the OpenVMS Cluster.
N CI and DSSI hardware is present 1 --- Node will automatically participate in the OpenVMS Cluster in the presence of CI or DSSI hardware.
Y   2 --- Node will participate in the OpenVMS Cluster.
VAXCLUSTER
What is the node's DECnet node name? If the node is running DECnet, this prompt, the following prompt, and the SCSSYSTEMID prompt are displayed. Enter the DECnet node name or the DECnet--Plus node synonym (for example, JUPITR). If a node synonym is not defined, SCSNODE can be any name from 1 to 6 alphanumeric characters in length. The name cannot include dollar signs ($) or underscores (_). SCSNODE
What is the node's DECnet node address? Enter the DECnet node address (for example, a valid address might be 2.211). If an address has not been assigned, enter 0 now and enter a valid address when you start DECnet (discussed later in this chapter).

For DECnet--Plus, this question is asked when nodes are configured with a Phase IV compatible address. If a Phase IV compatible address is not configured, then the SCSSYSTEMID system parameter can be set to any value.

SCSSYSTEMID
What is the node's SCS node name? If the node is not running DECnet, this prompt and the following prompt are displayed in place of the two previous prompts. Enter a name of 1 to 6 alphanumeric characters that uniquely names this node. At least 1 character must be a letter. The name cannot include dollar signs ($) or underscores (_). SCSNODE
What is the node's SCSSYSTEMID number? This number must be unique within this cluster. SCSSYSTEMID is the low-order 32 bits of the 48-bit system identification number.

If the node is running DECnet for OpenVMS, calculate the value from the DECnet address using the following formula:

SCSSYSTEMID = (DECnet-area-number * 1024) + (DECnet-node-number)

Example: If the DECnet address is 2.211, calculate the value as follows (a worked sketch of this calculation also appears after this table):

SCSSYSTEMID = (2 * 1024) + 211 = 2259

SCSSYSTEMID
Will the Ethernet be used for cluster communications (Y/N)? 1  
IF you respond... THEN the NISCS_LOAD_PEA0 parameter is set to...
N 0 --- PEDRIVER is not loaded 2; cluster communications does not use Ethernet or FDDI.
Y 1 --- Loads PEDRIVER to enable cluster communications over Ethernet or FDDI.
NISCS_LOAD_PEA0
Enter this cluster's group number: Enter a number in the range of 1 to 4095 or 61440 to 65535 (see Section 2.5). This value is stored in the CLUSTER_AUTHORIZE.DAT file in the SYS$COMMON:[SYSEXE] directory. Not applicable
Enter this cluster's password: Enter the cluster password. The password must be from 1 to 31 alphanumeric characters in length and can include dollar signs ($) and underscores (_) (see Section 2.5). This value is stored in scrambled form in the CLUSTER_AUTHORIZE.DAT file in the SYS$COMMON:[SYSEXE] directory. Not applicable
Reenter this cluster's password for verification: Reenter the password. Not applicable
Will JUPITR be a disk server (Y/N)?  
IF you respond... THEN the MSCP_LOAD parameter is set to...
N 0 --- The MSCP server will not be loaded. This is the correct setting for configurations in which all OpenVMS Cluster nodes can directly access all shared storage and do not require LAN failover.
Y 1 --- Loads the MSCP server with attributes specified by the MSCP_SERVE_ALL parameter, using the default CPU load capacity.
MSCP_LOAD
Will JUPITR serve HSC or RF disks (Y/N)?  
IF you respond... THEN the MSCP_SERVE_ALL parameter is set to...
Y 1 --- Serves all available disks.
N 2 --- Serves only locally connected (not HSC, HSJ, or RF) disks.
MSCP_SERVE_ALL
Enter a value for JUPITR's ALLOCLASS parameter: 3 The value is dependent on the system configuration:
  • If the system will serve RF disks, assign a nonzero value to the allocation class.

    Reference: See Section 6.2.2.5 to assign DSSI allocation classes.

  • If the system will serve HSC disks, enter the allocation class value of the HSC.

    Reference: See Section 6.2.2.2 to assign HSC allocation classes.

  • If the system will serve HSJ disks, enter the allocation class value of the HSJ.

    Reference: For complete information about the HSJ console commands, refer to the HSJ hardware documentation. See Section 6.2.2.3 to assign HSJ allocation classes.

  • If the system will serve HSD disks, enter the allocation class value of the HSD.

    Reference: See Section 6.2.2.4 to assign HSD allocation classes.

  • If the system disk is connected to a dual-pathed disk, enter a value from 1 to 255 that will be used on both storage controllers.
  • If the system is connected to a shared SCSI bus (it shares storage on that bus with another system) and if it does not use port allocation classes for naming the SCSI disks, enter a value from 1 to 255. This value must be used by all the systems and disks connected to the SCSI bus.

    Reference: For complete information about port allocation classes, see Section 6.2.1.

  • If the system will use Volume Shadowing for OpenVMS, enter a value from 1 to 255.

    Reference: For more information, see Volume Shadowing for OpenVMS.

  • If none of the above are true, enter 0 (zero).
ALLOCLASS
Does this cluster contain a quorum disk [N]? Enter Y or N, depending on your configuration. If you enter Y, the procedure prompts for the name of the quorum disk. Enter the device name of the quorum disk. (Quorum disks are discussed in Chapter 2.) DISK_QUORUM


1All references to the Ethernet are also applicable to FDDI.
2PEDRIVER is the LAN port emulator driver that implements the NISCA protocol and controls communications between local and remote LAN ports.
3Refer to Section 6.2 for complete information about device naming conventions.
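
The following Python sketch works through the SCSSYSTEMID formula and the validity rules for SCSNODE names and cluster group numbers as they are stated in Table 4-1; the helper function names are invented for illustration.

    # Worked example of the SCSSYSTEMID formula and naming rules from Table 4-1.
    # Helper names are hypothetical.

    def scssystemid(decnet_area, decnet_node):
        """SCSSYSTEMID = (DECnet-area-number * 1024) + (DECnet-node-number)."""
        return decnet_area * 1024 + decnet_node

    def valid_scsnode(name):
        """1 to 6 alphanumeric characters, at least one letter, no $ or _."""
        return (1 <= len(name) <= 6
                and name.isalnum()
                and any(c.isalpha() for c in name))

    def valid_group_number(group):
        """Cluster group numbers must fall in 1-4095 or 61440-65535 (Section 2.5)."""
        return 1 <= group <= 4095 or 61440 <= group <= 65535

    print(scssystemid(2, 211))        # DECnet address 2.211 -> 2259
    print(valid_scsnode("JUPITR"))    # True
    print(valid_scsnode("NODE_1"))    # False: underscores are not allowed
    print(valid_group_number(1985))   # True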

