5    Moving ASE Applications to TruCluster Server

This chapter describes how to move Available Server Environment (ASE) applications to TruCluster Server Version 5.1B.

To continue single-instance application availability and failover, TruCluster Server provides the cluster application availability (CAA) subsystem. CAA replaces the Available Server Environment (ASE), which provided the ability to make applications highly available in previous TruCluster Software products. However, unlike ASE, a TruCluster Server cluster does not require you to explicitly manage storage resources and mount file systems on behalf of a highly available application. The cluster file system (CFS) and device request dispatcher make file and disk storage available clusterwide.

Before moving ASE services to TruCluster Server, make sure that you are familiar with CAA. See Chapter 2 for detailed information on how to use CAA.

This chapter discusses the following topics:

  Comparing ASE to CAA (Section 5.1)
  Preparing to move ASE services to TruCluster Server (Section 5.2)
  ASE script considerations (Section 5.3)
  Networking considerations (Section 5.4)
  File system partitioning (Section 5.5)

5.1    Comparing ASE to CAA

CAA provides resource monitoring and application restart capabilities. It provides the same type of application availability that is provided by user-defined services in the TruCluster Available Server Software and TruCluster Production Server Software products. Table 5-1 compares ASE services with their equivalents in the TruCluster Server product.

Table 5-1:  ASE Services and Their TruCluster Server Equivalents

ASE Service | ASE Description | TruCluster Server Equivalent
Disk service (Section 5.1.1) | One or more highly available file systems, Advanced File System (AdvFS) filesets, or Logical Storage Manager (LSM) volumes. Can also include a disk-based application. | Cluster file system (CFS), device request dispatcher, and CAA
Network file system (NFS) service (Section 5.1.2) | One or more highly available file systems, AdvFS filesets, or LSM volumes that are exported. Can also include highly available applications. | Automatically provided for exported file systems by CFS and the default cluster alias. No service definition required.
User-defined service (Section 5.1.3) | An application that fails over using action scripts. | CAA
Distributed raw disk (DRD) service (Section 5.1.4) | Allows a disk-based, user-level application to run within a cluster by providing clusterwide access to raw physical disks. | Automatically provided by the device request dispatcher. No service definition required.
Tape service (Section 5.1.5) | Depends on a set of one or more tape devices for configuring the NetWorker server and other servers for failover. | CFS, device request dispatcher, and CAA

The following sections describe these ASE services and explain how to handle them in a TruCluster Server environment.

5.1.1    Disk Service

ASE

An ASE disk service includes one or more highly available file systems, Advanced File System (AdvFS) filesets, or Logical Storage Manager (LSM) volumes. Disk services can also include a disk-based application and are managed within the ASE.

TruCluster Server

There are no explicit disk services in TruCluster Server. The cluster file system (CFS) makes all file storage available to all cluster members, and the device request dispatcher makes disk storage available clusterwide. Because file systems and disks are now available throughout the cluster, you do not need to mount and fail them over explicitly in your action scripts. For more information about using CFS, see the Cluster Administration manual and cfsmgr(8).

Use CAA to define a disk service's relocation policies and dependencies. If you are not familiar with CAA, see Chapter 2.

Disk services can be defined to use either a cluster alias or an IP alias for client access.

5.1.2    NFS Service

ASE

An ASE network file system (NFS) service includes one or more highly available file systems, AdvFS filesets, or LSM volumes that a member system exports to clients, making the data highly available. NFS services can also include highly available applications.

TruCluster Server

There are no explicit NFS services in TruCluster Server. When configured as an NFS server, a TruCluster Server cluster provides highly available access to the file systems it exports. CFS makes all file storage available to all cluster members. You no longer need to mount any file systems within your action scripts. Define the NFS file system to be served in the /etc/exports file, as you would on a standalone server.

Remote clients can mount NFS file systems that are exported from the cluster by using the default cluster alias or by using alternate cluster aliases, as described in exports.aliases(4).
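For illustration, an exported file system is defined in /etc/exports just as it is on a standalone Tru64 UNIX server. In this sketch, the exported path is taken from the sample ASE database shown later in this chapter, but the client names and the read-only option are hypothetical; export options vary by configuration:

```
/clumig/cludemo -ro client1 client2
```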

5.1.3    User-Defined Service

ASE

An ASE user-defined service consists only of an application that you want to fail over using your own action scripts. The application in a user-defined service cannot use disks.

TruCluster Server

In ASE you may have created a highly available Internet login service by setting up user-defined start and stop action scripts that invoked ifconfig. In TruCluster Server you do not need to create a login service. Clients can log in to the cluster by using the default cluster alias. CFS makes disk access available to all cluster members.

Use CAA to define a user-defined service's failover and relocation policies and dependencies. If you are not familiar with CAA, see Chapter 2.

5.1.4    DRD Service

ASE

An ASE distributed raw disk (DRD) service provides clusterwide access to raw physical disks. A disk-based, user-level application can run within a cluster, regardless of where in the cluster the physical storage it depends upon is located. A DRD service allows applications, such as database and transaction processing (TP) monitor systems, parallel access to storage media from multiple cluster members. When creating a DRD service, you specify the physical media that the service will provide clusterwide.

TruCluster Server

There are no explicit DRD services in TruCluster Server. The device request dispatcher subsystem makes all disk and tape storage available to all cluster members, regardless of where the physical storage is located. You no longer need to explicitly fail over disks when an application fails over to another member.

Prior to Tru64 UNIX Version 5.0, a separate DRD namespace was provided in a TruCluster Production Server environment. When DRD services were added, the asemgr utility assigned DRD special file names sequentially in the following form:

/dev/rdrd/drd1
/dev/rdrd/drd2
/dev/rdrd/drd3

.
.
.

In a TruCluster Server cluster, you access a raw disk device partition in the same way that you do on a Tru64 UNIX Version 5.0 or later standalone system: by using the device's special file name in the /dev/rdisk directory. For example:

/dev/rdisk/dsk2c
 

5.1.5    Tape Service

ASE

An ASE tape service depends on a set of one or more tape devices. It may also include media changer devices and file systems. A tape service enables you to configure the Legato NetWorker server and servers for other client/server-based applications for failover. The tape drives, media changers, and file systems all fail over as one unit.

TruCluster Server

There are no explicit tape services in TruCluster Server. CFS makes all file storage available to all cluster members, and the device request dispatcher makes disk and tape storage available clusterwide. Because file systems, disks, and tapes are now available throughout the cluster, you do not need to mount and fail them over explicitly in your action scripts.

Use CAA to define a tape resource's failover and relocation policies and dependencies. If you are not familiar with CAA, see Chapter 2.

Applications that access tapes and media changers can be defined to use either a cluster alias or an IP alias for client access.

5.2    Preparing to Move ASE Services to TruCluster Server

TruCluster Server Version 5.1B includes the following scripts, which you can use to move storage from the Available Server Environment (ASE) to the new cluster:

  clu_migrate_save
  clu_migrate_configure

The scripts and associated utility programs are available from the TruCluster Server Version 5.1B directory on the Tru64 UNIX Associated Products Volume 2 CD-ROM, in the TCRMIGRATE540 subset. See the Cluster Installation manual for a description of the scripts and installation instructions.

In general and where possible, we recommend that you use the procedures and scripts described in the Cluster Installation manual. However, if your storage topology, system configurations, or site policy make it impossible to do so, you can manually gather and configure ASE storage information. You are responsible for mapping old-style (rz*) device names to new-style (dsk*) device names. See the Cluster Installation manual for instructions on manually gathering device and storage information and configuring storage on the new Tru64 UNIX system.

If you decide to manually gather storage information and configure storage on the new Tru64 UNIX system, you should save both the /var/ase/config/asecdb database and a text copy of the database before shutting down your ASE cluster. Having the ASE database content available makes it easier to set up applications on TruCluster Server.

How ASE database content is saved differs between versions of TruCluster Available Server and TruCluster Production Server. The following sections explain how to save ASE database content on a Version 1.5 or later system and a Version 1.4 or earlier system.

5.2.1    Saving ASE Database Content from TruCluster Available Server and Production Server Version 1.5 or Later

To save both the /var/ase/config/asecdb database and a text copy of the database, enter the following commands:

# cp /var/ase/config/asecdb asecdb.copy
# asemgr -d -C > asecdb.txt
 

The following information, saved from a sample ASE database, is helpful when creating a CAA profile:

!! ASE service configuration for netscape
 
@startService netscape
Service name: netscape
Service type: DISK
Relocate on boot of favored member: no
Placement policy: balanced

.
.
.
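The saved service configuration maps naturally onto a CAA application resource. As a sketch, the following commands create, register, and start an equivalent CAA resource for the sample netscape service; the action script path is hypothetical, and the caa_profile options shown are illustrative rather than a complete profile:

```
# caa_profile -create netscape -t application \
    -B /var/cluster/caa/script/netscape.scr -p balanced
# caa_register netscape
# caa_start netscape
```

Note that the balanced placement policy here mirrors the "Placement policy: balanced" setting recorded in the ASE database excerpt.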

The following information, saved from a sample ASE database, is helpful when installing and configuring an application on TruCluster Server:

IP address: 16.141.8.239
Device: cludemo#netscape
  cludemo#netscape mount point: /clumig/Netscape
  cludemo#netscape filesystem type: advfs
  cludemo#netscape mount options: rw
  cludemo#netscape mount point group owner: staff
Device: cludemo#cludemo
  cludemo#cludemo mount point: /clumig/cludemo
  cludemo#cludemo filesystem type: advfs
  cludemo#cludemo mount options: rw
  cludemo#cludemo mount point group owner: staff
AdvFS domain: cludemo
  cludemo volumes: /dev/rz12c

.
.
.

5.2.2    Saving ASE Database Content from TruCluster Available Server and Production Server Version 1.4 or Earlier

On a TruCluster Available Server or Production Server Version 1.4 or earlier system, you cannot use the asemgr command to save all ASE service information, because the asemgr command does not capture ASE script information. To save all service information, you must record the script information manually through the asemgr menus.

To save script data, follow these steps:

  1. Start the asemgr utility.

  2. From the ASE Main Menu, choose Managing ASE Services.

  3. From the Managing ASE Services menu, choose Service Configuration.

  4. From the Service Configuration menu, choose Modify a Service.

  5. Select a service from the menu.

  6. Choose General service information.

  7. From the User-defined Service Modification menu, choose User-defined action scripts.

  8. Choose Start action from the menu. Record the values for script argument and script timeout.

  9. From the menu, choose Edit the start action script.

    Write the internal script to a file on permanent storage where it will not be deleted.

Repeat these steps as necessary for all stop, add, and delete scripts. For user-defined services, also save the check script.

To save ASE database content and the rest of your ASE service information (placement policies, service names, and so on), enter the following commands:

# asemgr -dv > ase.services.txt
# asemgr -dv {ServiceName}
 

The name of the service, ServiceName, is taken from the output that is produced by asemgr -dv. Execute asemgr -dv {ServiceName} for each service.

Note

For TruCluster Available Server Software or TruCluster Production Server Software products earlier than Version 1.5, you must perform a full installation of Tru64 UNIX Version 5.1B and TruCluster Server Version 5.1B.

5.3    ASE Script Considerations

Review ASE scripts carefully. For scripts to work properly on TruCluster Server, consider the issues discussed in the following sections.

5.3.1    Replacing ASE Commands with CAA Commands

In TruCluster Server Version 5.1B, the asemgr command is replaced by several CAA commands. The following table compares ASE commands with their equivalent CAA commands:

ASE Command | CAA Command | Description
asemgr -d | caa_stat | Provides status on CAA resources clusterwide
asemgr -m | caa_relocate | Relocates an application resource from one cluster member to another
asemgr -s | caa_start | Starts application resources
asemgr -x | caa_stop | Stops application resources
(none) | caa_profile | Creates, validates, deletes, and updates a CAA resource profile
(none) | caa_register | Registers a resource with CAA
(none) | caa_unregister | Unregisters a resource with CAA
(none) | caa_balance | Optimally relocates applications based on the status of their resources
(none) | caa_report | Reports availability statistics for application resources

The caa_profile, caa_register, caa_unregister, caa_balance, and caa_report commands provide functionality that is unique to the TruCluster Server product. For information on how to use any of the CAA commands, see Chapter 2.

5.3.2    Combining Start and Stop Scripts

CAA does not call separate scripts to start and stop an application. If you have separate start and stop scripts for your application, combine them into one script. Refer to /var/cluster/caa/template/template.scr for an example.
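CAA invokes one action script with the entry point (start, stop, or check) as its first argument. The following minimal sketch shows the combined structure; APP_CMD is a placeholder, and /bin/true stands in for the real application logic:

```shell
#!/bin/sh
# Minimal combined CAA action script sketch.
# APP_CMD is a placeholder; /bin/true stands in for real application logic.
APP_CMD="/bin/true"

start_app() {
    $APP_CMD            # launch the application here
}

stop_app() {
    # Stop all application processes here (for example, with fuser -ck
    # on the application directories) so the service can relocate cleanly.
    return 0
}

check_app() {
    # Return 0 only if the application is running; CAA polls this entry.
    return 0
}

case "${1:-check}" in
    start) start_app ;;
    stop)  stop_app ;;
    check) check_app ;;
    *)     exit 2 ;;    # unknown entry point: nonzero signals failure to CAA
esac
```

The template script /var/cluster/caa/template/template.scr follows this same dispatch-on-first-argument pattern.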

5.3.3    Redirecting Script Output

CAA scripts run with standard output and standard error streams directed to /dev/null. If you want to capture these streams, we recommend that you employ one of the following methods, listed in order of preference:

  1. Use the Event Manager (EVM) (as demonstrated in the template script /var/cluster/caa/template/template.scr). This is the preferred method because of the output management that EVM provides.

    Refer to the sample CAA scripts in /var/cluster/caa/examples for examples of using EVM to redirect output.

  2. Use the logger command to direct output to the system log file (syslog). See logger(1) for more information. This method is not as flexible as using EVM. For example, messages stored in syslog are simple text and cannot take advantage of the advanced formatting and searching capabilities of EVM.

  3. Direct output to /dev/console. This method does not have a persistent record; messages appear only at the console.

  4. Direct output to a file. With this method, be aware of log file size, and manage file space appropriately.
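As a sketch of method 4, an action script can append messages to its own log file and trim the file so it does not grow without bound; the file name and line limit below are illustrative:

```shell
#!/bin/sh
# Sketch of method 4: logging action-script messages to a file while
# keeping its size bounded. LOGFILE and MAXLINES are illustrative values.
LOGFILE="/tmp/myapp_caa.log"
MAXLINES=1000

log_msg() {
    echo "`date`: $*" >> "$LOGFILE"
    # Trim the file when it grows past MAXLINES to manage file space.
    if [ `wc -l < "$LOGFILE"` -gt "$MAXLINES" ]; then
        tail -n "$MAXLINES" "$LOGFILE" > "$LOGFILE.tmp" &&
            mv "$LOGFILE.tmp" "$LOGFILE"
    fi
}

log_msg "myapp starting"
```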

5.3.4    Replacing nfs_ifconfig Script

TruCluster Server no longer includes an nfs_ifconfig script like TruCluster ASE. Replace nfs_ifconfig scripts either with an ifconfig alias/-alias statement in a CAA action script or with a cluster alias.

For more information about using an interface alias in a CAA script, see Section 5.4.1. See the examples in Section 2.14 for information on using a cluster alias with CAA single-instance applications.

5.3.5    Handling Errors Correctly

Make sure that scripts in TruCluster Server handle errors properly. The "filesystem busy" message is no longer returned. Therefore, an application may be started twice, even if some of its processes are still active on another member.

To prevent an application from starting on another node, make sure that your stop script can stop all processes, or use fuser(8) to stop application processes.

The following example shows a shell routine, added to an application's action script, that uses the fuser utility. The routine attempts to close all open files on the application directories /AppDir, /AppDir2, and /AppDir3. If the routine cannot close the files, it returns an error, and the script can then exit with an error to signal that user intervention is required.

FUSER="/usr/sbin/fuser"                 # Command to use for closing files
ADVFSDIRS="/AppDir /AppDir2 /AppDir3"   # Application directories
#
# Close open files on shared disks
#
closefiles () {
    echo "Killing processes"
    for i in ${ADVFSDIRS}
    do
        echo "Killing processes on $i"
        $FUSER -ck $i
        $FUSER -uv $i > /dev/null 2>&1
        if [ $? -ne 0 ]; then
            echo "Retrying to close files on ${i} ..."
            $FUSER -ck $i
            $FUSER -uv $i > /dev/null 2>&1
            if [ $? -ne 0 ]; then
                echo "Failed to close files on ${i}; aborting"
                $FUSER -uv $i
                return 2
            fi
        fi
    done
    echo "Processes on ${ADVFSDIRS} stopped"
}
 

5.3.6    Removing Storage Management Information

In ASE, a service's storage had to be explicitly mounted, failed over, and unmounted on behalf of the service by its action scripts.

Because the cluster file system (CFS) in TruCluster Server makes all file storage available to all cluster members (access to storage is built into the cluster architecture), you no longer need to manage file system mounting and failover within action scripts.

You can remove all storage management information from scripts on TruCluster Server. For example, SAP R/3 scripts may have been set up to mount file systems within their own scripts. You can remove these mount points.

5.3.7    Converting Device Names

As described in Section 4.2, scripts that reference old-style device names must be modified to use the new-style device-naming model that was introduced with Tru64 UNIX Version 5.0.

During an upgrade the clu_migrate_save and clu_migrate_configure scripts gather information for mapping device names and configure storage on the new system.

If you are not using the clu_migrate_save and clu_migrate_configure scripts, you must manually map old-style (rz*) device names to their new-style (dsk*) counterparts. See the Cluster Installation manual for information on how to manually configure storage when upgrading a cluster.

If you used the ase_fix_config command to renumber buses, save the output from the command during the upgrade and use it to verify physical devices against bus numbers.

5.3.8    Replacing or Removing ASE Variables

ASE scripts may contain ASE-specific environment variables. Replace or remove these variables, which have no equivalent in TruCluster Server.

5.3.9    Exit Codes

In ASE, all scripts return an exit code of 0 for success; any other value indicates failure.

Each entry point of a CAA script returns an exit code of 0 for success and a nonzero value for failure. (Scripts that are generated by the caa_profile command from the script template return a 2 for failure.) For the check section of a CAA script, an exit code of 0 means that the application is running.

5.3.10    Posting Events

The Event Manager (EVM) provides a single point of focus for the multiple channels (such as log files) through which system components report event and status information. EVM combines these events into a single event stream, which the system administrator can monitor in real time or view as historical events retrieved from storage. Use the evmpost(1) command to post events into EVM from action scripts.

Note

The CAA sample scripts that are located in the /var/cluster/caa/examples directory all use EVM to post events. Refer to any one of them for an example.
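As a sketch, an action script can construct an event in EVM source syntax and pipe it to evmpost. The event name below is hypothetical; in practice it would match an event template registered with EVM:

```shell
#!/bin/sh
# Build a start event in EVM source syntax.
# The event name myco.app.*.started is a hypothetical example.
post_start_event() {
    printf 'event { name myco.app.%s.started }\n' "$1"
}

# On Tru64 UNIX, the action script would pipe the event to evmpost:
# post_start_event myapp | /usr/bin/evmpost
post_start_event myapp
```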

5.4    Networking Considerations

The following sections discuss networking issues to consider when moving ASE services to TruCluster Server:

5.4.1    Using an Alias

If an application requires the use of an alias, you can use either a cluster alias or an interface alias.

Using a cluster alias is most appropriate for services that clients reach through the cluster alias subsystem, such as multi-instance network services (Section 5.4.1.1).

Using an interface alias is the preferred mechanism when a single-instance application cannot easily use a cluster alias (Section 5.4.1.2).

5.4.1.1    Cluster Alias

You must use cluster aliasing to provide client access to multi-instance network services. Because the TruCluster Server cluster alias subsystem creates and manages cluster aliases on a clusterwide basis, you do not have to explicitly establish and remove interface aliases with the ifconfig command. See Chapter 3 for information about using cluster aliasing with multi-instance applications. See the Cluster Administration manual for more information about how to use cluster aliasing in general.

You can use a cluster alias to direct client traffic to a single cluster member that is hosting a single-instance application, like the Oracle8i single server. When you configure a service under a cluster alias and set the in_single cluster alias attribute, the alias subsystem ensures that all client requests that are directed at that alias are routed to a cluster member that is running the requested service as long as it is available. However, for single-instance applications, consider using CAA for more control and flexibility in managing application failover. See Chapter 2 for information about using CAA.
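For illustration, the in_single attribute is set on a per-service basis in the /etc/clua_services file. The service name and port in this entry are hypothetical, and the exact attribute syntax may differ in your configuration:

```
myapp    8080/tcp    in_single
```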

Although you do not need to define NFS services in a TruCluster Server cluster to allow clients to access NFS file systems exported by the cluster, you may need to deal with the fact that clients know these services by the IP addresses that are provided by the ASE environment. Clients must access NFS file systems that are served by the cluster by using the default cluster alias or a cluster alias listed in the /etc/exports.aliases file.

5.4.1.2    Interface Alias

If a single-instance application cannot easily use a cluster alias, you can continue to use an interface alias, either by modifying an existing nfs_ifconfig entry to use ifconfig(8) or by adding a call to ifconfig in a CAA action script.

When modifying a CAA action script, call ifconfig alias to assign an alias to an interface. Use the following command prior to starting up the application; otherwise, the application might not be able to bind to the alias address:

ifconfig interface_id alias alias_address netmask mask
 

To deassign an alias from an interface, call ifconfig -alias after all applications and processes have been stopped; otherwise, an application or process might not be able to continue to communicate with the interface alias.

The following example contains a section from a sample script:


.
.
.
# Assign an IP alias to a given interface
IFCNFG_ALIAS_ADD="/sbin/ifconfig tu0 alias 16.141.8.118 netmask 255.255.255.0"
#
# Deassign an IP alias from an interface
IFCNFG_ALIAS_DEL="/sbin/ifconfig tu0 -alias 16.141.8.118 netmask 255.255.255.0"
.
.
.
 

5.4.2    Networking Services

In the TruCluster Available Server Software and TruCluster Production Server Software products, the asemgr utility provided a mechanism to monitor client networks.

In Tru64 UNIX Version 5.0 or later, client network monitoring is a feature of the base operating system. The NetRAIN interface provides protection against certain kinds of network connectivity failures. The Network Interface Failure Finder (NIFF) is an additional feature that monitors the status of its network interfaces and reports indications of network failures. Applications that are running in a TruCluster Server cluster can use the NetRAIN and NIFF features in conjunction with the Tru64 UNIX Event Manager (EVM) to monitor the health of client networks.

For more information about NetRAIN and NIFF, see the Tru64 UNIX Network Administration: Connections manual, niffd(8), niff(7), and nr(7).

5.5    File System Partitioning

CFS makes all files accessible to all cluster members. Each cluster member has the same access to a file, whether the file is stored on a device that is connected to all cluster members or on a device that is private to a single member. However, CFS does allow you to mount an AdvFS file system so that it is accessible to only a single cluster member. This is called file system partitioning.

ASE offered functionality like that of file system partitioning. File system partitioning is provided in Version 5.1B to ease migration from ASE. See the TruCluster Server Cluster Administration manual for information on how to mount partitioned file systems and any known restrictions.