This chapter describes how to move Available Server Environment (ASE) applications to TruCluster Server Version 5.1B.
To continue single-instance application availability and failover, TruCluster Server provides the cluster application availability (CAA) subsystem. In TruCluster Server, CAA replaces the Available Server Environment (ASE), which, in previous TruCluster Software products, provided the ability to make applications highly available. However, unlike the case with ASE, in a TruCluster Server cluster, you do not have to explicitly manage storage resources and mount file systems on behalf of a highly available application. The cluster file system (CFS) and device request dispatcher make file and disk storage available clusterwide.
Before moving ASE services to TruCluster Server, make sure that you are familiar with CAA. See Chapter 2 for detailed information on how to use CAA.
This chapter discusses the following topics:
Comparing ASE to CAA (Section 5.1)
Preparing to move ASE services to TruCluster Server (Section 5.2)
Reviewing ASE scripts (Section 5.3)
Using an IP alias or networking services (Section 5.4)
Partitioning file systems (Section 5.5)
5.1 Comparing ASE to CAA
CAA provides resource monitoring and application restart
capabilities.
It provides the same type of application availability
that is provided by user-defined services in the TruCluster Available Server
Software and TruCluster Production Server Software products.
Table 5-1
compares ASE services with their
equivalents in the TruCluster Server product.
Table 5-1: ASE Services and Their TruCluster Server Equivalents
ASE Service | ASE Description | TruCluster Server Equivalent |
Disk service (Section 5.1.1) | One or more highly available file systems, Advanced File System (AdvFS) filesets, or Logical Storage Manager (LSM) volumes. Can also include a disk-based application. | Cluster file system (CFS), device request dispatcher, and CAA |
Network file system (NFS) service (Section 5.1.2) | One or more highly available file systems, AdvFS filesets, or LSM volumes that are exported. Can also include highly available applications. | Automatically provided for exported file systems by CFS and the default cluster alias. No service definition required. |
User-defined service (Section 5.1.3) | An application that fails over using action scripts. | CAA |
Distributed raw disk (DRD) service (Section 5.1.4) | Allows a disk-based, user-level application to run within a cluster by providing clusterwide access to raw physical disks. | Automatically provided by the device request dispatcher. No service definition required. |
Tape service (Section 5.1.5) | Depends on a set of one or more tape devices for configuring the NetWorker server and other servers for failover. | CFS, device request dispatcher, and CAA |
The following sections describe these ASE services and explain how to
handle them in a TruCluster Server environment.
5.1.1 Disk Service
ASE
An ASE disk service includes one or more highly available file systems,
Advanced File System (AdvFS) filesets, or Logical Storage Manager
(LSM) volumes.
Disk services can also include a disk-based application
and are managed within the ASE.
TruCluster Server
There are no explicit disk services in TruCluster Server.
The cluster file system (CFS) makes all file storage available to all
cluster members, and the device request dispatcher makes disk storage
available clusterwide.
Because file systems and disks are now available
throughout the cluster, you do not need to mount and fail them over
explicitly in your action scripts.
For more information about using
CFS, see the
Cluster Administration
manual and
cfsmgr
(8).
Use CAA to define a disk service's relocation policies and dependencies. If you are not familiar with CAA, see Chapter 2.
Disk services can be defined to use either a cluster alias or an
IP alias for client access.
5.1.2 NFS Service
ASE
An ASE network file system (NFS) service includes one or more highly
available file systems, AdvFS filesets, or LSM volumes that a member
system exports to clients, making the data highly available.
NFS
services can also include highly available applications.
TruCluster Server
There are no explicit NFS services in TruCluster Server.
When configured
as an NFS server, a TruCluster Server cluster provides highly available
access to the file systems it exports.
CFS makes all file storage
available to all cluster members.
You no longer need to mount any file
systems within your action scripts.
Define the NFS file system to be
served in the
/etc/exports
file, as you would on a
standalone server.
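For example, assuming a file system named /clumig/cludemo and a default
cluster alias named deli (both names are illustrative), add a line such
as the following to the cluster's /etc/exports file:
/clumig/cludemo
A remote client can then mount the exported file system through the
default cluster alias:
# mount deli:/clumig/cludemo /mnt/cludemo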
Remote clients can mount NFS file systems that are exported from the
cluster by using the default cluster alias or by using alternate cluster
aliases, as described in
exports.aliases(4).
5.1.3 User-Defined Service
ASE
An ASE user-defined service consists only of an application that you want
to fail over using your own action scripts.
The application in a
user-defined service cannot use disks.
TruCluster Server
In ASE you may have created a highly available Internet login service
by setting up user-defined start and stop action scripts that invoked
ifconfig
.
In TruCluster Server you do not need to
create a login service.
Clients can log in to the cluster by using
the default cluster alias.
CFS makes disk access available to all
cluster members.
Use CAA to define a user-defined service's failover and relocation policies
and dependencies.
If you are not familiar with CAA, see
Chapter 2.
5.1.4 DRD Service
ASE
An ASE distributed raw disk (DRD) service provides clusterwide access to
raw physical disks.
A
disk-based, user-level application can run within a cluster, regardless
of where in the cluster the physical storage it depends upon is located.
A DRD service allows applications, such as database and transaction
processing (TP) monitor systems, parallel access to storage media from
multiple cluster members.
When creating a DRD service, you specify the
physical media that the service will provide clusterwide.
TruCluster Server
There are no explicit DRD services in TruCluster Server. The device request dispatcher subsystem makes all disk and tape storage available to all cluster members, regardless of where the physical storage is located. You no longer need to explicitly fail over disks when an application fails over to another member.
Prior to Tru64 UNIX Version 5.0, a separate DRD namespace was provided
in a TruCluster Production Server environment.
When DRD services were
added, the
asemgr
utility assigned DRD special file
names sequentially in the following form:
/dev/rdrd/drd1 /dev/rdrd/drd2 /dev/rdrd/drd3
.
.
.
In a TruCluster Server cluster, you access a raw disk device partition
in the same way that you do on a Tru64 UNIX Version 5.0 or later
standalone system: by using the
device's special file name in the
/dev/rdisk
directory.
For example:
/dev/rdisk/dsk2c
5.1.5 Tape Service
ASE
An ASE tape service depends on a set of one or more tape devices.
It may also
include media changer devices and file systems.
A tape service enables
you to configure the Legato NetWorker server and servers for other
client/server-based applications for failover.
The tape drives, media
changers, and file systems all fail over as one unit.
TruCluster Server
There are no explicit tape services in TruCluster Server. CFS makes all file storage available to all cluster members, and the device request dispatcher makes disk and tape storage available clusterwide. Because file systems, disks, and tapes are now available throughout the cluster, you do not need to mount and fail them over explicitly in your action scripts.
Use CAA to define a tape resource's failover and relocation policies and dependencies. If you are not familiar with CAA, see Chapter 2.
Applications that access tapes and media changers can be defined to use
either a cluster alias or an IP alias for client access.
5.2 Preparing to Move ASE Services to TruCluster Server
TruCluster Server Version 5.1B includes the following scripts that you can use to move storage from the Available Server Environment (ASE) to the new cluster:
clu_migrate_check
clu_migrate_save
clu_migrate_configure
The scripts and associated utility programs are available from the
TruCluster Server Version 5.1B directory on the Tru64 UNIX Associated
Products Volume 2 CD-ROM, in the
TCRMIGRATE540
subset.
See the
Cluster Installation
manual for a description of the scripts and
installation instructions.
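In outline, and assuming the procedure that the Cluster Installation
manual describes, the scripts are run in the following order (verify the
exact invocation and options in that manual before you begin):
# clu_migrate_check        (run on the existing ASE members to verify the configuration)
# clu_migrate_save         (run before shutting down the ASE cluster to record storage information)
# clu_migrate_configure    (run on the new Tru64 UNIX Version 5.1B system to configure storage)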
In general and where possible, we recommend that you use the procedures
and scripts described in the
Cluster Installation
manual.
However, if
your storage topology, system configurations, or site policy make it
impossible to do so, you can manually gather and configure ASE storage
information.
You are responsible for mapping old-style (rz*) device names to
new-style (dsk*) device names.
See the
Cluster Installation
manual for instructions on manually gathering device and storage
information and configuring storage on the new Tru64 UNIX system.
If you decide to manually gather storage information and configure storage
on the new Tru64 UNIX system, you should save both the
/var/ase/config/asecdb
database and a text copy of the
database before shutting down your ASE cluster.
Having the ASE database
content available makes it easier to set up applications on TruCluster Server.
How ASE database content is saved differs between versions of
TruCluster Available Server and TruCluster Production Server.
The following
sections explain how to save ASE database content on a Version 1.5 or
later system and a Version 1.4 or earlier system.
5.2.1 Saving ASE Database Content from TruCluster Available Server and Production Server Version 1.5 or Later
To save both the
/var/ase/config/asecdb
database
and a text copy of the database, enter the following commands:
# cp /var/ase/config/asecdb asecdb.copy
# asemgr -d -C > asecdb.txt
The following information, saved from a sample ASE database, is helpful when creating a CAA profile:
!! ASE service configuration for netscape
@startService netscape
Service name: netscape
Service type: DISK
Relocate on boot of favored member: no
Placement policy: balanced
.
.
.
The following information, saved from a sample ASE database, is helpful when installing and configuring an application on TruCluster Server:
IP address: 16.141.8.239
Device: cludemo#netscape
cludemo#netscape mount point: /clumig/Netscape
cludemo#netscape filesystem type: advfs
cludemo#netscape mount options: rw
cludemo#netscape mount point group owner: staff
Device: cludemo#cludemo
cludemo#cludemo mount point: /clumig/cludemo
cludemo#cludemo filesystem type: advfs
cludemo#cludemo mount options: rw
cludemo#cludemo mount point group owner: staff
AdvFS domain: cludemo
cludemo volumes: /dev/rz12c
.
.
.
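As a sketch of how this saved information maps to CAA, the netscape disk
service shown above might become an application resource whose profile
contains attribute values such as the following. The attribute names
follow the CAA resource profile format and the values are illustrative
only; use the caa_profile command, described in Chapter 2, to create the
actual profile:
NAME=netscape
TYPE=application
ACTION_SCRIPT=/var/cluster/caa/script/netscape.scr
PLACEMENT=balanced
AUTO_START=0
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=60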
5.2.2 Saving ASE Database Content from TruCluster Available Server and Production Server Version 1.4 or Earlier
On a TruCluster Available Server or Production Server Version 1.4 or
earlier system, you cannot use the
asemgr
command to
save all ASE service information.
The
asemgr
command
does not capture ASE script information.
You must use the
asemgr
utility interactively to view and save the script information.
To save script data, follow these steps:
Start the
asemgr
utility.
From the ASE Main Menu, choose Managing ASE Services.
From the Managing ASE Services menu, choose Service Configuration.
From the Service Configuration menu, choose Modify a Service.
Select a service from the menu.
Choose General service information.
From the User-defined Service Modification menu, choose User-defined action scripts.
Choose Start action from the menu. Record the values for script argument and script timeout.
From the menu, choose Edit the start action script.
Write the internal script to a file on permanent storage where it will not be deleted.
Repeat these steps as necessary for all stop, add, and delete scripts. For user-defined services, also save the check script.
To save ASE database content and the rest of your ASE service information (placement policies, service names, and so on), enter the following commands:
# asemgr -dv > ase.services.txt
# asemgr -dv {ServiceName}
The name of the service,
ServiceName, is taken from
the output that is produced by
asemgr -dv
.
Execute
asemgr -dv {ServiceName}
for
each service.
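If you have many services, a short loop can capture the per-service
detail. This sketch assumes that each service name appears in the
asemgr -dv output on a line of the form Service name: name; verify the
output format on your system before relying on it:
for svc in `asemgr -dv | grep 'Service name:' | awk '{print $NF}'`
do
    asemgr -dv $svc > ase.$svc.txt
done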
Note
For TruCluster Available Server Software or TruCluster Production Server Software products earlier than Version 1.5, you must perform a full installation of Tru64 UNIX Version 5.1B and TruCluster Server Version 5.1B.
5.3 Reviewing ASE Scripts
Review ASE scripts carefully. Consider the following issues to ensure that scripts work properly on TruCluster Server:
Replace ASE commands with cluster application availability (CAA) commands (Section 5.3.1).
Combine separate start and stop scripts (Section 5.3.2).
Redirect script output (Section 5.3.3).
Replace
nfs_ifconfig
with
ifconfig
or create a cluster alias (Section 5.3.4).
Handle errors correctly (Section 5.3.5).
Remove storage management information from action scripts (Section 5.3.6).
Convert device names (Section 5.3.7).
Remove references to ASE-specific environment variables (Section 5.3.8).
Exit codes (Section 5.3.9).
Post events with Event Manager (EVM) (Section 5.3.10).
5.3.1 Replacing ASE Commands with CAA Commands
In TruCluster Server Version 5.1B, the
asemgr
command is
replaced by several CAA commands.
The following table compares ASE
commands with their equivalent CAA commands:
ASE Command | CAA Command | Description |
asemgr -d | caa_stat | Provides status on CAA resources clusterwide |
asemgr -m | caa_relocate | Relocates an application resource from one cluster member to another |
asemgr -s | caa_start | Starts application resources |
asemgr -x | caa_stop | Stops application resources |
None | caa_profile | Creates, validates, deletes, and updates a CAA resource profile |
None | caa_register | Registers a resource with CAA |
None | caa_unregister | Unregisters a resource with CAA |
None | caa_balance | Optimally relocates applications based on the status of their resources |
None | caa_report | Reports availability statistics for application resources |
The caa_profile, caa_register, caa_unregister, caa_balance, and
caa_report commands provide functionality that
is unique to the TruCluster Server product.
For information on how to use
any of the CAA commands, see
Chapter 2.
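For example, a typical sequence for bringing a migrated application
under CAA control might look like the following. The resource name
appsvc is hypothetical, and the caa_profile options that your
application needs are described in Chapter 2:
# caa_profile -create appsvc -t application
# caa_register appsvc
# caa_start appsvc
# caa_stat appsvc
# caa_relocate appsvc
# caa_stop appsvc
# caa_unregister appsvc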
5.3.2 Combining Start and Stop Scripts
CAA does not call separate scripts to start and stop an application.
If you have separate start and stop scripts for your application,
combine them into one script.
Refer to
/var/cluster/caa/template/template.scr
for an
example.
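The following is a minimal sketch of a combined action script. CAA
invokes the script with a single argument of start, stop, or check;
the application command, directory, and process check shown here are
placeholders, and the template script shows the full structure:
#!/usr/bin/ksh
APPCMD=/usr/local/bin/myapp        # hypothetical application command
APPDIR=/AppDir                     # hypothetical application directory
case "$1" in
start)
    ${APPCMD} &                    # start the application in the background
    exit 0
    ;;
stop)
    /usr/sbin/fuser -ck ${APPDIR}  # stop processes that are using the directory
    exit 0
    ;;
check)
    # Exit 0 if the application is running; nonzero otherwise
    if ps -e | grep myapp | grep -v grep > /dev/null 2>&1
    then
        exit 0
    fi
    exit 1
    ;;
esac
exit 2                             # unknown argument; treat as failure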
5.3.3 Redirecting Script Output
CAA scripts run with standard output and standard error streams directed
to
/dev/null
.
If you want to capture these streams,
we recommend that you employ one of the following methods, listed in
order of preference:
Use the Event Manager (EVM) (as demonstrated in the template script
/var/cluster/caa/template/template.scr
).
This is the preferred method because of the output management that EVM
provides.
Refer to the sample CAA scripts in
/var/cluster/caa/examples
for examples of using EVM
to redirect output.
Use the
logger
command to direct output to the system
log file (syslog); see
logger(1).
Messages that are sent to syslog are simple text and cannot take advantage of
the advanced formatting and searching capabilities of EVM.
A brief logger example follows this list.
Direct output to
/dev/console
.
This method does not
have a persistent record; messages appear only at the console.
Direct output to a file. With this method, be aware of log file size, and manage file space appropriately.
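For example, a single logger call in an action script might look like
the following; the tag and message text are illustrative:
/usr/bin/logger -t appsvc.scr "appsvc start: storage checks complete, starting application"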
5.3.4 Replacing nfs_ifconfig Script
TruCluster Server does not include an
nfs_ifconfig
script as TruCluster ASE did.
Replace
nfs_ifconfig
scripts with either an
ifconfig alias/-alias
statement in a CAA action
script or a cluster alias.
For more information about using an interface alias in a CAA script, see
Section 5.4.1.
See the examples in
Section 2.14
for information on using a cluster
alias with CAA single-instance applications.
5.3.5 Handling Errors Correctly
Make sure that your scripts handle errors properly on TruCluster Server. Unlike in ASE, a "filesystem busy" message is no longer returned when an application's files are still in use; therefore, an application may be started a second time even if some of its processes are still active on another member.
To prevent an application from starting on another member, make sure
that your stop script can stop all processes, or use the
fuser
command; see fuser(8) for more information.
The following example shows a shell routine that uses the
fuser
utility added to an application's
action script.
This shell routine attempts to close all open files
on the application directories
/AppDir
,
/AppDir2
, and
/AppDir3
.
If
the routine cannot close the files, the routine will then return with an
error, and the script can then exit with an error to signal that user
intervention is required.
FUSER="/usr/sbin/fuser"                    # Command to use for closing
ADVFSDIRS="/AppDir /AppDir2 /AppDir3"      # Application directories
#
# Close open files on shared disks
#
closefiles () {
    echo "Killing processes"
    for i in ${ADVFSDIRS}
    do
        echo "Killing processes on $i"
        $FUSER -ck $i
        $FUSER -uv $i > /dev/null 2>&1
        if [ $? -ne 0 ]; then
            echo "Retrying to close files on ${i} ..."
            $FUSER -ck $i
            $FUSER -uv $i > /dev/null 2>&1
            if [ $? -ne 0 ]; then
                echo "Failed to close files on ${i} aborting"
                $FUSER -uv $i
                return 2
            fi
        fi
    done
    echo "Processes on ${ADVFSDIRS} stopped"
}
5.3.6 Removing Storage Management Information
An ASE service's storage needed to be:
On a bus that was shared by all cluster members
Defined as part of the service using the
asemgr
utility
Managed by service scripts
Because the cluster file system (CFS) in TruCluster Server makes all file storage available to all cluster members (access to storage is built into the cluster architecture), you no longer need to manage file system mounting and failover within action scripts.
You can remove all storage management information from scripts on
TruCluster Server.
For example, SAP R/3 scripts may have been set up to
mount file systems within their own scripts.
You can remove these mount points.
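For example, lines such as the following, which were common in ASE
action scripts, can simply be deleted; the domain, fileset, and mount
point names are illustrative:
mount -t advfs app_dmn#app_fs /AppDir
umount /AppDir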
5.3.7 Converting Device Names
As described in Section 4.2, scripts that reference old-style device names must be modified to use the new-style device-naming model that was introduced with Tru64 UNIX Version 5.0.
During an upgrade the
clu_migrate_save
and
clu_migrate_configure
scripts gather information
for mapping device names and configure storage on the new system.
If you are not using the
clu_migrate_save
and
clu_migrate_configure
scripts, you must manually map
old-style (rz*) device names to their new-style (dsk*) counterparts.
See the
Cluster Installation
manual for information on how to manually configure storage when
upgrading a cluster.
If you used the
ase_fix_config
command to renumber buses,
save the output from the command during the upgrade and use it to verify
physical devices against bus numbers.
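For example, if you determine that ASE device rz12 corresponds to dsk5
on the new system (a hypothetical mapping), you can update a saved
action script as follows:
# sed -e 's|/dev/rrz12|/dev/rdisk/dsk5|g' \
      -e 's|/dev/rz12|/dev/disk/dsk5|g' app.scr.old > app.scr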
5.3.8 Replacing or Removing ASE Variables
ASE scripts may contain the following ASE environment variables:
MEMBER_STATE
In ASE, the
MEMBER_STATE
variable is placed in a
stop script to determine whether the script is executing on a
running system or on a system that is booting.
The
MEMBER_STATE
variable has one of the following
values:
RUNNING
BOOTING
During system startup, TruCluster Server does not provide the option to run the stop section of an action script. To perform file cleanup of references, log files, and so on, move these cleanup actions to the start section of your action script, as shown in the sketch that follows this list of variables.
ASEROUTING
The
ASEROUTING
variable no longer exists.
Its
function is replaced by the TruCluster Server
cluster alias subsystem functionality.
Remove this variable from
TruCluster Server application action scripts.
ASE_PARTIAL_MIRRORING
The
ASE_PARTIAL_MIRRORING
variable does not exist
in TruCluster Server.
Remove this variable from TruCluster Server
application action scripts.
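The following sketch shows the MEMBER_STATE change described earlier in
this list; the lock file name is illustrative:
ASE stop script (old):
    if [ "$MEMBER_STATE" = "BOOTING" ]; then
        rm -f /AppDir/app.lockfile
    fi
CAA action script, start section (new):
    rm -f /AppDir/app.lockfile     # clean up stale state before starting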
5.3.9 Exit Codes
In ASE, exit codes for all scripts return 0 for success; any other value indicates failure.
Each entry point of a CAA script returns an exit code of 0 for success
and a nonzero value for failure.
(Scripts that are generated by the
caa_profile
command from the script template
return a 2 for failure.) For the
check
section
of a CAA script, an exit code of 0 means that the application is
running.
5.3.10 Posting Events
The Event Manager (EVM) provides a single point of focus for the multiple
channels (such as log files) through which system components report
event and status information.
EVM combines these events into a
single event stream, which the system administrator can monitor in
real time or view as historical events retrieved from storage.
Use the
evmpost
command to post events to EVM; see evmpost(1) for more information.
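As a sketch, an action script might post an event as follows. The event
name is hypothetical and must correspond to an EVM event template that
is registered on your system; the sample scripts in
/var/cluster/caa/examples show the event names and posting method that
they actually use:
echo 'event { name mycompany.app.appsvc.started }' | /usr/bin/evmpost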
Note
The CAA sample scripts that are located in the
/var/cluster/caa/examples
directory all use EVM to post events. Refer to any one of them for an example.
5.4 Using an IP Alias or Networking Services
The following sections discuss networking issues to consider when moving ASE services to TruCluster Server:
Using an alias (Section 5.4.1)
Networking services (Section 5.4.2)
5.4.1 Using an Alias
If an application requires the use of an alias, you can use either a cluster alias or an interface alias.
Using a cluster alias is most appropriate when:
Multiple member systems must appear as a single system to clients of Transmission Control Protocol (TCP)-based or User Datagram Protocol (UDP)-based applications. Often multiple instances of a given application may be active across cluster members, and cluster aliases provide a simple, reliable, and transparent mechanism for establishing client connections to those members that are hosting the target application.
You want to take advantage of the cluster alias's ability to handle network availability transparently.
Using an interface alias is the preferred mechanism when:
You are running a single-instance service and one cluster member satisfies all client requests at any given time.
Performance is critical; you want all clients to reach the one member that is providing the service, and you can never afford to take an extra routing hop.
You are able to provide for client network availability on each cluster member that can host the service, by using a Redundant Array of Independent Network Adapters (NetRAIN) interface and by setting up a dependency on a client network resource in the application's CAA profile.
5.4.1.1 Cluster Alias
You must use cluster aliasing to provide client access to
multi-instance network services.
Because the TruCluster Server cluster
alias subsystem creates and manages cluster aliases on a clusterwide basis,
you do not have to explicitly establish and remove interface aliases
with the
ifconfig
command.
See
Chapter 3
for information about using cluster
aliasing with multi-instance applications.
See the
Cluster Administration
manual for more information about how to use cluster
aliasing in general.
You can use a cluster alias to direct client traffic to a single cluster
member that is hosting a single-instance application, like the Oracle8i
single server.
When you configure a service under a cluster alias and
set the
in_single
cluster alias attribute, the
alias subsystem ensures that all client requests that are directed at
that alias are routed to a cluster member that is running the
requested service as long as it is available.
However, for
single-instance applications, consider using CAA for more control and
flexibility in managing application failover.
See
Chapter 2
for information about using CAA.
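For example, a hypothetical single-instance service that listens on TCP
port 4545 might be given the in_single attribute with an
/etc/clua_services entry such as the following; the service name and
port are illustrative, and clua_services(4) and cluamgr(8) describe the
file format and how to make changes take effect:
appsvc    4545/tcp    in_single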
Although you do not need to define NFS services in a TruCluster Server
cluster to allow clients to access NFS file systems exported by the
cluster, you may need to deal with the fact that clients know these
services by the IP addresses that are provided by the ASE environment.
Clients must access NFS file systems that are served by the cluster by using
the default cluster alias or a cluster alias listed in the
/etc/exports.aliases
file.
5.4.1.2 Interface Alias
If a single-instance application cannot easily use a cluster
alias, you can continue to use an interface alias by either modifying
an existing
nfs_ifconfig
entry to use
ifconfig(8)
or by using
ifconfig
in a CAA action script.
When modifying a CAA action script, call
ifconfig alias
to assign an alias to an interface.
Use the following command
prior to starting up the application; otherwise, the application
might not be able to bind to the alias address:
ifconfig interface_id alias alias_address netmask mask
To deassign an alias from an interface, call
ifconfig -alias
after all applications and
processes have been stopped; otherwise, an application or process
might not be able to continue to communicate with the interface alias.
The following example contains a section from a sample script:
.
.
.
# Assign an IP alias to a given interface
IFCNFG_ALIAS_ADD="/sbin/ifconfig tu0 alias 16.141.8.118 netmask 255.255.255.0"
#
# Deassign an IP alias from an interface
IFCNFG_ALIAS_DEL="/sbin/ifconfig tu0 -alias 16.141.8.118 netmask 255.255.255.0"
.
.
.
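Within the start and stop sections of the action script, these variables
might then be used as follows; the application command and directory are
hypothetical:
start)
    ${IFCNFG_ALIAS_ADD}            # assign the alias before the application starts
    /usr/local/bin/myapp &
    ;;
stop)
    /usr/sbin/fuser -ck /AppDir    # stop the application processes first
    ${IFCNFG_ALIAS_DEL}            # then remove the alias
    ;;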
5.4.2 Networking Services
In the TruCluster Available Server Software and TruCluster Production
Server Software products, the
asemgr
utility provided a
mechanism to monitor client networks.
In Tru64 UNIX Version 5.0 or later, client network monitoring is a feature of the base operating system. The NetRAIN interface provides protection against certain kinds of network connectivity failures. The Network Interface Failure Finder (NIFF) is an additional feature that monitors the status of its network interfaces and reports indications of network failures. Applications that are running in a TruCluster Server cluster can use the NetRAIN and NIFF features in conjunction with the Tru64 UNIX Event Manager (EVM) to monitor the health of client networks.
For more information about NetRAIN and NIFF, see the
Tru64 UNIX
Network Administration: Connections
manual,
niffd(8), niff(7), and nr(7).
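As a sketch, a CAA network resource that monitors a client subnet, and
the application profile line that depends on it, might look like the
following. The attribute names follow the CAA profile format; the
resource name and subnet are illustrative, and Chapter 2 describes the
exact attributes:
NAME=client_net
TYPE=network
SUBNET=16.141.8.0
In the application's profile, name the network resource as a dependency:
REQUIRED_RESOURCES=client_net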
5.5 File System Partitioning
CFS makes all files accessible to all cluster members. Each cluster member has the same access to a file, whether the file is stored on a device that is connected to all cluster members or on a device that is private to a single member. However, CFS does allow you to mount an AdvFS file system so that it is accessible to only a single cluster member. This is called file system partitioning.
ASE offered functionality like that of file system partitioning. File system partitioning is provided in Version 5.1B to ease migration from ASE. See the TruCluster Server Cluster Administration manual for information on how to mount partitioned file systems and any known restrictions.
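A minimal sketch of such a mount, assuming the server_only mount option
that the Cluster Administration manual describes; the domain, fileset,
and mount point are illustrative:
# mount -t advfs -o server_only acct_dmn#acct_fs /accounting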