13 May 1988

                     LAVC INSTALLATION & MANAGEMENT

                          TABLE OF CONTENTS

1     INTRODUCTION ....................................... 1
2     OBJECTIVES ......................................... 1
3     REFERENCE DOCUMENTS ................................ 1
4     LAVC MANAGEMENT .................................... 2
4.1   VMS SYSTEM MANAGEMENT .............................. 2
4.2   LAVC MANAGEMENT .................................... 2
4.3   LAVC AND GROUP MANAGEMENT .......................... 3
5     LAVC IMPLEMENTATION ................................ 5
5.1   Implementation Strategies .......................... 5
5.2   LAVC nodes ......................................... 7
5.3   VMS SYSTEM TREE CONFIGURATIONS ..................... 8
5.4   LAVC NODE startup ................................. 11
5.5   LAVC NODE Access Control .......................... 11
5.6   LAVC NODE Shutdown ................................ 12
5.7   LAVC Data Base .................................... 12
5.8   Adding a Diskless Node ............................ 14
5.9   Removing a Diskless Node .......................... 15
5.10  Adding a Full-function Node ....................... 15
5.11  Installing Layered Products ....................... 16
5.12  Installing VMS system updates ..................... 16
6     CONSTRAINTS AND LIMITATIONS ....................... 17
7     NAMING CONVENTIONS ................................ 17
8     ROUTINES and FUNCTIONS ............................ 18
8.1   LAVCSTART.COM ..................................... 18
8.2   LAVCWATCH.COM ..................................... 19
8.3   LAVCDISKS.COM ..................................... 20
8.4   LAVCLOGNM.COM ..................................... 20
8.5   LAVCPRINT.COM ..................................... 21
8.6   LAVCBATCH.COM ..................................... 21
8.7   LAVCLOGIN.COM ..................................... 22
8.8   LAVCSHUT.COM ...................................... 22
8.9   NODESTART.DAT ..................................... 23
8.10  NODESPEC.COM ...................................... 23
8.11  MGMSYSCPY.COM (available) ......................... 24
8.12  MGMDATUPD.COM (available) ......................... 25
8.13  MGMSYSUPD.COM (not yet available) ................. 26
8.14  MGMREMCMD.COM ..................................... 26
8.15  MGMTAILOR.COM ..................................... 26
8.16  MGMQUESTA.COM ..................................... 27
8.17  MGMCOLLECT.COM .................................... 27
9     DATA STRUCTURES ................................... 28
9.1   NODESTART.DAT ..................................... 28
10    ERROR-EXCEPTION HANDLING .......................... 29

1 INTRODUCTION

This document has been extracted from our LAVC management specifications. It defines basic implementation strategies and the cluster management approach. It also introduces procedures designed to manage and maintain the cluster environment. The document is NOT fully up to date, since LAVC management requires a continuous development effort.

2 OBJECTIVES

* Highlight basic LAVC implementation strategies.
* Define LAVC management tasks and responsible personnel.
* Define LAVC member node configuration techniques.
* Define the structure and contents of the LAVC configuration database.
* Standardize LAVC node startup and shutdown procedures.
* Provide for propagation of VMS and other software upgrades to LAVC members.
* Standardize the approach for the configuration of test LAVC subsets.

3 REFERENCE DOCUMENTS

1. Digital AA-JP20A-TE VMS Local Area VAXcluster Manual
2. Digital AA-Y513A-TE Guide to VAXclusters
3.
Digital AI-Y514B-TE Guide to VAX/VMS Software Installation

4 LAVC MANAGEMENT

4.1 VMS SYSTEM MANAGEMENT

VMS system management has two main responsibilities:

- Make decisions that relate to optimizing overall performance and operating efficiency of the system
- Perform tasks that relate to day-to-day overall management and control of the system

The basic responsibilities above may be broken down into:

- Installing and upgrading the system
- Making system-specific modifications
- Controlling system operation
- Maintaining system security
- Optimizing system performance
- Planning for future requirements

4.2 LAVC MANAGEMENT

In the standard DEC LAVC configuration using a single BOOT node, multi-node management is simplified to management of a single (boot) node. There is a single copy of VMS used by all the cluster members, a single set of authorization files (SYSUAF.DAT, NETUAF.DAT, RIGHTSLIST.DAT) and a single queue system (JBCSYSQUE.DAT). Since not all the LAVC satellites have the same hardware configuration, there is still some need for individual node-specific set-up, tuning and node access management.

Our LAVC is configured using multiple BOOT nodes, since each node has to be capable of stand-alone operation (except for diskless nodes). Thus, there are multiple copies of VMS and of some layered VMS products. Each boot node must have a current copy of the system management files (SYSUAF.DAT etc.) for stand-alone operation. However, when booted as a cluster member, each such node uses the common, cluster-wide LAVC database.

Stand-alone operation is typically needed for testing product requirements and installation procedures, and is considered to be a subset of LAVC operation.
Therefore it uses the same configuration, set-up and management files as in the LAVC, using local copies of the LAVC management files. For security reasons, any changes made to system management files in stand-alone mode are NOT applied back to the LAVC.

4.3 LAVC AND GROUP MANAGEMENT

Due to our specific environment, management responsibilities are split between LAVC and GROUP MANAGERS. The major task of the GROUP MANAGER is to manage his development group's computing environment in the LAVC, especially on nodes assigned to the group. In stand-alone operation, the GROUP MANAGER has full management control of the node. The following lists define the basic ranges of responsibilities:

LAVC MANAGER

1. Overall LAVC configuration planning and control
2. LAVC performance monitoring and tuning
3. Hardware maintenance and capacity planning
4. MASTER and SPARE node software management
5. LAVC database management:
   - Authorization file and rights database. Only the LAVC manager may add/create/modify LAVC user accounts.
   - DECnet proxy logins
   - Cluster-wide queue system
   - Cluster-wide logical names
   - Cluster-wide startup and login procedures
6. LAVC-wide resource management (disk and account quotas)
7. LAVC accounting
8. Coordination with GROUP managers
9. Management of cluster-wide installed non-DEC products
10. Maintenance of LAVC management procedures
11. Implementation and maintenance of LAVC user training
12. Maintenance of LAVC site and master document sets

GROUP MANAGER

1. Define software available on the group's workstations
2. Plan and control releases of software installed on the group's workstations
3. Plan and authorize workstation use on a group basis (which group(s) have access)
4. Plan workstation resource utilization
5. Inform the LAVC manager of group requirements in the LAVC
6.
Manage and maintain the group's workstation startup data file
7. Manage and maintain GROUP startup and login files
8. Plan and assist in tuning the group workstations' environment
9. Manage stand-alone workstation operation (used for software installation testing and product requirement evaluation). In stand-alone operation, the GROUP manager has full control over the workstation.

Group manager accounts are identified by an account name of the form gggMGR, where ggg denotes the particular group. In STAND-ALONE mode, the GROUP manager account has the same privileges and quotas as the SYSTEM account. In the LAVC, the group manager's privileges are limited to:

    GRPNAM    may insert in group logical name table
    DETACH    may create detached processes
    LOG_IO    may do logical I/O
    GROUP     may affect other processes in same group
    PRMCEB    may create permanent common event clusters
    PRMMBX    may create permanent mailboxes
    TMPMBX    may create temporary mailboxes
    OPER      operator privilege
    EXQUOTA   may exceed quotas
    NETMBX    may create network devices
    VOLPRO    may override volume protection
    PHY_IO    may do physical I/O
    PRMGBL    may create permanent global sections
    GRPPRV    group access via system protection

5 LAVC IMPLEMENTATION

Our specific LAVC installation is based on individually booting nodes, one of which is declared the cluster "MASTER" and one the "SPARE" (failover). Each node serves its disks for cluster-wide data access, and each node provides batch queue(s) and device queue(s) for connected devices.

The MASTER node's system disk holds a complete VMS system, including all the products available cluster-wide, and all of the cluster configuration data (the LAVC database). The SPARE node's system disk is a backup copy of the MASTER; its data are used in case the MASTER node (or disk) fails.
The LAVC management provides for semi-automatic failover of the LAVC database between the MASTER and SPARE disks. It does NOT provide failover capability for diskless nodes.

| When technically possible, dual porting of the LAVC MASTER (and
| SPARE) disks will be implemented. This will allow for
| semi-automatic DISK failover, including failover for diskless
| nodes.

Any disk-based node holds (at least) the "required" VMS operating system. This allows for stand-alone node operation, and reduces the disk-server load on the LAVC "boot" node. Only missing VMS components are accessed via the LAVC software from the MASTER or SPARE node's disk (using a search list in sys$sysroot). Thus, for example, help libraries, code examples and infrequently accessed libraries need not be duplicated to be available. In contrast, a system booting from its own disk may survive a MASTER node failure and continue operation using the SPARE node's disk.

5.1 Implementation Strategies

To satisfy the changing requirements of an R&D environment, the LAVC configuration MUST provide for flexible re-configuration, including:

o Capability for any (disk-based) LAVC member to leave the LAVC
o Easy re-configuration of any node to serve as "boot" node for several (diskless) satellites
o Co-existence of several VMS releases in the LAVC
o Co-existence of several releases of VMS layered products in the LAVC
o A simple way to propagate any software update to selected LAVC nodes
o Centralized LAVC management and organization.
To satisfy the goals above, the entire LAVC configuration must be implemented using files completely separated from the standard VMS system directory tree. The directory tree containing such data is referred to as the LAVC database throughout this document.

5.2 LAVC nodes

For the LAVC, each node may be classified as one of the following:

* MASTER NODE - the main LAVC node. Its system disk holds the full VMS operating system and the primary copy of the LAVC management data.

* SPARE NODE - the failover for the MASTER node. Its system disk holds an up-to-date copy of VMS and the LAVC management data from the MASTER.

* FULL NODE - a fully functional cluster member, with a VMS (subset) copy on its disk. A full node may be used as an LAVC "BOOT" node for diskless nodes.

* DISKLESS NODE - a limited-functionality node which does NOT have a VMS system on its disks. It must boot VMS remotely, using one of the nodes above as a "boot" node (in DEC LAVC terminology, a SATELLITE NODE).

Any node (except for DISKLESS) may leave the cluster and operate in STAND-ALONE mode, using a local copy of VMS and a subset of the LAVC management data.
The differences between cluster and stand-alone operation may be summarized as:

CLUSTERED NODE (WORKSTATION)

- Direct access to large, high-speed disk drives
- Direct access to any spooled printer in the cluster
- Direct access to any batch queue in the cluster
- Cluster-supported data backup
- Full set of VMS utilities and other licensed products
- Access to any SW product available in the cluster
- User privileges are under strict control

STAND-ALONE NODE (WORKSTATION)

- Group manager has full control of the system (workstation)
- System may be used for any experiments at the system level
- VMS support may be limited (not all utilities and products)
- The rest of the cluster may be accessed via DECNET
- Node data can not be backed up by the LAVC management

5.3 VMS SYSTEM TREE CONFIGURATIONS

The following VMS system tree schematics are included to explain the placement of VMS components in the standard VMS system tree, as opposed to the standard LAVC and our LAVC configurations.

STANDARD VMS SYSTEM TREE

                         sys$sysdevice
                          _____|_____
                         |  [SYS0.]  |    sys$sysroot = DUA0:[SYS0.]
                         |___________|
                               |
        _______________________|_______________________
        |          |           |           |          |
    ____|___   ____|___    ____|___    ____|___   ____|___
   |[SYSMGR]| |[SYSEXE]|  |[SYSLIB]|  |[SYSHLP]| |[SYSUPD]|
   |________| |________|  |________|  |________| |________|
  sys$manager sys$system sys$library  sys$help   sys$update

LAVC BOOT/SATELLITE NODE VMS SYSTEM TREE

                         sys$sysdevice
                          _____|_____
                         |  [SYSn.]  |   sys$specific = DUA0:[SYSn.]
                         |___________|
                         |[V4COMMON.]|   sys$common = DUA0:[V4COMMON.]
                         |___________|   (see note below)
                               |
        (each of the [SYSMGR], [SYSEXE], [SYSLIB], [SYSHLP] and
         [SYSUPD] directories exists in both the node-specific
         tree and the common tree)

         sys$sysroot = sys$specific,sys$common

Any VMS files are accessed using the logical name sys$sysroot. Since in the cluster environment sys$sysroot is a search list, each file is looked up in the node-specific directory (sys$specific:[nnnn]) first; if not found, the common directory tree (sys$common:[nnnn]) is used.

NOTE: The examples here refer to sys$common as DUA0:[V4COMMON.]. In the true VMS implementation, sys$common translates to DUA0:[SYSn.SYSCOMMON.], which is an alternate entry name for the directory DUA0:[V4COMMON]. Before an attempt to delete the system-specific tree, this entry MUST be removed:

     $ SET FILE /REMOVE DUA0:[SYSn]SYSCOMMON.DIR

In the LAVC environment, most of the VMS files are located in the common directory tree [V4COMMON.].
Only node-specific files (such as page files, system parameters and accounting data) are placed in the system-specific tree [SYSn.]. There is a separate [SYSn.] specific directory tree for each satellite using a particular BOOT node.

LAVC satellite node configuration is performed by the procedure sys$manager:SATELLITE_CONFIG.COM. This procedure creates (or removes) the system-specific [SYSn.] tree containing all the node-specific files (on the BOOT node), and prepares everything for satellite boot and first startup.

LAVC FULL NODE VMS SYSTEM TREE

                          _____|_____
                         |  [SYSn.]  |   sys$specific
                         |___________|   sys$common
                         |___________|   master$DUA0:[V4COMMON.]
                         |_ _ _ _ _ _|   spare$DUA0:[V4COMMON.]
                         |_ _ _ _ _ _|
                               |
        (each of the [SYSMGR], [SYSEXE], [SYSLIB], [SYSHLP] and
         [SYSUPD] directories exists in the local specific and
         common trees, and in the MASTER and SPARE common trees)

    sys$sysroot = node$DUA0:[SYSn.],node$DUA0:[SYSn.SYSCOMMON],
                  master$DUA0:[V4COMMON],spare$DUA0:[V4COMMON]
                  (all concealed device names translated)

The LAVC FULL node basic configuration corresponds to that of an LAVC BOOT node. Thus any such node may be used as a BOOT node for diskless satellites.
In addition, the VMS common directory trees on the MASTER and SPARE nodes are added to provide files not available on the local node's disk.

Contrary to a standard DEC VMS configuration, our cluster-wide management files are located in the LAVC database (on the MASTER node) instead of in the sys$common system tree, and are accessed using logical name pointers:

- SYSUAF.DAT - common cluster authorization file
- NETUAF.DAT - common cluster network proxy file
- RIGHTSLIST.DAT - common cluster rights (identifier) database
- JBCSYSQUE.DAT - common cluster JOB queue control file
- VMSMAIL.DAT - common cluster MAIL database

| The DECnet database NETNODEREMOTE.DAT can NOT be made
| cluster-wide, since it contains download information for DISKLESS
| satellites, which is boot-node specific. Therefore, each (FULL)
| node must have its own NETNODEREMOTE.DAT in sys$common:[SYSEXE],
| and DECnet database updates MUST be propagated explicitly.

The full node common tree [V4COMMON.] need not contain all the VMS files and layered VMS products. Such files and products are found in the [V4COMMON.] tree of the MASTER or SPARE node. For redundancy, the MASTER [V4COMMON.] tree and the LAVC database are duplicated on the SPARE node. Should the MASTER node fail, any necessary file will automatically be located on the SPARE node.

In some instances, an LAVC member configuration may prefer to use a file (product) located on the MASTER node, even though it has a copy in its own system tree (for example, when a different release of a VMS product is loaded locally for testing). To standardize access to files located on the LAVC master node, the logical name lavc$root always points to the MASTER and SPARE nodes' [V4COMMON.] trees. In a stand-alone configuration, lavc$root is identical to sys$sysroot.
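As a minimal sketch, lavc$root could be defined on a full node as a system-wide search list over the MASTER and SPARE common trees. (This is illustrative only: master$DUA0 and spare$DUA0 stand for the concealed device logicals of the MASTER and SPARE node disks used in the diagrams above; the exact qualifiers used by our procedures may differ.)

     $ ! Illustrative only - lavc$root as a search list of rooted
     $ ! directories on the MASTER and SPARE common trees
     $ DEFINE/SYSTEM/EXECUTIVE_MODE/TRANSLATION_ATTRIBUTES=CONCEALED -
           LAVC$ROOT  MASTER$DUA0:[V4COMMON.],SPARE$DUA0:[V4COMMON.]

In stand-alone mode, the same name would simply be given the translation of sys$sysroot.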
5.4 LAVC NODE startup

A common, generic SYSTARTUP.COM is used on every LAVC member node. Unlike a typical SYSTARTUP.COM, our version does NOT contain any node-specific commands. The only specific information hard-coded in the generic SYSTARTUP.COM is the identity of the MASTER and SPARE node disks. Should those change, it is necessary to update the generic sys$common:[SYSMGR]SYSTARTUP.COM on each node in the cluster.

The node-specific startup commands are located in the LAVC database under the directory lavc$data:[nodename]. This allows the same node to join the cluster booting from a different boot node (or its own disk) without any need to change or move startup files.

The generic startup procedure executes nested startup files according to a node-specific list. The majority of such procedures are common and executed on many different nodes (such as product startups, or commonly used LAVC procedures stored in lavc$data:[LAVCCOM]). Nested procedures should be coded as re-usable, to allow for testing and re-start if necessary. Since the startup files are maintained in a common location, significant differences in individual node startups should not occur.

5.5 LAVC NODE Access Control

The LAVC assumes a common, cluster-wide authorization file, allowing any user to use any node in the cluster. In our R&D environment it may be necessary to restrict access to some nodes. The LAVC management implements access control based on user group membership. For interactive users, each node may define different access rights for users belonging to the same accounting group.
Node access is controlled by the logical name lavc$access, which defines a list of group access rights in the form:

     DEFINE/SYS LAVC$ACCESS "+001-230+250-120"

where:

o +001 grants access to members of the 001 (system) group
o -230 denies access to members of the 230 group
o +ALL grants access to anybody unless explicitly denied

Users not explicitly listed in the node access list are allowed to use the node in "RESTRICTED" mode (lowered priority ...). Users denied access to a node are notified by the login process and logged off. Access control does NOT apply to NETWORK access (controlled by the PROXY database) or to BATCH access (which is required to allow automatic load balancing).

5.6 LAVC NODE Shutdown

A special account "SHUT" is provided to perform the proper VMS shutdown sequence. On workstation nodes, "SHUT" may be used by any local workstation user; no privileges are required. The SHUT account allows the node to be rebooted either as an LAVC member or as a stand-alone node. It also provides an option to force a local node to DISMOUNT its disks from the cluster with /ABORT (should the node leave the cluster for a longer period). Without this option, any access to an unavailable disk waits (hangs) until the node comes back.

| NOTE: Failure to DISMOUNT cluster-wide mounted disks for a
| node booting stand-alone (off-cluster) may force the ENTIRE
| CLUSTER to be shut down. A stand-alone boot changes the mount
| count on the node's disks. If any such disk was NOT dismounted
| from the cluster BEFORE the node comes back, the cluster's
| attempt to re-mount that disk fails on the changed mount count,
| resulting in endless mount verification.

5.7 LAVC Data Base

The LAVC Data Base resides on the LAVC MASTER and SPARE disks. Necessary subsets of the database are copied to local nodes' disks to allow stand-alone operation.
Such subsets are updated on a daily basis.

The LAVC Data Base is pointed to by the logical root device lavc$data. This device contains the following directories and files:

[LAVCCOM] - Basic, common cluster-wide DATA and PROCEDURES:

o SYSUAF.DAT - LAVC-wide authorization file
o NETUAF.DAT - LAVC-wide proxy database
o RIGHTSLIST.DAT - LAVC-wide rights identifier database
o JBCSYSQUE.DAT - LAVC common queue file
o LAVCWATCH.COM - dynamic re-configuration process startup
o LAVCDISKS.COM - disk-mounting procedure for all the disks
o LAVCLOGNM.COM - cluster-wide logical names set-up
o LAVCPRINT.COM - print queue/forms set-up for the entire cluster
o LAVCBATCH.COM - batch queue set-up for the entire cluster
o LAVCSHUT.COM - node shutdown procedure ("SHUT" user login)
o LAVCLOGIN.COM - cluster-wide login procedure
o LAVCNOTE.TXT - notice displayed on user login
o SYSTARTUP.COM - master copy of the SYSTARTUP.COM procedure
o SYSHUTDWN.COM - master copy of the SYSHUTDWN.COM procedure

[node] - Node-specific data and procedures (should be minimal):

o NODESTART.DAT - node startup procedures list
o NODESPEC.COM - node-specific terminals/devices set-up
o NODENOTE.TXT - node-specific login notice
o MODPARAMS.DAT - copy of the node's sys$system:MODPARAMS.DAT

[GRPnnn] - Group-specific data and procedures:

o GRPnnnSTART.COM - group startup definitions
o GRPnnnLOGIN.COM - group-specific LOGIN procedure

The [GRPnnn] directory is owned and maintained by the GROUP MANAGER. Using the group startup and login files, the group manager effectively controls his group's working environment on any node in the cluster.
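The redirection of the standard VMS files into [LAVCCOM] can be sketched in DCL. A minimal, illustrative example, assuming the standard logical names SYSUAF and RIGHTSLIST translated by VMS (the actual LAVCLOGNM.COM defines the full set):

     $ ! Illustrative only - point standard VMS files into the
     $ ! LAVC database instead of the local sys$system directory
     $ DEFINE/SYSTEM/EXECUTIVE_MODE SYSUAF     LAVC$DATA:[LAVCCOM]SYSUAF.DAT
     $ DEFINE/SYSTEM/EXECUTIVE_MODE RIGHTSLIST LAVC$DATA:[LAVCCOM]RIGHTSLIST.DAT

Because the names are defined over lavc$data, a change of the MASTER or SPARE disk only requires re-translating lavc$data; the definitions above need not change.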
[LAVCMGM] - Management / Maintenance files and procedures:

o MGMSYSCPY.COM - VMS copy to a new node's system device
o MGMSYSUPD.COM - VMS system update to another node
o MGMDATUPD.COM - LAVC database update to other nodes
o MGMCOLLECT.COM - LAVC management data collection
o MGMQUESTA.COM - LAVC JOB/QUEUE system re-start
o MGMTAILOR.COM - VMS tailoring procedure
o MGMADDUSR.COM - LAVC user authorization procedure
o MGMREMCMD.COM - execute command(s) on remote node(s)

With the files above taken from the LAVC database, only the following management files are used from the VMS system directory tree:

o sys$common:[sysmgr]SYSTARTUP.COM (copy from the LAVC database)
o sys$common:[sysmgr]SYSHUTDWN.COM (copy from the LAVC database)
o sys$specific:[sysmgr]LAVCWATCH.COM (created by the LAVC-wide one)
o sys$specific:[sysmgr]ACCOUNTNG.DAT - node accounting data
o sys$specific:[sysmgr]OPERATOR.LOG - node operator log (if used)
o sys$specific:[sysexe]NET*.DAT - node DECNET database
o sys$specific:[sysexe]MODPARAMS.DAT (and other AUTOGEN files)
o sys$specific:[sysexe]PAGEFILE.SYS (if not remote)
o sys$specific:[sysexe]SWAPFILE.SYS (if not remote)
o sys$specific:[sysexe]SYSDUMP.DMP (if used)

5.8 Adding a Diskless Node

A "DISKLESS" node is any node which does NOT USE a VMS copy on its local disk (a node with a VMS copy may still be booted as "diskless").

1. Create a node database in the directory lavc$data:[nodename] on the LAVC MASTER node. This database should contain tailored versions of NODESTART.DAT and NODESPEC.COM.

2. Use the DEC procedure sys$manager:SATELLITE_CONFIG.COM on the selected BOOT node to prepare the BOOT node for the remote satellite.

3. Inspect / modify the MODPARAMS.DAT in the node-specific system tree.
In most cases the parameters required for the UIS (VWS) software must be added (use the files MGMGPXPAR.DAT or MGMSTARPAR.DAT from lavc$data:[LAVCMGM]).

4. Boot the new diskless node. It automatically configures the DECNET database, performs AUTOGEN and reboots. Since the boot node's sys$common:SYSTARTUP.COM is already the generic LAVC startup procedure, the rebooted system starts using the files from lavc$data:[nodename].

5.9 Removing a Diskless Node

Always use sys$manager:SATELLITE_CONFIG.COM to remove a diskless node; manual removal is a risky operation. Since SATELLITE_CONFIG.COM does NOT allow removal of a node which is currently a cluster member, node removal MUST be performed with the node down.

5.10 Adding a Full-function Node

A full-function node is capable of being the "BOOT" node for diskless satellites. It has a VMS copy on its local disk, including the DECNET and LAVC software (some libraries, help files and examples need not be present).

1. Create the node database lavc$data:[nodename] on the LAVC MASTER node. The database should contain tailored versions of NODESTART.DAT and NODESPEC.COM.

2. Boot the new node "DISKLESS" as described above. You do not have to create large page files or modify MODPARAMS.DAT, since the next step requires only limited VMS functionality. Additionally, the UIS (VWS) software does not have to be started, and lavc$data:[nodename] may be empty.

3. Use the LAVC management procedure MGMSYSCPY.COM to down-load the required (sub)set of VMS. MGMSYSCPY.COM must be executed on the TARGET node (it checks the node's HW configuration).

4. Tailor the MODPARAMS.DAT created by MGMSYSCPY.COM, if necessary.

5. Shut the diskless node down and remove its root on the boot node using sys$manager:SATELLITE_CONFIG.COM.

6.
Boot the new node from the local disk. The startup then automatically configures the DECNET database, performs AUTOGEN and reboots. Since MGMSYSCPY provided the generic startup procedure in sys$common:SYSTARTUP.COM, the rebooted system starts by using the files from lavc$data:[nodename].

7. Tailor the target VMS system using MGMTAILOR.COM and the particular tailoring control file.

| 5.11 Installing Layered Products
|
| VMS layered products are always installed using VMSINSTAL.COM.
| Our LAVC management provides a special, captive account VMSINST
| which gives any authorized user a full interface to
| VMSINSTAL.COM. Product installation thus may be performed by any
| workstation user AUTHORIZED to do so by the LAVC manager (given
| the password).
|
| The VMSINST account may be used for software installation
| development and testing as well. In any case, VMSINST is
| targeted for use on "FULL" nodes (workstations holding their own
| copy of VMS). Using VMSINST on boot nodes (11/780) is prohibited
| for safety reasons.
|
| 5.12 Installing VMS system updates
|
| Minor VMS updates may be installed using the approach for
| layered products, using the VMSINST account. Major VMS updates
| MUST be installed by the LAVC manager.
|
| Since our LAVC uses multiple copies of the VMS operating system,
| procedures will be developed to PROPAGATE updates to individual
| nodes, as opposed to performing the VMS update on each node
| separately.

6 CONSTRAINTS AND LIMITATIONS

The initial design contains only limited failover from the MASTER to the SPARE node.
The LAVCWATCH procedure changes the LAVC logical names, thus affecting any subsequent file access. However, any files open on the MASTER at the moment the MASTER hangs (or crashes) will remain in access, effectively blocking process execution until the MASTER (or SPARE) comes back, or the process is explicitly terminated. This also applies to JBCSYSQUE.DAT; thus the job/queue system must be restarted on each node. The LAVCWATCH procedure accomplishes this task by killing and re-starting the JOBCTL process.

7 NAMING CONVENTIONS

The following name prefixes are mandatory for any files used for LAVC management:

* LAVC... for files used in LAVC startup and operation
* NODE... for any node-specific files used in startup
* GRPnnn... for group-specific files
* MGM... for management utility files

Any logical names related to LAVC startup and operation use the prefix LAVC$....

8 ROUTINES and FUNCTIONS

8.1 LAVCSTART.COM

The common, generic SYSTARTUP.COM used on any LAVC node. Embedded within the procedure body is the information about the LAVC MASTER and SPARE nodes, to allow location of the LAVC database. This procedure must be updated only if the basic LAVC configuration changes. The master copy of the procedure is maintained in lavc$data:[LAVCCOM] as SYSTARTUP.COM, and is copied into each node's sys$manager directory after the VMS system load, or if the LAVC configuration (MASTER, SPARE disk) changes.

Procedure flow:

1. Locates the LAVC database, mounting the MASTER and SPARE nodes' disks.

2.
Starts the LAVCWATCH process, which is responsible for dynamic LAVC reconfiguration (should MASTER or SPARE node fail). LAVC_WATCH process (along with other functions) maintains basic LAVC logical names: * lavc$data - pointer to LAVC Data Base. * sys$sysroot - with added roots on MASTER and SPARE nodes * lavc$root - pointer to roots on MASTER and SPARE nodes 3. Locates node-specific file lavc$data:[node]NODESTARTUP.DAT This file is a list of procedures to perform during particular node startup. 4. Performs all procedures listed in NODESTARTUP.COM. Procedures are excuted synchronously, or as a detached process running uder specified UIC. 5. Checks for any errors encountered during system startup. If any, creates notification message to be displayed by system-wide login, and mails such a message to LAVC manager (if possible). 6. If no errors, procedure issues notification reply and allows user logins. In the case of severe (fatal) errors, it restricts access to holders of the OPER privilege (LAVC and GROUP managers). If the "private" bootstrap has been performed, restricts logins to holders of OPER privilege, without user notification. 18 LAVC INSTALLATION & MANAGEMENT ROUTINES and FUNCTIONS LAVCWATCH.COM LAVCWATCH.COM LAVCWATCH.COM LAVCWATCH.COM 8.2 LAVCWATCH.COM Cluster-wide procedure to monitor any significant cluster configuration changes and act accordingly. Procedure is automatically executed by the generic LAVC startup (doe NOT need to be listed in NODESTART.DAT). On the first execution (at startup time), procedure performs configuration functions, and creates it's copy on the local system's disk. This copy is then executed as a detached process LAVC_WATCH. (Local copy is used to prevent LAVC_WATCH hang-up if the MASTER disk goes off-line). In context of detached process, procedure periodically checks presence of the MASTER and SPARE node in cluster. On any configuration change, it performs configuration functions. 
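The detached polling loop could be sketched in DCL as follows. This is an illustration only, not the actual implementation: the node name "MOZART", the 5-minute interval, and the spare disk specification are assumptions.

```
$! LAVC_WATCH polling loop -- illustrative sketch only.
$! "MOZART" (MASTER), "VIVALD" (SPARE) and the interval are assumed names.
$ LOOP:
$     WAIT 00:05:00.00
$!    Is the MASTER still a cluster member?
$     IF F$GETSYI ("CLUSTER_MEMBER", "MOZART") THEN GOTO LOOP
$!    MASTER left the cluster: re-point the data base at the SPARE disk
$     DEFINE /SYSTEM /EXEC LAVC$DATA VIVALD$DUA0:[LAVCDATA]
$     GOTO LOOP
```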
Configuration functions currently include:

o Maintenance of the main LAVC logical names: lavc$data, lavc$root, sys$sysroot. On any LAVC configuration change, the logical names are adjusted to the new configuration.

o Cluster quorum monitoring. If the cluster membership drops (either due to a regular shutdown or a node crash), the process adjusts the quorum to the possible minimum to prevent a cluster hang-up on lost quorum.

o Job/Queue system restart. If the cluster configuration (the location of JBCSYSQUE.DAT) changes, the Job/Queue system must be restarted. The LAVC_WATCH process waits (10 minutes) after the LAVC configuration change; if the change appears to be permanent, it restarts the Job/Queue system using the management procedure MGMQUESTA.COM.

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.3 LAVCDISKS.COM

Cluster-wide procedure to mount any disks public to the CLUSTER. The procedure contains an embedded list of all LAVC-served disk devices and their labels. Traversing its list, the procedure mounts any disk it finds. In the LAVC environment, the mounts are cluster-wide. In stand-alone mode, the procedure finds local disks only, and mounts them locally.

The procedure also accepts additional arguments:

- P1 = DISMOUNT or REBUILD (default = MOUNT)
- P2 = nodename (operation limited to disks on "node")
- P3 = /CLUSTER (dismount is /Abort/Cluster)

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.4 LAVCLOGNM.COM

LAVC system-wide logical names. The procedure creates CLUSTER / SYSTEM wide logical names on each node in the cluster. Such names include:

- Logical names for the standard VMS system files SYSUAF.DAT, NETUAF.DAT, RIGHTSLIST.DAT (pointing to the LAVC data base).
- Functional device logical names (refer to PTP 147-14670-000 "R&D Account Reconfiguration")

- Other logical names as needed.

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.5 LAVCPRINT.COM

LAVC-wide device queue setup. The procedure:

1. Starts the queue manager using the cluster-wide JOB controller file lavc$data:[LAVCCOM]JBCSYSQUE.DAT

2. Defines all the named forms used in the LAVC

3. Defines (initializes) the cluster-wide queues

4. Starts the queues for devices present on the local node

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.6 LAVCBATCH.COM

LAVC-wide batch system setup. The procedure:

1. Starts the queue manager (if not already active).

2. Defines (initializes) all cluster-wide batch queues. There will be:

   - A generic, cluster-wide SYS$BATCH queue
   - The node's standard queue node$BATCH (later with generic processing enabled)
   - Special, named queues created where necessary (build queues, maketest queues etc.). Such queues must be explicitly set /NoEnable_Generic to prevent their use for normal (generic) jobs.

3. Starts the queues present on the local node.

In stand-alone mode, the logical name SYS$BATCH points to the node$BATCH queue.

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.7 LAVCLOGIN.COM

Standard system-wide LOGIN procedure:

1. Checks for the "Group Manager" account, and adjusts the account privileges if necessary.

2. Checks if the user is authorized to use this node, and terminates the process if not (giving notification).
The check is based on the user account classification.

3. Modifies the user's base priority based on the user's account rights.

4. Displays lavc$data:[LAVCCOM]LAVCNOTE.TXT (cluster-wide daily notice; if present and in interactive mode)

5. Displays lavc$data:[LAVCCOM]NODENOTE.TXT (node-specific daily notice; if present and in interactive mode)

6. Displays sys$manager:NODEBOOT.TXT (node startup error log, created by the system startup in case of errors)

7. Executes the group login procedure lavc$data:[LAVCGRPnnn]GRPnnn.COM (in Interactive, Batch, Network or Other mode) if available.

8. Executes the user's private procedure sys$login:LOGIN.COM

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.8 LAVCSHUT.COM

Captive LOGIN procedure for the "SHUT" account. This account allows a workstation user to properly shut down his workstation without the need for any special privileges (the SHUT account has all the privileges required). SHUTDOWN may be executed ONLY on WORKSTATION nodes, and ONLY by an interactive user logged in locally to the workstation. The procedure:

1. Prompts for the next boot configuration (as LAVC member or STAND-ALONE) and updates the system parameter VAXCLUSTER, if necessary.

2. Prompts for the standard shutdown questions

3. Performs the shutdown with the options REMOVE_NODE (in LAVC) and REBOOT_CHECK

4. Logs the shutdown information

The procedure master copy is maintained in lavc$data:[LAVCCOM]; updates are (daily) propagated to each node's [LAVCDATA.LAVCCOM] directory.

8.9 NODESTART.DAT

Node-specific list of startup command procedures. The LAVC management assumes that most startup operations / procedures will be common to many LAVC members. However, each node may use a different sub-set of such procedures.
Any commands executed during node startup must therefore come from either a LAVC COMMON procedure or a PRODUCT-specific startup, or be included in NODESPEC.COM. For details on the NODESTART.DAT format, please refer to the following section, "DATA STRUCTURES".

8.10 NODESPEC.COM

Node-specific startup procedure. Performs node-specific operations which can not be cluster-wide. Such operations typically include:

- Defining the node access rights via the logical name LAVC$ACCESS.

- Installing node-specific images (if necessary)

- Installing additional page/swap files

- Configuring special local devices (unless for some technical reason we must use sys$manager:SYCONFIG.COM)

- Setting up terminal ports, printers etc.

For device set-up, a call to LAVCDEVSET.COM should be used. This allows the procedure to be executed (tested) on a running system, with devices already allocated to users.

The procedure master copy is maintained in lavc$data:[node]; updates are (daily) propagated to each node's [LAVCDATA.node] directory.

8.11 MGMSYSCPY.COM (available)

Procedure used by the system manager to propagate a copy of the VMS operating system to a disk on a new LAVC member. It assumes the new LAVC member is booted diskless, and the procedure is executed on the target node. The procedure:

1. Prompts for the node parameters

2. Uses either BACKUP (to copy the entire VMS tree) or the parameter-directed VMS procedure sys$update:VMSKITBLD.COM to copy the selected basic VMS subset to the target disk.

3. Creates new page/swap/dump files

4. Creates the initial version of sys$system:MODPARAMS.DAT. On VAX workstations this file already contains the requirements for the UIS (VWS) software.

5. Creates a file sys$manager:SYSTARTUP.INI to be executed at the first boot from the created system disk.
This procedure will:

   - Configure the DECNET database (incl. a copy of the known nodes)
   - Execute sys$update:AUTOGEN.COM to reboot the system

6. Copies the LAVC generic startup file SYSTARTUP.COM into sys$common:[sysexe]SYSTARTUP.COM

7. Copies the LAVC generic shutdown file SYSHUTDWN.COM into sys$common:[sysexe]SYSHUTDWN.COM

8. Copies the required subset of the LAVC data base ([LAVCCOM], [node]) to the target disk.

9. Logs the action in the LAVC data base.

8.12 MGMDATUPD.COM (available)

The procedure checks the LAVC data base for modified files, and propagates any such files to the SPARE node. Files existing in an individual node's LAVC data base subset are updated as well. The procedure action is logged in the LAVC data base.

The procedure is intended to run in BATCH mode, presumably during night hours, after the system backups complete. Procedure arguments (defaults are hardcoded):

- P1 - MASTER node disk to use as the data base source
- P2 - SPARE node disk to update
- P3 - list of the local node's disks to update
- P4 - options (SUBMIT)

8.13 MGMSYSUPD.COM (not yet available)

The procedure checks the LAVC MASTER node system tree for new / modified system files and propagates any such updated files to the selected node(s). The procedure action is logged in the LAVC data base. The procedure does not handle system data (SYSUAF.DAT, ACCOUNTNG.DAT etc).

The procedure prompts for arguments:

- P1 - modification date (modified/since=date)
- P2 - MASTER node to be used as the system files source
- P3 - target node(s)
- P4 - options

8.14 MGMREMCMD.COM

The procedure executes DCL command(s) on remote node(s) using the DECNET SET HOST facility.
The procedure prompts for arguments:

- P1 - username|password to use for log-in
- P2 - list of DCL commands/data separated by "|"
- P3 - list of nodes separated by "|"

8.15 MGMTAILOR.COM

The procedure tailors the VMS operating system, using a tailoring control file. The DELETE operation deletes all the "target" files prescribed by the control file, except for those which can not be found on the "source" VMS tree (and thus could not be restored later). The RESTORE operation restores all the files prescribed by the control file, unless such a file already exists on the target.

The procedure prompts for arguments:

- P1 - operation: DELETE or RESTORE
- P2 - "source" disk with a VMS directory tree
- P3 - "target" disk with a VMS directory tree
- P4 - tailoring control file

The tailoring control file [.TLR] format is identical to the standard VMS tailoring files:

[directory]filename.type

Any line starting with "$" is considered a direct DCL command; comments may be included using the prefix "$!".

8.16 MGMQUESTA.COM

The procedure is used to restart the JOB/QUEUE system when it hangs due to the loss of the disk holding JBCSYSQUE.DAT (or for any other reason). It must be executed with ALL privileges enabled. The procedure:

- Aborts the JOBCTL process
- Aborts any print symbionts found
- Starts the JOBCTL process using sys$system:STARTUP JOBCTL
- Executes LAVC$DATA:[LAVCCOM]LAVCPRINT.COM
- Executes LAVC$DATA:[LAVCCOM]LAVCBATCH.COM

8.17 MGMCOLLECT.COM

Procedure used to (daily) collect accounting and other system management data. Currently, the following files are handled:

- sys$manager:ACCOUNTNG.DAT --> target:ACC_MAY05.node
- sys$manager:OPERATOR.LOG --> target:OPR_MAY05.node
- sys$errorlog:ERRLOG.SYS --> target:ERR_MAY05.node

The procedure argument P1 defines the target device:[directory] for the collected data.
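One collection step could be sketched in DCL along the following lines. This is an assumption-laden illustration, not the actual MGMCOLLECT.COM: the date parsing, label padding, and file names are invented for the example.

```
$! Copy and purge one accounting file -- illustrative sketch only.
$ node  = F$GETSYI ("NODENAME")
$ t     = F$EDIT (F$TIME (), "TRIM")        ! e.g. "5-MAY-1988 00:00:01.10"
$ day   = F$ELEMENT (0, "-", t)             ! "5"
$ IF F$LENGTH (day) .EQ. 1 THEN day = "0" + day
$ label = F$ELEMENT (1, "-", t) + day       ! "MAY05"
$ SET ACCOUNTING /NEW_FILE                  ! close the live file, open a new one
$ COPY sys$manager:ACCOUNTNG.DAT;-1 'P1'ACC_'label'.'node'
$ DELETE sys$manager:ACCOUNTNG.DAT;-1
```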
The procedure performs all the actions necessary to open new files. After a successful copy, the original data are deleted to assure the system disk will not overflow. The target files are labeled with the date of collection. It is assumed that the procedure is executed at midnight; thus a file labeled MAY05 will contain data from BEFORE MAY 05.

9 DATA STRUCTURES

9.1 NODESTART.DAT

The node-specific startup data file NODESTART.DAT contains the list of command procedures to invoke during node startup, in the format:

f|UIC|pathname|description

where:

* f is a severity flag for the particular file [I|E|W|F]. The flag is used to classify the severity of the error if the particular startup file can not be found. In special instances, the flag "$" signals that the entire line should be executed as a DCL command. This facility is intended for exceptions only.

* UIC - a non-empty field requests a DETACHED process to execute the particular procedure, under the prescribed UIC. This feature is intended for GROUP startup procedures, and for processes that must be postponed and/or may be executed in parallel to speed up the startup.

* pathname is the full pathname of a command procedure to execute; examples are lavc$data:[LAVCCOM]LAVCDISKS, or sys$system:NETCONFIG.COM.

* description is an explanatory text, displayed during startup or in error messages.

NODESTART.DAT may contain comments, flagged by an exclamation point "!" in column one. Most of the startup procedures listed in a node's NODESTART.DAT will be located either in the cluster-wide lavc$data:[LAVCCOM], or (for DIGITAL layered products) under sys$manager.
However, any PRODUCT startup files will be located in the particular PRODUCT directories.

Example of NODESTART.DAT:

! NODESTART.DAT - Node specific startup files for node CHOPIN
! History:
! 02/09/87,,,MXB, Example set-up
!
E||lavc$data:[LAVCCOM]LAVCDISKS.COM|Mounting cluster disks
E||lavc$data:[LAVCCOM]LAVCLOGNM.COM|Creating system logical names
E||lavc$data:[CHOPIN]NODESPEC.COM|Configuring terminal ports
W||lavc$data:[LAVCCOM]LAVCPRINT.COM|Starting device queues
W||lavc$data:[LAVCCOM]LAVCBATCH.COM|Starting batch queues
E||sys$manager:STARTNET.COM|Starting DECNET
F||sys$manager:STARTVWS.COM|Starting Vax Workstation Software
W||s7kdsk:[GSYS]GSSTARTUP.COM|Starting S7000 software
W|[230,1]|lavc$data:[GRP230]GRP230START.COM|S7K group startup
W|[250,1]|lavc$data:[GRP250]GRP250START.COM|S5K group startup
W||comdsk:[NETDIST.MISC]NETSTART.COM|Starting TCP/IP
!
! end of NODESTART.DAT

10 ERROR-EXCEPTION HANDLING

Startup procedures report errors using the standard VMS format:

%fac-sev-ident, Message text

For the system startup procedures, the facility code is STARTUP. The LAVC common startup procedures use the following global symbols to report errors back to the generic LAVCSTART.COM (SYSTARTUP.COM):

- sysstawar == sysstawar + "warning description" + systaCRLF
- sysstaerr == sysstaerr + "error description" + systaCRLF
- sysstafat == sysstafat + "fatal error description" + systaCRLF

The LAVCSTART.COM procedure checks the symbols above; if any of the symbols is not empty, it is used in the creation of the notification message, and on fatal errors the startup procedure disables non-operator logins.
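As an illustration of the convention (the device name, label, and message text below are assumed, not taken from the actual procedures), a common startup procedure could append a warning to the global symbols like this:

```
$! Report a non-fatal mount failure back to LAVCSTART.COM -- sketch only.
$ MOUNT /SYSTEM DUA1: USERDISK
$ IF .NOT. $STATUS THEN -
      sysstawar == sysstawar + "%STARTUP-W-NOMOUNT, USERDISK not mounted" + systaCRLF
```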