HP DCE for OpenVMS Alpha and OpenVMS I64
Product Guide



1.11.5 DCL Interfaces to DCE Tools

DCE is multiplatform software designed to be used and managed on many different operating systems. For that reason, HP has worked to keep as much of the standard OSF DCE interface available as possible within the OpenVMS environment. For example, you can define foreign commands to execute DCE tools and utilities as you do on a UNIX system.
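For example, a foreign command for an RPC control program could be defined with a DCL symbol assignment. The image name shown below is hypothetical; substitute the actual location of the utility on your system:

$ ! Define a foreign command so the utility can be invoked by name
$ RPCCP :== $SYS$SYSTEM:DCE$RPCCP.EXE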

Note that the OpenVMS operating system does not differentiate between lowercase and uppercase characters in commands, but operating systems based on UNIX are case-sensitive, and many of the standard DCE commands distinguish lowercase from uppercase. Literal strings that appear in text, examples, syntax descriptions, and function descriptions must therefore be typed exactly as shown.
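Because DCL converts unquoted command input to uppercase, enclose case-sensitive arguments in quotation marks when you pass them on the command line. A brief sketch, assuming the RPCCP foreign command defined above:

$ ! Quoting preserves the lowercase command text the utility expects
$ RPCCP "show mapping"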

To assist users more accustomed to OpenVMS syntax and conventions, HP also provides DCL interfaces for a number of DCE tools.

Note that you can use these interfaces only on OpenVMS DCE systems; OSF DCE documentation includes no DCL interface information. For information about the available DCL interfaces, refer to the chapter on DCL command interfaces to DCE tools in the HP DCE for OpenVMS Alpha and OpenVMS I64 Reference Guide. Some of these interfaces can be enabled during installation and configuration.

1.11.6 Integrated Login

HP provides Integrated Login, which combines the DCE and OpenVMS login procedures. See Chapter 8 for more information.

1.11.7 Object-Oriented RPC

IDL has been extended to support a number of C++ language syntax features that provide a distributed object framework. The DCE RPC runtime environment now supports C++ bindings to remote objects. The combination of these new features creates an Object-Oriented RPC. (See Chapter 12 for more information.)


Chapter 2
DCE System Configuration

HP DCE for OpenVMS Alpha and OpenVMS I64 includes a system configuration utility, SYS$MANAGER:DCE$SETUP.COM, that is used after the kit installation to configure and start the DCE services. The HP DCE for OpenVMS Alpha and OpenVMS I64 Installation and Configuration Guide provides important information about setting up your initial DCE environment. This chapter provides general information about the DCE configuration utility options and provides details about the clobber option.

HP recommends that you use only DCE$SETUP.COM, the DCE system configuration utility, to reconfigure and restart the HP DCE services. This utility ensures that DCE operations are properly configured and sequenced. For example, instead of starting the RPC daemon (dced) directly, use DCE$SETUP.COM to start and stop daemons.

The DCE system configuration utility invokes a number of other utilities while configuring and starting the DCE services, and it creates a log file, SYS$MANAGER:DCE$SETUP.LOG. This log file can be helpful in diagnosing problems that occur during product installation or subsequent reconfiguration.
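If a problem occurs, you can review this log with ordinary DCL commands; for example:

$ TYPE/PAGE SYS$MANAGER:DCE$SETUP.LOG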

Note

In a VMScluster environment, you must configure each VMScluster node separately. Although a DCE kit can be installed clusterwide, DCE services need specific DECnet and/or TCP/IP addresses and endpoints for each host. You must configure each VMScluster node that will be part of a DCE cell. Configure the VMScluster nodes exactly as single nodes are configured.

2.1 Starting and Stopping the RPC Daemon

Starting with DCE Version 3.0, the following enhancements have been made to the DCE system management command procedure, DCE$SETUP.COM.

The RPC daemon can be started or stopped with the two new command files DCE$RPC_STARTUP.COM and DCE$RPC_SHUTDOWN.COM, which are located in SYS$COMMON:[SYSMGR].

To start the Remote Procedure Call daemon, complete the following:

  1. Run DCE$RPC_STARTUP.COM.
  2. Specify [NO]CONFIRM to turn user prompting on or off. CONFIRM is the default.
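For example, to start the daemon without user prompting:

$ @SYS$COMMON:[SYSMGR]DCE$RPC_STARTUP.COM NOCONFIRM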

To stop the Remote Procedure Call daemon, complete the following:

  1. Run DCE$RPC_SHUTDOWN.COM.
  2. Specify the desired options, in any order.

Note

The RPC daemon must not be stopped if any DCE components or RPC applications are running on the system.
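Provided no DCE components or RPC applications are active, the daemon can be stopped the same way. Whether [NO]CONFIRM is accepted here is an assumption, paralleling the startup procedure's parameter:

$ @SYS$COMMON:[SYSMGR]DCE$RPC_SHUTDOWN.COM NOCONFIRM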

2.2 Limiting RPC Transports

The RPC daemon can limit which protocols RPC applications use. To restrict the protocols, define the logical name RPC_SUPPORTED_PROTSEQS to contain the valid protocol sequences separated by colons. Valid protocols are ncadg_ip_udp, ncacn_ip_tcp, and ncacn_dnet_nsp.

To prevent RPC applications from registering endpoints that use UDP/IP, use the following command:


 
   $ DEFINE RPC_SUPPORTED_PROTSEQS "ncacn_ip_tcp:ncacn_dnet_nsp"
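To check the current restriction, or to remove it so that all supported protocols are again available, the standard DCL logical name commands apply:

$ SHOW LOGICAL RPC_SUPPORTED_PROTSEQS
$ DEASSIGN RPC_SUPPORTED_PROTSEQS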

2.3 Using the DCE System Configuration Utility

To access the DCE system configuration utility menu, log in to the SYSTEM account and enter the following command:


$ @SYS$MANAGER:DCE$SETUP.COM 

The system configuration utility displays the following menu:


1)  Config      Configure DCE services on this system 
2)  Show        Show DCE configuration and active daemons 
3)  Stop        Terminate all active DCE daemons 
4)  Start       Start all DCE daemons 
5)  Restart     Terminate and restart all DCE daemons 
6)  Clean       Terminate all active DCE daemons and remove 
                all temporary local DCE databases 
7)  Clobber     Terminate all active DCE daemons and remove 
                all permanent local DCE databases 
8)  Test        Run Configuration Verification Program 
0)  Exit        Exit this procedure 
?)  Help        Display helpful information     
 
Please enter your selection number: 

To enter a system configuration menu command directly from the command line, type the following command:


$ @DCE$SETUP.COM command

where command is one of the system configuration commands described in Table 2-1.

Table 2-1 System Configuration Commands
config
    The config command modifies the DCE configuration. To use it, you must be logged in to the SYSTEM account or to an account with the same privileges. The utility displays the current system configuration and then prompts for changes; the default answers to the prompts depend on the existing configuration. To choose the default answer, press Return. You can also type a question mark (?) at any prompt to display help text, or enter new input.

    After you select all the services, the utility displays the new configuration and asks whether the permanent configuration database should be updated. The utility optionally starts all of the daemons for the configured services and runs the Configuration Verification Program (CVP).

show
    The show command displays the current DCE system configuration in read-only mode. You need WORLD privilege to execute this command. The HP DCE for OpenVMS Alpha and OpenVMS I64 Installation and Configuration Guide also provides information on this command.

stop
    The stop command terminates all active DCE daemons. You must have the SYSPRV privilege to use this command.

start
    The start command starts all DCE daemons based on the current DCE system configuration. You must have the SYSPRV privilege to use this command.

restart
    The restart command terminates all active DCE daemons and restarts them based on the current DCE system configuration. You must have the SYSPRV privilege to use this command.

clean
    The clean command terminates all active DCE daemons and deletes the temporary local databases associated with DCE services on this system. You must have the SYSPRV privilege to execute this command. After you execute this command, you must restart the DCE services and applications; to restart the daemons, use DCE$SETUP start.

clobber
    The clobber command terminates all active DCE daemons and deletes the temporary and permanent local databases associated with DCE services on this system, including the DCE system configuration files and any portion of the RPC name service database for the cell that is maintained on this system. You must have the SYSPRV privilege to execute this command.

    After you execute this command, you must reconfigure the services on this system, because clobber returns the system to the state it was in during the kit installation, before the initial DCE system configuration was performed. To reconfigure the services and restart the daemons, use DCE$SETUP config.

test
    The test command runs the Configuration Verification Program (CVP).

exit
    The exit command exits from the DCE System Configuration menu without executing an option.
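For example, to display the current configuration directly from DCL without entering the menu:

$ @SYS$MANAGER:DCE$SETUP.COM SHOW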

Implications of Using the clobber Command

Caution

The clobber command destroys a DCE cell. If you use it, you must reconfigure major portions of the cell. Using this command causes the following events:
  • All temporary and permanent DCE databases and files are deleted, including:
    • Configuration databases:
      DCE$LOCAL:[000000]DCE_CF.DB (permanent database)
      DCE$LOCAL:[000000]DCE_SERVICES.DB (permanent database)
      Loss of these databases means you must reconfigure the host by entering @SYS$MANAGER:DCE$SETUP CONFIG.
  • If the host on which the clobber command has been executed is the name service server for the cell, the namespace and all files are deleted.
    All name service entries and directories must be recreated. Reconfiguring DCE on this host with the command @SYS$MANAGER:DCE$SETUP CONFIG recreates the DCE entries and directories; users must recreate their own namespace entries and directories. You must restart the daemons either by responding YES at the configuration procedure's prompt, or by entering the command @SYS$MANAGER:DCE$SETUP START at a later time.
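Taken together, a minimal recovery sequence after a clobber looks like the following; answer YES when the configuration procedure asks whether to start the daemons, or run the start command later:

$ @SYS$MANAGER:DCE$SETUP CLOBBER
$ @SYS$MANAGER:DCE$SETUP CONFIG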

2.4 Kerberos

The DCE security server makes UDP port 88 (service name "kerberos5") available for use by native Kerberos clients for authentication.

Note

Kerberos realm names must match the cell name of the DCE security server.

Native kerberos5 clients have undergone minimal testing and are currently unsupported; however, there are no known problems in this area. If this interoperability is important to your site, you may want to try it.
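For a site that wants to experiment, the native client's Kerberos configuration must name the DCE cell as its realm, as the note above requires. The following hypothetical krb5.conf fragment illustrates this; the cell name example.com and the security server host secsrv.example.com are stand-ins for your own names:

# Hypothetical krb5.conf fragment; all names are illustrative
[libdefaults]
    default_realm = example.com

[realms]
    example.com = {
        # The DCE security server listens on UDP port 88
        kdc = secsrv.example.com:88
    }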


Chapter 3
Interoperability and Compatibility

This chapter describes interoperability and compatibility issues for HP DCE for OpenVMS Alpha and OpenVMS I64. Information is provided on the following topics:

  • Interoperability with other DCE systems
  • Interoperability with Microsoft RPC
  • Understanding and using OSF DCE and VMScluster technologies

3.1 Interoperability with Other DCE Systems

HP DCE for OpenVMS Alpha and OpenVMS I64 provides RPC interoperability with HP's other DCE offerings, with several restrictions. An HP DCE system must have at least one network transport in common with an HP DCE client or server in order to communicate with it. For example, an HP DCE client system that supports only the DECnet transport cannot communicate with a DCE server that supports only the Internet transports (TCP/IP and UDP/IP).

This release also provides RPC interoperability with other vendors' DCE offerings, subject to restrictions similar to those described above for HP's offerings.

The Interface Definition Language provides a data type, error_status_t, for communicating error status values in remote procedure calls. Data of type error_status_t is subject to translation to a corresponding native error code. For example, a "memory fault" error status value returned from an HP OSF/1 system to an OpenVMS system is translated into the OpenVMS error status value "access violation".

In some cases, information is lost in this translation. For example, an OpenVMS success or informational message is mapped to a generic success status value on other systems, because most non-OpenVMS systems do not use the same mechanism for successful status values and would interpret the value as an error code.

3.2 Interoperability with Microsoft RPC

DCE systems can interoperate with non-DCE systems that are running Microsoft RPC. Microsoft supplies a DCE-compatible version of remote procedure call software for systems running MS-DOS, Windows, or Windows NT. Microsoft RPC systems can also use a DCE name service, such as the Cell Directory Service (CDS): Microsoft RPC servers can export and import binding information, and Microsoft RPC clients can import binding information. Thus, DCE servers can be located and used by Microsoft RPC clients and, similarly, Microsoft RPC servers can be located and used by DCE clients.

HP DCE for OpenVMS Alpha and OpenVMS I64 includes a name service interface daemon (nsid), also known as the PC Nameserver Proxy Agent, that performs DCE name service clerk functions on behalf of Microsoft RPC clients and servers. Microsoft RPC does not include a DCE name service. Microsoft RPC clients and servers locate an nsid using locally maintained nsid binding information. The binding information consists of the transport over which the nsid is available, the nsid's host network address, and, optionally, the endpoint on which the nsid waits for incoming calls from Microsoft RPC clients and servers. You must provide the nsid's transport and host network address (and, optionally, the nsid's endpoint) to Microsoft RPC clients and servers that want to use the DCE Directory Service with Microsoft RPC applications.

Note

Although your DCE cell may have several NSI daemons running, Microsoft RPC users need the binding for only one nsid. The nsid you choose must be running on a system that belongs to the same DCE cell as the DCE systems with which Microsoft RPC systems will communicate.

You can obtain the nsid binding information by running the rpccp show mapping command on the system where the nsid is running. The following example shows how to enter this command on an OpenVMS Alpha system where this release is installed. The nsid bindings are those with the annotation NSID: PC Nameserver Proxy Agent V1.0. Select the appropriate endpoint from among these bindings. In the following example, the nsid binding for the TCP/IP network transport is ncacn_ip_tcp:16.20.16.141[4685].


$ rpccp
rpccp> show mapping


 mappings: 
 . 
 . 
 . 
  <OBJECT>          nil 
  <INTERFACE ID>    D3FBB514-0E3B-11CB-8FAD-08002B1D29C3,1.0 
  <STRING BINDING>  ncacn_ip_tcp:16.20.16.141[4685] 
  <ANNOTATION>      NSID: PC Nameserver Proxy Agent V1.0 
 
  <OBJECT>          nil 
  <INTERFACE ID>    D3FBB514-0E3B-11CB-8FAD-08002B1D29C3,1.0 
  <STRING BINDING>  ncacn_dnet_nsp:2.711[RPC03AB0001] 
  <ANNOTATION>      NSID: PC Nameserver Proxy Agent V1.0 
 . 
 . 
 . 

For more information on using PCs with DCE, see Distributing Applications Across DCE and Windows NT.

3.3 Understanding and Using OSF DCE and VMScluster Technologies

This section describes the following:

  • Similarities between VMScluster environments and DCE cells
  • Differences between VMScluster environments and DCE cells

3.3.1 Similarities Between VMScluster Environments and DCE Cells

VMScluster technology as implemented by OpenVMS systems provides some of the same features of distributed computing that OSF DCE provides. Many of the VMScluster concepts apply to DCE, and it is easy to think of a VMScluster system as being a type of DCE cell.

The following attributes are shared by DCE and VMScluster environments:

3.3.2 Differences Between VMScluster Environments and DCE Cells

VMScluster environments differ from DCE cells in two significant ways:

VMScluster environments support the concept of individual systems as nodes in the extended system. In DCE, individual systems are called hosts. In a VMScluster environment, each node effectively has two addresses: a network node address and the VMScluster alias address. These two addresses are used differently, as follows:

  • The network node address identifies one specific node and is used to communicate with that node alone.
  • The VMScluster alias address identifies the extended system as a whole; a connection directed to the alias address can be forwarded to any member node.

In DCE there is no such dual identity. All network addressing is done directly to a specified host. The DCE cell does not have a separate network address, and it does not perform any forwarding functions. To share resources across hosts, DCE applications can use replication (resource copies) or store the resources in the shared file system, DFS, if it is available.

The VMScluster environment connection-forwarding mechanism permits the entire extended system to appear on the network as a single addressable entity (the VMScluster alias address). Although DCE does not support a connection-forwarding mechanism, DCE can use the Remote Procedure Call (RPC) grouping mechanism to access shared resources in a distributed file system. This mechanism selects, from an available set, one host/server pair that provides access to the shared resource.
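As a sketch of this grouping mechanism, a cell administrator could place several hosts' server entries into a single group entry with rpccp; a client that imports from the group entry is then directed to one available member. The entry names below are illustrative, and the add member syntax should be verified against the rpccp documentation for your release:

$ rpccp
rpccp> add member -m /.:/hosts/hosta/app_server /.:/applications/app_group
rpccp> add member -m /.:/hosts/hostb/app_server /.:/applications/app_group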

