HP DCE for OpenVMS Alpha and OpenVMS I64
Installation and Configuration Guide




Chapter 4
Configuring a DCE Cell

This chapter describes the steps necessary to set up a DCE cell and introduces the DCE system configuration utility for HP DCE for OpenVMS Alpha and OpenVMS I64. Note that DCE must be configured before it can be used.

4.1 Overview of the DCE Cell

A cell is the basic DCE unit. It is a group of networked systems and resources that share common DCE services. Usually, the systems in a cell are in the same geographic area, but cell boundaries are not limited by geography. A cell can contain from one to several thousand systems. The boundaries of a cell are typically determined by its purpose, as well as by security, administrative, and performance considerations.

A DCE cell is a group of systems that share a namespace under a common administration. The configuration procedure allows you to configure your system as a DCE client, create a new DCE cell, add a master Cell Directory Service (CDS) server, add a replica CDS server, and add a Distributed Time Service (DTS) local server. When you create a new cell, you automatically configure a Security server.

You do not need to create a DCE cell if you are using only the DCE Remote Procedure Call (RPC) and if your applications use only explicit RPC string bindings to provide the binding information that connects servers to clients. If there are other systems in your network already using DCE services, there may be an existing cell that your system can join. If you are not sure, consult your network administrator to find out which DCE services may already be in use in your network.

At a minimum, a cell configuration includes the DCE Cell Directory Service, the DCE Security Service, and the DCE Distributed Time Service. One system in the cell must provide a DCE Directory Service server to store the cell namespace database. You can choose to install both the Cell Directory Server and the Security Server on the system from which you invoked the procedure, or you can split the two servers and put them on different systems.

Note

You must run the installation and configuration procedures on the system where you are creating a cell before you install and configure DCE on the systems that are joining the cell.

4.1.1 Creating a Cell

All DCE systems participate in a cell. If you are installing DCE and there is no cell to join, the first system on which you install the software is also the system on which you create the cell. Remember that this system is also the DCE Security Server. You can also make this system your Cell Directory Server.

When you create a cell, you must name it. The cell name must be unique across your global network. The name is used by all cell members to indicate the cell in which they participate. The configuration procedure provides a default name that is unique and is easy to remember. If you choose a name other than the default, the name must be unique. If you want to ensure that separate cells can communicate, the cell name must follow BIND or X.500 naming conventions.

4.1.2 Joining a Cell

Once the first DCE system is installed and configured and a cell is created, you can install and configure the systems that join that cell. During configuration, you need the name of the cell you are joining. Ask your network administrator for the cell name.

4.1.3 Defining a Cell Name

You need to define a name for your DCE cell that is unique in your global network and is the same on all systems that participate in this cell. The DCE naming environment supports two kinds of names: global names and local names. All entries in the DCE Directory Service have a global name that is universally meaningful and usable from anywhere in the DCE naming environment. All Directory Service entries also have a cell-relative name that is meaningful and usable only from within the cell in which that entry exists.

If you plan to connect this cell to other DCE cells in your network either now or in the future, it is important that you choose an appropriate name for this cell. You cannot change the name of the cell once the cell has been created. If you are not sure how to choose an appropriate name for your DCE cell, consult Chapter 9 of the HP DCE for OpenVMS Alpha and OpenVMS I64 Product Guide, or the section on global names in the OSF DCE Administration Guide --- Introduction.

Before you can register the cell in X.500, you must ensure that the HP X.500 Directory Service kit is installed on your CDS server.

HP recommends that you use the following convention to create DCE cell names: the Internet name of your host system, followed by the suffix -cell, followed by the Internet address of your organization. For example, if the Internet name of your system is myhost, and the Internet address of your organization is smallco.bigcompany.com, your cell name, in DCE syntax, would be myhost-cell.smallco.bigcompany.com. This convention helps ensure that the cell name is both unique and easy to remember.

If a cell name is already defined in a previously existing DCE system configuration, do not change it unless you are removing this system from the cell of which it is currently a member and joining a different cell.

When the configuration procedure prompts you for the name of your DCE cell, type the cell name without the /.../ prefix; the prefix is added automatically. For example, if the full global name selected for the cell, in DCE name syntax, is /.../myhost-cell.smallco.bigcompany.com, enter myhost-cell.smallco.bigcompany.com.
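
For example, the cell name prompt might look like the following (the exact prompt wording and the default shown here are illustrative):


Please enter the name of your DCE cell [myhost-cell.smallco.bigcompany.com]: 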

4.1.4 Defining a Host Name

You need to define a name for your system that is unique within your DCE cell. You should use the default host name, which is the Internet host name (the name specified before the first dot (.)). The following example shows the default host name derived from the Internet name myhost.mycompany.com.


Please enter your DCE host name [myhost]: 

4.1.5 Intercell Naming Using DNS

This section provides tips on defining a cell name in the Domain Name System (DNS). Names in DNS are associated with one or more data structures called resource records. The resource records define cells and are stored in a data file. For TCP/IP Services for OpenVMS, this file is called SYS$SPECIFIC:[TCPIP$BIND]<domain name>.DB.

If you are using a UNIX DNS BIND server, this file is called /etc/namedb/hosts.db. To create a cell entry, you must edit the data file and create two resource records for each CDS server that maintains a replica of the cell namespace root. The following example shows a cell called ruby.axpnio.dec.com. The cell belongs to the BIND domain axpnio.dec.com. Host alo010.axpnio.dec.com is the master CDS server for the ruby.axpnio.dec.com cell. The BIND server must be authoritative for the domains of the cell name. The BIND master server requires the following entries in its data file:


alo010.axpnio.dec.com IN A 25.0.0.149 
ruby.axpnio.dec.com IN MX 1 alo010.axpnio.dec.com 
ruby.axpnio.dec.com IN TXT "1 c8f5f807-487c-11cc-b499-08002b32b0ee 
Master /.../ruby.axpnio.dec.com/alo010_ch 
c84946a6-487c-11cc-b499-08002b32b0ee alo010.axpnio.dec.com" 

Note

TXT records must span only one line. The third entry above occupies three lines here only so that the information included in the TXT record is readable; in the actual data file, you must enter the entire record on a single line (widening your editor window can help). Also ensure that the quotation marks are placed correctly and that the host name appears at the end of the record.

The information to the right of the TXT column in the Hesiod text entry (that is, 1 c8f5f807-48...) comes directly from the cdscp show cell /.: as dns command. For example, to obtain the information that goes in the ruby.axpnio.dec.com text record (TXT), go to a host in the ruby cell and enter the cdscp show cell /.: as dns command. When the system displays the requested information, cut and paste it into the record. This method ensures that you do not introduce any typing errors.
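
For example, on an OpenVMS host in the ruby cell, the command might be entered as follows (the exact invocation can vary; the output, not shown here, supplies the text placed between the quotation marks in the TXT record):


$ CDSCP SHOW CELL /.: AS DNS 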

To ensure that the records that you have entered are valid, restart the DNS Bind server process.
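For example, with TCP/IP Services for OpenVMS you can restart the BIND server by running the following command procedures (the procedure names assume a standard TCP/IP Services installation):


$ @SYS$STARTUP:TCPIP$BIND_SHUTDOWN 
$ @SYS$STARTUP:TCPIP$BIND_STARTUP 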

4.1.6 Intercell Naming Using LDAP/X.500

This section provides tips on defining a cell name in LDAP/X.500.

Cells that will communicate using intercell operations must be part of the same LDAP/X.500 namespace; that is, they must share a common root in the namespace tree. For example, the cells /c=us/o=hp/ou=laser-cell and /c=us/o=hp/ou=ruby-cell share the root /c=us/o=hp, and would be able to participate in intercell communications.

If your cell is part of an X.500 namespace, answer Yes to the question "Do you want to register the DCE cell in X.500?". If your cell is part of an LDAP namespace, answer Yes to the question "Do you want to register the DCE cell in LDAP?". Additional information about intercell operations can be found in Chapter 9 of the HP DCE for OpenVMS Alpha and OpenVMS I64 Product Guide.

4.2 The DCE System Configuration Utility --- DCE$SETUP.COM

The DCE$SETUP command procedure begins the configuration process. Many of the system configuration utility prompts have default values associated with them. The default responses are based on your existing configuration, if you have one. Otherwise, default values for the most common DCE system configurations are provided. At each prompt, press RETURN to take the default displayed in brackets, type a question mark (?) for help, or supply the requested information.

The system configuration utility sets up the DCE environment on your node so that you can use DCE services. The system configuration utility leads you through the process of creating or joining a cell.

Note

If you are installing HP DCE for OpenVMS Alpha Version 3.2 over a previous version of DCE (V3.0 or V3.1 for OpenVMS Alpha), you do not have to reconfigure DCE after the installation. Before the installation, stop the DCE daemons with the following command:


$ @SYS$MANAGER:DCE$SETUP CLEAN 

Then, after the installation, enter the following command:


$ @SYS$MANAGER:DCE$SETUP START 

You must configure DCE if you are installing it for the first time.

4.2.1 Configuring LDAP, NSI, and GDA

The Lightweight Directory Access Protocol (LDAP) provides access to X.500 directory services without the overhead of the full Directory Access Protocol (DAP). The simplicity of LDAP, along with the powerful capabilities it inherits from DAP, makes it the de facto standard for Internet directory services over TCP/IP.

Inside a cell, a directory service is accessed mostly through the name service interface (NSI) implemented as part of the run-time library. Cross-cell directory service is controlled by a global directory agent (GDA), which looks up foreign cell information on behalf of the application in either the Domain Name System (DNS) or an X.500 database. Once that information is obtained, the application contacts the foreign CDS in the same way as the local CDS.

Once LDAP is configured, applications can request directory services from either CDS or LDAP or both. LDAP is provided as an optional directory service that is independent of CDS and duplicates CDS functionality. LDAP is for customers looking for an alternative to CDS that offers TCP/IP and Internet support.

With LDAP directory service available, GDA can look up foreign cell information by communicating through LDAP to either an LDAP-aware X.500 directory service or a standalone LDAP directory service, in addition to DNS and DAP.

Note that DCE for OpenVMS provides its own client implementation of LDAP. Prior to installing DCE, a DCE administrator must obtain LDAP server software and install it as an LDAP server in the environment. Next, a DCE administrator must choose LDAP during the DCE installation and configuration procedure and explicitly configure the LDAP directory service for a cell.

4.2.2 Kerberos 5 Security

The DCE authentication service is based on Kerberos 5. The Kerberos Key Distribution Center (KDC) is part of the DCE Security Server, secd. The authorization information that is created by the DCE for OpenVMS privilege server is passed in the Kerberos 5 ticket's authorization field.

DCE provides a Kerberos configuration program (DCE$KCFG.EXE) to assist in the interoperability between DCE Kerberos and standard Kerberos. For more information about the kcfg program, use the following two commands.

To display individual command switches and their arguments enter:


kcfg -? 

To display a short description of the command and what it does enter:


kcfg -h 

These commands provide information on configuration file management, principal registration, and service configuration.
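
Because kcfg is an image rather than a DCL verb, you would normally run it through a foreign command symbol. A minimal sketch, assuming the image resides in SYS$SYSTEM (quote the switches so DCL preserves their case):


$ KCFG :== $SYS$SYSTEM:DCE$KCFG.EXE 
$ KCFG "-h" 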

Note

The dcesetup configuration script sets all tickets as forwardable by default. If tickets are not set as forwardable, the Key Distribution Center (KDC) server does not provide authentication and authorization information to the telnet process. The kinit -f command marks tickets as forwardable.

All machines within a cell that plan to use Kerberos-enabled tools need to check and possibly modify the registry and the krb5 configuration with the kcfg executable.

To make sure that Kerberos Version 4 interoperates with Kerberos Version 5, an administrator can use the kcfg -k command to change krb.conf entries. This command needs to be entered on each machine in the cell.

The registry must contain a principal entry that describes the host machine of the KDC server. This principal entry is of the form host/<hostname>. The principal and the associated keytable entry can be created with kcfg -p, which verifies that the host entry exists and creates it if it does not.
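
For example, using the foreign command symbol shown earlier, the host principal might be registered as follows (illustrative; additional switches or prompts may apply in your environment):


$ KCFG "-p" 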

4.2.3 Starting the System Configuration Utility

You must be logged in as a privileged user. The SHOW command requires only NETMBX and TMPMBX privileges. All other commands require WORLD, SYSPRV, CMKRNL, and SYSNAM privileges. The CONFIG command requires BYPASS privileges.
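
You can check the privileges your process currently holds before starting the utility; for example:


$ SHOW PROCESS/PRIVILEGES 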

You can use the same command to perform an initial configuration or to reconfigure DCE. See the Appendix for several sample configurations. To start the system configuration utility, at the DCL prompt enter the following command:


$ @SYS$MANAGER:DCE$SETUP 

The DCE System Management Main Menu appears:


 
 
                    DCE System Management Main Menu 
                           DCE for OpenVMS Alpha V3.2 
 
      1)  Configure     Configure DCE services on this system 
      2)  Show          Show DCE configuration and active daemons 
      3)  Stop          Terminate all active DCE daemons 
      4)  Start         Start all DCE daemons 
      5)  Restart       Terminate and restart all DCE daemons 
      6)  Clean         Terminate all active DCE daemons and remove 
                         all temporary local DCE databases 
      7)  Clobber       Terminate all active DCE daemons and remove 
                         all permanent local DCE databases 
      8)  Test          Run Configuration Verification Program 
 
      0)  Exit          Exit this procedure 
      ?)  Help          Display helpful information 
 
Please enter your selection: 

Enter 1 to view the DCE Configuration Menu. To skip the previous menu and go directly to the DCE Configuration Menu, enter the following command:


$ @SYS$MANAGER:DCE$SETUP CONFIG 

For information on how to configure a DCE cell or how to add a client, see Chapter 5. For information on modifying an existing configuration, see Chapter 6.


Chapter 5
Configuring DCE

This chapter explains how to create a cell and configure the Security server and CDS server on the same system. It also discusses how to configure a client system into an existing DCE cell.

5.1 DCE System Management Command Procedure

Starting with DCE Version 3.0, the DCE system management command procedure SYS$MANAGER:DCE$SETUP.COM has been changed. These changes are described in the following sections.

An RPC-only configuration can be started with the startup command procedure described in the next section. In DCE for OpenVMS Version 1.5, DCE$SETUP was modified not to stop RPCD during configuration; however, changes in the DCE daemons required reverting to the previous behavior, so DCE$SETUP once again stops RPCD during configuration. DCE$SETUP.COM has also been rewritten to add the new functionality for DCE R1.2.2 and to more closely match the configuration program for DCE for Tru64 UNIX.

5.1.1 Starting and Stopping the RPC Daemon

The RPC daemon can be started and stopped with the command files DCE$RPC_STARTUP.COM and DCE$RPC_SHUTDOWN.COM. These files are located in SYS$COMMON:[SYSMGR].

To start the RPC daemon, execute DCE$RPC_STARTUP.COM. You can specify the following option:


[NO]CONFIRM          Turns user prompting on or off.  CONFIRM is the default. 

To stop the RPC daemon, execute DCE$RPC_SHUTDOWN.COM. You can specify the following options in any order:


[NO]CONFIRM          Turns user prompting on or off.  CONFIRM is the default. 
CLEAN                Deletes all entries from the RPC endpoint database. 
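
For example, the following commands start the RPC daemon without prompting, and later stop it and clear the endpoint database (a sketch based on the options listed above):


$ @SYS$MANAGER:DCE$RPC_STARTUP NOCONFIRM 
$ @SYS$MANAGER:DCE$RPC_SHUTDOWN NOCONFIRM CLEAN 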

Note

Do not stop the RPC daemon if any RPC applications are running on the system.

5.1.2 Limiting RPC Transports

The RPC daemon can limit the protocols used by RPC applications. To restrict the protocols that can be used, define the logical name RPC_SUPPORTED_PROTSEQS to contain the permitted protocol sequences, separated by colons. Valid protocol sequences are ncadg_ip_udp, ncacn_ip_tcp, and ncacn_dnet_nsp. For example:


$ DEFINE RPC_SUPPORTED_PROTSEQS "ncadg_ip_udp:ncacn_ip_tcp" 

This prevents applications and servers from registering endpoints that utilize DECnet.
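
To remove the restriction and again allow all supported protocols, deassign the logical name; for example:


$ DEASSIGN RPC_SUPPORTED_PROTSEQS 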

5.1.3 Logical Names Created During Configuration

The configuration process creates the following logical names:
Logical Name           Description

DCE                    Defines a search list pointing to the directories
                       SYS$COMMON:[DCE$LIBRARY] and SYS$LIBRARY. These
                       directories contain the Application Developer's Kit
                       include files and other files for creating DCE
                       applications.

DCE$COMMON,            Points to the directory SYS$COMMON:[DCELOCAL]. This
DCE_COMMON             directory holds DCE-specific files common to all
                       DCE hosts in a cluster.

DCE$LOCAL,             Points to the directory DCE$SPECIFIC:. This
DCE_LOCAL              directory defines the top of the DCE directory
                       hierarchy.

DCE$SPECIFIC           Points to the directory SYS$SPECIFIC:[DCELOCAL].
                       This directory is for internal use only.

DCE$SYSROOT            Points to the directories DCE$SPECIFIC: and
                       DCE$COMMON:. This logical name is used to find DCE
                       files that may be in either the system-specific or
                       cluster-general trees.

TCL_LIBRARY            Points to the directory DCE_COMMON/TCL (UNIX file
                       syntax). This directory holds files that allow the
                       TCL interface to the DCE command line programs to
                       function.

The logical names containing a dollar sign define OpenVMS-style directory syntax. The logical names containing underscores define UNIX-style directory syntax (for use by various DCE internal applications).
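
After configuration completes, you can confirm that these logical names are defined; for example:


$ SHOW LOGICAL DCE* 
$ SHOW LOGICAL TCL_LIBRARY 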

5.1.4 Configuring on a VMScluster

You must configure each node in a VMScluster separately by entering the following command on each node:


    $ @SYS$MANAGER:DCE$SETUP CONFIG 

5.2 Overview of New Cell Configuration

To configure a new cell, you must complete the following steps:

  1. To begin your initial cell creation and server configuration, invoke the DCE configuration utility.
  2. If you are creating a new cell or adding a CDS server, choose option 6 (Terminate all active DCE daemons and remove all temporary local DCE databases) to stop the DCE daemons in a controlled manner. If you have not already done so, back up your security and CDS databases before proceeding.
  3. Choose option 1 from the DCE Setup Main Menu to configure DCE services on your system. You must have system privileges to modify the DCE system configuration.
    The procedure displays the following menu:


                DCE Configuration Menu 
                DCE for OpenVMS Alpha V3.2 
     
        1)  Client           Configure this system as a DCE client 
        2)  New Cell         Create a new DCE cell 
        3)  CDS Server       Add Master CDS Server 
        4)  Modify           Modify DCE cell configuration 
        5)  RPC_Only         Configure this system for RPC only 
     
        0)  Exit             Exit this procedure 
        ?)  Help             Display helpful information 
     
    Please enter your selection: 
    

    Table 5-1 provides descriptions of the options available on the DCE Configuration Menu.

    Table 5-1 Configuration Menu Options

    Option      Description

    Client      Provides full DCE RPC services, client services for CDS
                and Security, and optional time services. A DCE client
                system must join an existing DCE cell with a security
                registry and a CDS master server available on other
                systems in the cell.

    New Cell    Provides full DCE RPC services, a security registry
                server for the cell, a CDS master server, a DTS server,
                and the NSI agent for name service independent access
                to directory services from PC client systems. There can
                be only one security registry and CDS master server in
                a cell, although they need not reside on the same host.

    CDS Server  Provides a DCE client system with a CDS master server
                added. This option is used if a split server
                configuration is desired and the new cell (on another
                system) was configured without a CDS master server.

    Modify      Provides a submenu of additional configuration options
                that are available after the initial configuration has
                completed.

    RPC_Only    Provides a subset of the DCE RPC services. If DCE is
                installed on an OpenVMS Alpha system running Version
                7.2-1 or higher, NTLM security may be utilized for
                authenticated RPC requests. With an RPC-only
                configuration, no RPC name service interface routines
                are available. This configuration will, however, allow
                applications to communicate if full string bindings are
                supplied by the RPC client, or if the client requests
                the port number to complete the partial string binding
                from the endpoint mapper (the DCED daemon).

  4. Choose option 2 to create a new DCE cell.
  5. At each prompt, you can press RETURN to take the default displayed in brackets or enter a question mark (?) for help. When prompted, select a cell name and a host name; these names are used again when you configure DCE client systems.
  6. The configuration utility asks if you want to configure the host as a CDS server. Answer Y to configure the CDS and security servers on the same system. Answer N to perform a split server installation in which you configure the security server on the current host and the CDS server on a different host.
  7. If you answered Y to configure the CDS and security servers on the same system, the utility asks:


    Will there be any DCE pre-R1.1 CDS servers in this cell? (YES/NO/?) [N]: 
    

    If your cell will be running any CDS servers based on OSF DCE Release 1.0.3a or lower (equivalent to HP DCE for OpenVMS Version 1.5 or lower), you should answer Y. The configuration utility sets the directory version number to 3.0 for compatibility with pre-R1.1 servers. This setting disables the use of OSF DCE Release 1.1 features such as alias cells, CDS delegation ACLs, and so on.
    If all CDS servers in your cell will be based on HP DCE for OpenVMS Version 3.0 (or higher) and based on OSF DCE Release 1.1 (or higher), answer N.
    The configuration utility sets the directory version number to 4.0 for compatibility with HP DCE for OpenVMS Version 3.0 CDS servers (OSF DCE Release 1.2.2). This enables the use of OSF DCE Release 1.1 features such as alias cells, CDS delegation ACLs, and so on, as well as OSF DCE Release 1.2.2 features. Once the directory version is set to 4.0, you cannot set it back to 3.0.

  8. You are prompted to confirm the system time; it is important that you check the current time before you respond.
  9. The configuration utility prompts you for the domain name and the DNS server address.
  10. If DECnet/OSI is installed on your system, the configuration utility displays the following message and then asks several questions about configuring a DCE Distributed Time Service server on your system.

