HP OpenVMS Programming Concepts Manual

6.9 Synchronizing System Services Operations

A number of system services, such as SYS$GETJPI and SYS$GETJPIW, can be executed either synchronously or asynchronously. The W at the end of the system service name indicates the synchronous version of the service.

The asynchronous version of a system service queues a request and immediately returns control to your program while the request completes. You can perform other operations while the system service executes. To avoid data corruption, do not read or write any of the buffers or item lists referenced by the system service call until the asynchronous portion of the call has completed. Further, do not use self-referential or self-modifying item lists.

Typically, you pass an event flag and a status block to an asynchronous system service. When the system service completes, it sets the event flag and places the final status of the request in the status block. Use the SYS$SYNCH system service to ensure that the system service has completed. You pass to SYS$SYNCH the event flag and status block that you passed to the asynchronous system service; SYS$SYNCH waits for the event flag to be set and then examines the status block to be sure that the system service rather than some other program set the event flag. If the status block is still zero, SYS$SYNCH waits until the status block is filled.

The following example shows the use of the SYS$GETJPI system service:


! Data structure for SYS$GETJPI 
   .
   .
   .
INTEGER*4 STATUS, 
2         FLAG, 
2         PID_VALUE 
! I/O status block 
STRUCTURE /STATUS_BLOCK/ 
 INTEGER*2 JPISTATUS, 
2          LEN 
 INTEGER*4 ZERO /0/ 
END STRUCTURE 
RECORD /STATUS_BLOCK/ IOSTATUS 
   .
   .
   .
! Call SYS$GETJPI and wait for information 
STATUS = LIB$GET_EF (FLAG) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
STATUS = SYS$GETJPI (%VAL(FLAG), 
2                    PID_VALUE, 
2                    , 
2                    NAME_BUF_LEN, 
2                    IOSTATUS, 
2                    ,) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
   .
   .
   .
STATUS = SYS$SYNCH (%VAL(FLAG), 
2                   IOSTATUS) 
IF (.NOT. IOSTATUS.JPISTATUS) THEN 
  CALL LIB$SIGNAL (%VAL(IOSTATUS.JPISTATUS)) 
END IF 
 
END 

The synchronous version of a system service acts as if you had used the asynchronous version followed immediately by a call to SYS$SYNCH; however, it behaves this way only if you specify a status block. If you omit the status block, the result is as though you called the asynchronous version followed by a call to SYS$WAITFR. Regardless of whether you use the synchronous or asynchronous version of a system service, if you omit the efn argument, the service uses event flag 0.
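
For comparison, the following fragment shows the synchronous equivalent of the earlier example; it uses the same declarations, and only the service name changes, because SYS$GETJPIW accepts the same arguments as SYS$GETJPI:

! Call SYS$GETJPIW; control returns only after the request completes 
STATUS = SYS$GETJPIW (%VAL(FLAG), 
2                     PID_VALUE, 
2                     , 
2                     NAME_BUF_LEN, 
2                     IOSTATUS, 
2                     ,) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
IF (.NOT. IOSTATUS.JPISTATUS) THEN 
  CALL LIB$SIGNAL (%VAL(IOSTATUS.JPISTATUS)) 
END IF 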


Chapter 7
Synchronizing Access to Resources

This chapter describes the use of the lock manager to synchronize access to shared resources and contains the following sections:

Section 7.1 describes how the lock manager synchronizes processes to a specified resource.

Section 7.2 describes the concepts of resources and locks.

Section 7.3 describes how to use the SYS$ENQ and SYS$ENQW system services to queue lock requests.

Section 7.4 describes specialized features of locking techniques.

Section 7.5 describes how to use the SYS$DEQ system service to dequeue the lock.

Section 7.6 describes how applications can perform local buffer caching.

Section 7.7 presents a code example of how to use lock management services.

7.1 Synchronizing Operations with the Lock Manager

Cooperating processes can use the lock manager to synchronize access to a shared resource (for example, a file, program, or device). This synchronization is accomplished by allowing processes to establish locks on named resources. All processes that access the shared resources must use the lock management services; otherwise, the synchronization is not effective.

Note

Throughout this chapter, the term resource means shared resource.

To synchronize access to resources, the lock management services provide a mechanism that allows processes to wait in a queue until a particular resource is available.

The lock manager does not ensure proper access to the resource; rather, the programs must respect the rules for using the lock manager. The rules required for proper synchronization of access to the resource are as follows:

A process can choose to lock a resource and then create a subprocess to operate on this resource. In this case, the program that created the subprocess (the parent program) should not exit until the subprocess has exited. To ensure that the parent program does not exit before the subprocess, specify an event flag to be set when the subprocess exits (use the completion-efn argument of LIB$SPAWN). Before exiting from the parent program, use SYS$WAITFR to ensure that the event flag is set. (You can suppress the logout message from the subprocess by using the SYS$DELPRC system service to delete the subprocess instead of allowing the subprocess to exit.)
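
The following Fortran sketch shows one way to apply this rule; the spawned command ('RUN WORKER') and the variable names are placeholders, and the CLI$M_NOWAIT flag is used so that the parent continues executing while the subprocess runs:

INCLUDE '($CLIDEF)'             ! CLI$M_ flags for LIB$SPAWN 
INTEGER*4 STATUS, DONE_EFN 
   . 
   . 
   . 
! Get an event flag to be set when the subprocess exits 
STATUS = LIB$GET_EF (DONE_EFN) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
! Spawn the subprocess without waiting, passing the event flag as 
! the completion-efn argument (the eighth argument of LIB$SPAWN) 
STATUS = LIB$SPAWN ('RUN WORKER', 
2                   ,, 
2                   CLI$M_NOWAIT, 
2                   ,,, 
2                   DONE_EFN) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
   . 
   . 
   . 
! Before exiting from the parent program, make sure the 
! subprocess has exited 
STATUS = SYS$WAITFR (%VAL(DONE_EFN)) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 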

Table 7-1 summarizes the lock manager services.

Table 7-1 Lock Manager Services
Routine Description
SYS$ENQ(W) Queues a new lock or lock conversion on a resource
SYS$DEQ Releases locks and cancels lock requests
SYS$GETLKI(W) Obtains information about the lock database

7.2 Concepts of Resources and Locks

A resource can be any entity on the operating system (for example, files, data structures, databases, or executable routines). When two or more processes access the same resource, you often need to control their access to the resource. You do not want to have one process reading the resource while another process writes new data, because a writer can quickly invalidate anything being read by a reader. The lock management system services allow processes to associate a name with a resource and request access to that resource. Lock modes enable processes to indicate how they want to share access with other processes.

To use the lock management system services, a process must request access to a resource (request a lock) using the Enqueue Lock Request (SYS$ENQ) system service. The following three arguments to the SYS$ENQ system service are required for new locks: the lock mode, the address of a lock status block, and the name of the resource.

The lock management services compare the lock mode of the newly requested lock to the modes of other locks with the same resource name. A new lock is granted if no other process holds a lock on the resource, or if the requested mode is compatible with the modes of the currently granted locks and no other requests are already waiting in the conversion or waiting queues (see Section 7.2.6).

Processes can also use the SYS$ENQ system service to change the lock mode of a lock. This is called a lock conversion.
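
The following Fortran fragment is a minimal sketch of a new lock request using the synchronous SYS$ENQW service; the LKSB_BLOCK structure, the variable names, and the resource name MY_RESOURCE are placeholders chosen for this example:

INCLUDE '($LCKDEF)'             ! LCK$K_ lock modes and LCK$M_ flags 
! Lock status block: completion status, reserved word, lock ID 
STRUCTURE /LKSB_BLOCK/ 
 INTEGER*2 LKSB_STATUS, 
2          LKSB_RESERVED 
 INTEGER*4 LOCK_ID 
END STRUCTURE 
RECORD /LKSB_BLOCK/ LKSB 
INTEGER*4 STATUS 
   . 
   . 
   . 
! Request a protected read lock on the resource MY_RESOURCE and 
! wait until the lock is granted 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_PRMODE), 
2                  LKSB, 
2                  , 
2                  'MY_RESOURCE') 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
IF (.NOT. LKSB.LKSB_STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(LKSB.LKSB_STATUS)) 
END IF 

The lock mode is passed by value, the lock status block by reference, and the resource name by descriptor; on return, LKSB.LOCK_ID contains the lock identification that later conversion and dequeue requests use.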

7.2.1 Resource Granularity

Many resources can be divided into smaller parts. As long as a part of a resource can be identified by a resource name, the part can be locked. The term resource granularity describes the part of the resource being locked.

Figure 7-1 depicts a model of a database. The database is divided into areas, such as a file, which in turn are subdivided into records. The records are further divided into items.

Figure 7-1 Model Database


The processes that request locks on the database shown in Figure 7-1 may lock the whole database, an area in the database, a record, or a single item. Locking the entire database is considered locking at a coarse granularity; locking a single item is considered locking at a fine granularity.

In this example, overall access to the database can be represented by a root resource name. Access either to areas in the database or records within areas can be represented by sublocks.

Root resources consist of the following: the resource domain, the resource name, and the access mode.

Subresources consist of the following: the parent resource or subresource, the resource name, and the access mode.

7.2.2 Resource Domains

Because resource names are arbitrary names chosen by applications, one application may interfere (either intentionally or unintentionally) with another application. Unintentional interference can be easily avoided by careful design, such as by using a registered facility name as a prefix for all root resource names used by an application.

Intentional interference can be prevented by using resource domains. A resource domain is a namespace for root resource names and is identified by a number. Resource domain 0 is used as a system resource domain. Usually, other resource domains are used by the UIC group corresponding to the domain number.

By using the SYS$SET_RESOURCE_DOMAIN system service, a process can gain access to any resource domain subject to normal operating system access control. By default, each resource domain allows read, write, and lock access by members of the corresponding UIC group. See the HP OpenVMS Guide to System Security for more information about access control.

7.2.3 Resource Names

The lock management system services refer to each resource by a name composed of the following four parts: the resource name specified by the caller, the access mode, the resource domain, and the identification of the lock's parent (if any).

For two resources to be considered the same, these four parts must be identical for each resource.

The name specified by the process represents the resource being locked. Other processes that need to access the resource must refer to it using the same name. The correlation between the name and the resource is a convention agreed upon by the cooperating processes.

The access mode is determined by the caller's access mode unless a less privileged mode is specified in the call to the SYS$ENQ system service. Access modes, their numeric values, and their symbolic names are discussed in the HP OpenVMS Calling Standard.

The default resource domain is selected by the UIC group number of the process. You can access the system domain by setting the LCK$M_SYSTEM flag when you request a new root lock. Other domains can be accessed using the optional rsdm_id argument to SYS$ENQ. You need the SYSLCK user privilege to request systemwide locks from user or supervisor mode. No additional privilege is required to request systemwide locks from executive or kernel mode.
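
For example, a process with the SYSLCK privilege might take out a systemwide root lock as follows (a sketch reusing the LKSB record from the earlier fragment; the resource name is illustrative):

! Request a null-mode root lock in the system resource domain 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_NLMODE), 
2                  LKSB, 
2                  %VAL(LCK$M_SYSTEM), 
2                  'FACILITY$MASTER_LOCK') 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 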

When a lock request is queued, it can specify the identification of a parent lock, at which point it becomes a sublock (see Section 7.4.8). However, the parent lock must be granted, or the lock request is not accepted. This enables a process to lock a resource at different degrees of granularity.
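
For example, a process might hold a coarse root lock on an entire database and a finer sublock on one record, as in the following sketch; ROOT_LKSB and REC_LKSB are two additional records of the LKSB_BLOCK structure shown earlier, and the resource names are illustrative:

! Root lock: concurrent read on the entire database 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_CRMODE), 
2                  ROOT_LKSB, 
2                  , 
2                  'PERSONNEL$DATABASE') 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
IF (.NOT. ROOT_LKSB.LKSB_STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(ROOT_LKSB.LKSB_STATUS)) 
END IF 
! Sublock: protected write on one record, naming the granted root 
! lock as the parent (the parid argument, passed by value) 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_PWMODE), 
2                  REC_LKSB, 
2                  , 
2                  'RECORD_12345', 
2                  %VAL(ROOT_LKSB.LOCK_ID)) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
IF (.NOT. REC_LKSB.LKSB_STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(REC_LKSB.LKSB_STATUS)) 
END IF 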

7.2.4 Choosing a Lock Mode

The mode of a lock determines whether the resource can be shared with other lock requests. Table 7-2 describes the six lock modes.

Table 7-2 Lock Modes
Mode Name Meaning
LCK$K_NLMODE Null mode. This mode grants no access to the resource. The null mode is typically used either as an indicator of interest in the resource or as a placeholder for future lock conversions.
LCK$K_CRMODE Concurrent read. This mode grants read access to the resource and allows sharing of the resource with other readers. The concurrent read mode is generally used either to perform additional locking at a finer granularity with sublocks or to read data from a resource in an "unprotected" fashion (allowing simultaneous writes to the resource).
LCK$K_CWMODE Concurrent write. This mode grants write access to the resource and allows sharing of the resource with other writers. The concurrent write mode is typically used to perform additional locking at a finer granularity, or to write in an "unprotected" fashion.
LCK$K_PRMODE Protected read. This mode grants read access to the resource and allows sharing of the resource with other readers. No writers are allowed access to the resource. This is the traditional "share lock."
LCK$K_PWMODE Protected write. This mode grants write access to the resource and allows sharing of the resource with users at concurrent read mode. No other writers are allowed access to the resource. This is the traditional "update lock."
LCK$K_EXMODE Exclusive. The exclusive mode grants write access to the resource and prevents the sharing of the resource with any other readers or writers. This is the traditional "exclusive lock."

7.2.5 Levels of Locking and Compatibility

Locks that allow the process to share a resource are called low-level locks; locks that allow the process almost exclusive access to a resource are called high-level locks. Null and concurrent read mode locks are considered low-level locks; protected write and exclusive mode locks are considered high-level locks. The lock modes, from lowest- to highest-level access, are null (NL), concurrent read (CR), concurrent write (CW), protected read (PR), protected write (PW), and exclusive (EX).

Note that the concurrent write and protected read modes are considered to be of the same level.

Locks that can be shared with other locks are said to have compatible lock modes. High-level lock modes are less compatible with other lock modes than are low-level lock modes. Table 7-3 shows the compatibility of the lock modes.

Table 7-3 Compatibility of Lock Modes
Mode of              Mode of Currently Granted Locks
Requested Lock       NL    CR    CW    PR    PW    EX
NL                   Yes   Yes   Yes   Yes   Yes   Yes
CR                   Yes   Yes   Yes   Yes   Yes   No
CW                   Yes   Yes   Yes   No    No    No
PR                   Yes   Yes   No    Yes   No    No
PW                   Yes   Yes   No    No    No    No
EX                   Yes   No    No    No    No    No


Key to Lock Modes:
NL = Null
CR = Concurrent read
CW = Concurrent write
PR = Protected read
PW = Protected write
EX = Exclusive

7.2.6 Lock Management Queues

A lock on a resource can be in one of the following three states: granted, converting, or waiting.

A queue is associated with each of the three states (see Figure 7-2).

Figure 7-2 Three Lock Queues


When you request a new lock, the lock management services first determine whether the resource is currently known (that is, if any other processes have locks on that resource). If the resource is new (that is, if no other locks exist on the resource), the lock management services create an entry for the new resource and the requested lock. If the resource is already known, the lock management services determine whether any other locks are waiting in either the conversion or the waiting queue. If other locks are waiting in either queue, the new lock request is queued at the end of the waiting queue. If both the conversion and waiting queues are empty, the lock management services determine whether the new lock is compatible with the other granted locks. If the lock request is compatible, the lock is granted; if it is not compatible, it is placed in the waiting queue. You can use a flag bit to direct the lock management services not to queue a lock request if one cannot be granted immediately.
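
For example, the LCK$M_NOQUEUE flag causes a request that cannot be granted immediately to complete with SS$_NOTQUEUED instead of waiting; the following sketch reuses the LKSB record and resource name from the earlier fragment:

INCLUDE '($SSDEF)'              ! SS$_ condition codes 
   . 
   . 
   . 
! Request an exclusive lock, but do not wait if it cannot be 
! granted immediately 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_EXMODE), 
2                  LKSB, 
2                  %VAL(LCK$M_NOQUEUE), 
2                  'MY_RESOURCE') 
IF (STATUS .EQ. SS$_NOTQUEUED) THEN 
  ! The resource is held in an incompatible mode; try again later 
ELSE IF (.NOT. STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(STATUS)) 
END IF 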

7.2.7 Concepts of Lock Conversion

Lock conversions allow processes to change the level of locks. For example, a process can maintain a low-level lock on a resource until it needs to limit access to the resource. The process can then request a lock conversion.

You specify lock conversions by using a flag bit (see Section 7.4.6) and a lock status block. The lock status block must contain the lock identification of the lock to be converted. If the new lock mode is compatible with the currently granted locks, the conversion request is granted immediately. If the new lock mode is incompatible with the existing locks in the granted queue, the request is placed in the conversion queue. The lock retains its old lock mode and does not receive its new lock mode until the request is granted.
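
For example, the following sketch converts the lock obtained earlier (identified by LKSB.LOCK_ID) to protected write mode; LCK$M_CONVERT is the flag bit that marks the request as a conversion:

! Convert the existing lock to protected write; the lock status 
! block already contains the lock identification 
STATUS = SYS$ENQW (, 
2                  %VAL(LCK$K_PWMODE), 
2                  LKSB, 
2                  %VAL(LCK$M_CONVERT)) 
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS)) 
IF (.NOT. LKSB.LKSB_STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(LKSB.LKSB_STATUS)) 
END IF 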

When a lock is dequeued or is converted to a lower-level lock mode, the lock management services inspect the first conversion request on the conversion queue. The conversion request is granted if it is compatible with the locks currently granted. Any compatible conversion requests immediately following are also granted. If the conversion queue is empty, the waiting queue is checked. The first lock request on the waiting queue is granted if it is compatible with the locks currently granted. Any compatible lock requests immediately following are also granted.

7.2.8 Deadlock Detection

A deadlock occurs when a group of lock requests wait for each other in a circular fashion.

In Figure 7-3, three processes have queued requests for resources that cannot be accessed until the current locks held are dequeued (or converted to a lower lock mode).

Figure 7-3 Deadlock


If the lock management services determine that a deadlock exists, the services choose a process to break the deadlock. The chosen process is termed the victim. If the victim has requested a new lock, the lock is not granted; if the victim has requested a lock conversion, the lock is returned to its old lock mode. In either case, the status code SS$_DEADLOCK is placed in the lock status block. Note that granted locks are never revoked; only waiting lock requests can receive the status code SS$_DEADLOCK.
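
For example, after a waiting request completes, the lock status block can be checked for this condition (a sketch assuming the LKSB record and the ($SSDEF) definitions from the earlier fragments):

IF (LKSB.LKSB_STATUS .EQ. SS$_DEADLOCK) THEN 
  ! This request was chosen as the victim; release locks that are 
  ! no longer needed and retry the request later 
ELSE IF (.NOT. LKSB.LKSB_STATUS) THEN 
  CALL LIB$SIGNAL (%VAL(LKSB.LKSB_STATUS)) 
END IF 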

Note

Programmers must not make assumptions regarding which process is to be chosen to break a deadlock.

7.2.9 Lock Quotas and Limits

While most processes do not require many locks simultaneously (typically fewer than 100), large-scale database or server applications can easily exceed this threshold.

If you set an ENQLM value of 32767 in the SYSUAF, the operating system treats it as no limit and allows an application to own up to 16,776,959 locks, the architectural maximum of the OpenVMS lock manager. The following sections describe these features in more detail.

7.2.9.1 Enqueue Limit Quota (ENQLM)

An ENQLM value of 32767 in a user's SYSUAF record is treated as if there is no quota limit for that user. This means that the user is allowed to own up to 16,776,959 locks, the architectural maximum of the OpenVMS lock manager.

A SYSUAF ENQLM value of 32767 is not treated as a limit. Instead, when a process that reads its ENQLM from the SYSUAF is created, a value of 32767 is automatically extended to the maximum. The Create Process (SYS$CREPRC) system service allows large quotas to be passed on to the target process. Therefore, a process can be created with an ENQLM of any value up to the maximum if it is initialized from a process whose SYSUAF quota is 32767.

