Article 160283 of comp.os.vms:

> I am considering using the lock manager (SYS$ENQ and SYS$DEQ) to
> synchronize access to a number of resources. Are there any drawbacks?
> What about system overhead compared to event flags?

Significantly higher. Event flag services are very cheap. On the other
hand, they provide very limited functionality. The Lock Manager is much,
much more sophisticated - and you pay for it. I've never seen an exact
measurement of the costs. In the ideal case, my *guess* - and that's all
it is - is a factor of 10.

> Can you keep the locks local to a node even if the node is in a
> cluster?

Not directly. Let's be a bit careful, though, about what "keep the locks
local to a node" means. One meaning is that the *name* is local, so that
you don't have to worry about name collisions when the same program runs
independently on multiple nodes of the same cluster. The other meaning is
that the lock information itself is to be kept on the (one) node using
the lock, so that no cluster traffic is needed to access it.

For the first, no direct support exists. Lock names are always known
throughout the cluster. Within VMS, "node-local" locks are implemented
using a trick: each node creates a lock with a name unique to the node.
(Actually, I think it's SYS$SYS_<id>, where <id> is the node's cluster ID
number - or something like that.) To make a "local" lock, VMS simply
makes it a child of that "node root" lock. As long as everyone follows
this convention, these locks are effectively local. (I don't think the
"node root" lock is actually used for anything.) While you can't use
VMS's system root lock (it's a SYSTEM lock to which you don't have
access), you can certainly do something similar yourself.

For the second, no support is needed. All the data structures describing
a lock get created on the node where the lock is first accessed. If no
other node ever accesses that lock, they will stay there.
(After a cluster transition, the lock database gets rebuilt by having
each node re-request all the locks previously held on it. Since no other
node will ever have accessed your "local" lock, it's certain to
"rematerialize" right where it was.)

The previous paragraph is actually not *completely* true: the directory
entry for your lock might theoretically end up anywhere. I can't now
recall what VMS does to eliminate the potential overhead here, but it
does *something*, so that under normal circumstances, if a lock is used
only on one node, its name will be resolved there. (At the least, as
long as anyone on the node is accessing the lock, a "local copy" will
exist on that node, and I think the name can be found among the local
copies. In this case, the "local copy" will be *the* copy....)

> In my application there are about 20-30 processes, each of which has
> to take 10-20 locks/sec on average, and some have to take several
> hundred locks/sec at peaks.

These are attainable numbers, depending on the hardware - but you're
getting up there! Taking the high end of your estimates, you're talking
about 600 locks/sec - a substantial load.

Before going ahead, you should first look at what services you need out
of your synchronization calls. If event flags are sufficient, use them.
(For example, if all you want is mutual exclusion among processes on one
node, there's no real reason to use anything more.) In fact, you might
want to use *less*, depending on the nature of the application: a shared
variable accessed using interlocked instructions - or through LIB$ calls
like LIB$BBSSI (yes, the LIB$ calls are available on Alpha, even though
the underlying instructions aren't) - is potentially much more efficient.

-- Jerry