From: MERC::"uunet!WKUVX1.BITNET!MacroMan" 27-MAR-1993 05:00:41.98 To: MACRO32@WKUVX1.BITNET CC: Subj: Re: distributed lock manager BUCZEK@FHI-Berlin.MPG.DE writes: >... >I assumed (from the docs) LOCKDIRWT only affects the directory resources. >Because the chances, that the directory entry of a resource in on the >local node are fairly small anyway (in a bigger cluster), I gave LOCKDIRWT >only to the faster nodes. >... >...So the resource was mastered on the directory >node instead of the node with the higher locking activity. >... ---------------------------------------------------------------------------- Lock remastering only occurs among the nodes that have the largest equal values of LOCKDIRWT. We have LOCKDIRWT set to the same value (10) on all nodes except for a couple diskless workstations where it is set to 0, and the remastering works fine. The workstations never master anything this way. I *believe* that if I set LOCKDIRWT on the workstations to 1 they would master locks that were for their local use only, and remastering of other locks would would occur as before, but I haven't tested that scenario. To test remastering I wrote a program that displays which node is the master of the volume allocation lock for all mounted disks. Note that this is *not* the device lock that F$GETDVI("DSKA:","LOCKID") displays; that lock is never used after MOUNT except to lock the entire disk (i.e. SET VOLUME/REBUILD) and so will *not* be remastered based on any amount of normal disk activity. If I run this program and it shows that this volume allocation lock is mastered on another node for disk DSKA: and I then do some sustained file creation/deletion activity on DSKA: and then run the program again, this DSKA: lock will have moved to my local node. If anyone wants this program I can post it or mail it. We are running VMS 5.5-2 and have installed CSC patch #1011 V2.0 which fixes several distributed lock manager problems in SYS$CLUSTER.EXE. This patch is available from DSNlink, or CSC will mail it to you (V2.0 is the first version of this patch for 5.5-2 and it only came out this month). The release notes for this patch has this on lock remastering: Change the activity scan rate of the distributed lock manager from 1 second to 8 seconds. Increasing the scan rate to 8 seconds will reduce lock manager overhead. This will also make the algorithms to move trees much more conservative. The longer scan rate means it will take longer periods of sustained activity before a resource tree will be moved to a new master. Fewer and less often tree movements should increase perceived quality. Another point about remastering: In VMS 5.5-2 the dynamic SYSGEN parameter PE1 was added. If PE1 is set to a non-zero value than only lock trees with fewer locks than the value of PE1 will be moved to another node. So setting PE1 to 1 would prevent it altogether. This is specific to each node so if you had one espescially fast Vax you could set PE1 to 1 on it, keep it 0 elsewhere and keep LOCKDIRWT the same on all nodes. Then lock trees should migrate from the slower Vaxes to the fast Vax and stay there. ---------------------------------------------------------------------------- >btw: how does Byte Range Locking work ? Good question. I believe this was added in 5.5 for POSIX compliance which requires the ability to lock a contiguous range of bytes in a binary stream file (which have no record structure to use for locking granularity). 
The 5.5-1 release notes mention a bug that was fixed in this, but that is
the only mention of it I can find in the DEC documentation set.  SHOW LOCK
in SDA does display the lock range, and FORMATing a lock block shows the
following new fields:

    80B57EE8  LKB$L_RQSTSRNG    00000000
    80B57EEC  LKB$L_RQSTERNG    FFFFFFFF
    80B57EF0  LKB$L_GRNTSRNG    00000000
    80B57EF4  LKB$L_GRNTERNG    FFFFFFFF

Of course this lock "range" should be usable for applications other than
byte streams.
----------------------------------------------------------------------------
Jess Goodman --- Accu-Weather, Inc. --- GOODMAN@ACCUWX.COM