From:	CRDGW2::CRDGW2::MRGATE::"SMTP::CRVAX.SRI.COM::RELAY-INFO-VAX"  8-FEB-1991 23:27:00.19
To:	MRGATE::"ARISIA::EVERHART"
CC:
Subj:	Re: Which node do I bring up first?

Received: by crdgw1.ge.com (5.57/GE 1.80)
	id AA04938; Fri, 8 Feb 91 22:14:09 EST
Received: From UCBVAX.BERKELEY.EDU by CRVAX.SRI.COM with TCP; Fri, 8 FEB 91 19:03:02 PST
Received: by ucbvax.Berkeley.EDU (5.63/1.42)
	id AA11763; Fri, 8 Feb 91 18:58:37 -0800
Received: from USENET by ucbvax.Berkeley.EDU with netnews
	for info-vax@kl.sri.com (info-vax@kl.sri.com)
	(contact usenet@ucbvax.Berkeley.EDU if you have questions)
Date: 6 Feb 91 23:36:43 GMT
From: zaphod.mps.ohio-state.edu!unix.cis.pitt.edu!pitt!cuphub!edinboro!gcc!brown@tut.cis.ohio-state.edu  (The Raven)
Organization: Grove City College, Grove City, PA
Subject: Re: Which node do I bring up first?
Message-Id: <374@gcc.uucp>
References: <1991Feb1.135720.13902@axion.bt.co.uk>
Sender: info-vax-request@kl.sri.com
To: info-vax@kl.sri.com

In article <1991Feb1.135720.13902@axion.bt.co.uk>, pkatz@axion.bt.co.uk (Philip Katz) writes:
> Our main cluster will soon comprise a 6310, a 6410 and a 6510.
>
> I have heard suggestions that when booting the whole cluster, it is
> advantageous to boot the most powerful processor first, as then the lock
> manager will run on it and this is a good and useful thing to happen.
>
> Is there any truth in this?
>

Yes, and no.  It was much more significant prior to VMS 5.0.  Nowadays the
lock load is balanced dynamically across all the nodes, and much better than
it used to be.  If you do run into problems, you can tailor it using the
LOCKDIRWT SYSGEN parameter.  For example, if you want the load to be handled
primarily (and evenly) by your 6310 and 6410, with a little of it handled by
the 6510, you can set LOCKDIRWT to 2, 2, and 1 respectively.

However, a warning: the default for LOCKDIRWT is 0, so if you change it on
any node you had better change it on the others as well.  2, 2, and 1 will
work, but 1, 1, and 0 will mean your 6510 handles almost no locks, which can
cause some nasty hangups if the 6510 is MSCP serving its local disks to the
rest of the cluster.  It won't really matter if you aren't MSCP serving
disks in that manner.

I have a rule of thumb that I use:

1. If the node MSCP serves disks to other nodes, its LOCKDIRWT should be
   equal to the highest weight used on any other node.

2. If the node is only an MSCP recipient and does not run real-time programs
   that do a lot of locking (a VS3100 in a LAVC configuration, for example),
   set LOCKDIRWT to 0 on that node and to 1 on all the others (unless
   circumstances show a need for different balancing).

This is particularly significant in LAVC systems that have VS3100-type boxes
as cluster members.  If you have, for example, two 6410s and a VS3100 in a
cluster with LOCKDIRWT = 0 on all three nodes, disks MSCP served to the
VS3100 can end up with their locks mastered on the 3100.  As long as the
3100 is the only one using that file or record, no problem.  But let's say
you have a big database file on one of your 6410s that the 3100 accesses via
MSCP serving, and the 3100 runs the first process to open the database.  The
master copy of the lock for that file is then held on the 3100, and any
other operations on that file will have their locks mastered on the 3100,
including ANY AND ALL PROCESSES ON THE 6410s that access the file!  This
means that for any operation that requires lock conversion or queueing, LAVC
traffic will occur for all of these processes.  This can be a major problem.
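The cure is to change the weights.  Here is a rough sketch of how to do that
on each node; I am going from memory, so check the system management
documentation before trusting the details, and as far as I remember
LOCKDIRWT is not a dynamic parameter, so each node must reboot before the
new weight takes effect.

    $ ! From memory - verify against the SYSGEN/AUTOGEN documentation.
    $ ! On each 6410, add this line to SYS$SYSTEM:MODPARAMS.DAT:
    $ !     LOCKDIRWT = 1
    $ ! On the VS3100, add instead:
    $ !     LOCKDIRWT = 0
    $ ! Then let AUTOGEN regenerate the parameters and reboot the node:
    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
    $
    $ ! Or poke the value directly with SYSGEN (effective at the next boot):
    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SHOW LOCKDIRWT
    SYSGEN> SET LOCKDIRWT 1
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT

The AUTOGEN route is the safer of the two, since MODPARAMS.DAT then remains
a record of what you changed.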
With the weights set this way (1 on the 6410s, 0 on the 3100), the 3100 will
only master locks for disks that are local to it.

I hope this helps.  If you need any more info on it, locking is covered in
pretty good detail in the VMS Internals and Data Structures reference
manual, Version 5.0.

--
+----------------------------------------------------------------------+
| Mitchell W. Brown            Internet: brown%gcc@edinboro.edu         |
| Grove City College TLC       Uucp: ...pitt!edinboro!gcc!brown         |
| Grove City, PA 16127         (412) 458-2072                           |
+----------------------------------------------------------------------+