From:	STAR::JORDAN "Greg Jordan, VMScluster Exec 02-Apr-1997 1617 -0500"  2-APR-1997 16:27:16.84
To:	STAR::ZALEWSKI
CC:	NORLMN::EVERHART,JORDAN
Subj:	RE: Some good feedback from Glenn.  Karen, Paul, care to reply?  -steve

Hi Steve/Glenn,

Let me see if I can answer Glenn's concerns, since they are lock manager
related.

With the current lock manager, the node which is the master gets great
performance; everyone else gets sucky performance, since they need to send
messages to the master.  To obtain good lock manager performance in a
Galaxy, the recommended approach is to allow the shared memory to be
considered as a master.  As you point out, a single galactic spinlock on
this resource would not scale in a Galaxy.

To make the above scheme work, we need much finer-grained synchronization
in the lock manager.  We are working on this...  We really need this today
to make SMP scale better.

In the model where a lock master can be shared memory, each node in the
Galaxy has access to the master data in shared memory.  Assuming the lock
manager supports finer synchronization, each node can do what I call
near-local locking.  They operate on their local information in private
memory and then operate on the master lock tree in shared memory.  This is
done without needing to send a message to another node and without leaving
process context.

In addition, we would only use shared memory as the master IF there were
multiple Galaxy nodes interested in the resource tree.  If there is only
one node of the Galaxy with interest, the tree would stay completely in
the node's private memory.

Let me know if this clears things up.

_Greg

>From: NORLMN::EVERHART 2-APR-1997 14:34:30.14
>To: STAR::ZALEWSKI
>CC: EVERHART
>Subj: Galaxy locks
>
>Steve -
>I read through the Galaxy IR last night.  This may be a result of it having
>been late when I got to locks, but just in case...
>
>I noticed discussion of a galactic spinlock accessing the lock database.
>This bothers me to a degree.
>
>The reason clusters scale as they do is in part that locks are all
>held locally, maybe mastered somewhere else, but J. Random priv'd code
>that wants to peek at a lock gets a spinlock only on his machine,
>not all over, most of the time.  Thus replicating locks and having them
>mastered somewhere is good for scaling.
>
>I can see that having the data for mastering held someplace shareable
>for each processor, and a separate someplace for each set of master
>stuff, could be a win, since each separate someplace could be controlled
>by a different spinlock, and copying from there under the spinlock could
>be a win over interrupting and doing network stuff.  However, I wasn't
>sure from what I read that such a scheme was what is intended or
>recommended.  Glomming all the lock masters together and having ONE
>lock for the lot of them seems likely not to scale...  I hope that is
>not the plan.
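
For concreteness, the near-local locking model Greg describes, together
with the per-master spinlock arrangement Glenn is asking about, can be
sketched roughly as below.  This is a minimal, hypothetical C sketch:
resource_tree_t, private_tree_t, and acquire_lock() are invented names,
and POSIX process-shared spinlocks merely stand in for per-tree galactic
spinlocks in Galaxy shared memory; none of this is the actual lock
manager code.

/*
 * Hypothetical sketch only: all names are invented for illustration,
 * and POSIX spinlocks stand in for per-tree "galactic" spinlocks.
 */
#include <pthread.h>
#include <stdbool.h>

typedef struct resource_tree {
    pthread_spinlock_t tree_lock;    /* one spinlock per resource tree,   */
                                     /* not one for the whole lock database */
    int  interested_nodes;           /* Galaxy instances using this tree  */
    bool mastered_in_shared_memory;  /* true once >1 node has interest    */
    /* ... resource blocks and lock blocks would hang off here ...        */
} resource_tree_t;

typedef struct private_tree {
    /* ... this node's own lock state, kept in private memory ... */
    int placeholder;
} private_tree_t;

/* The spinlock must be process-shared so every Galaxy instance
 * mapping the shared memory can take it. */
int resource_tree_init(resource_tree_t *t)
{
    t->interested_nodes = 1;
    t->mastered_in_shared_memory = false;
    return pthread_spin_init(&t->tree_lock, PTHREAD_PROCESS_SHARED);
}

/* "Near-local locking": grant a lock without sending a message to
 * another node and without leaving process context. */
int acquire_lock(resource_tree_t *master, private_tree_t *local)
{
    if (!master->mastered_in_shared_memory) {
        /* Only this node is interested, so the whole tree stays in
         * private memory and the request is purely local work.     */
        /* ... update *local and grant immediately ...              */
        return 0;
    }

    /* Several nodes are interested: the master copy lives in shared
     * memory.  Take only this tree's spinlock, so instances working
     * on other trees are not serialized behind us.                  */
    pthread_spin_lock(&master->tree_lock);
    /* ... update the master lock tree in shared memory ...          */
    pthread_spin_unlock(&master->tree_lock);

    /* ... then record the grant in this node's private tree ...     */
    (void)local;
    return 0;
}

With one spinlock per resource tree, contention is confined to the
instances that actually share that tree, and the single-node case never
touches shared memory at all, which is the scaling behavior Glenn's mail
is asking about.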