Article 4877 of vmsnet.internals:

>Cache coherency is handled in many VMS components with an established
>messaging scheme implemented via the lock manager or the SCS services.
>I'd suggest that you rethink your approach. Shitcan the mailbox and
>the cache manager; they may end up becoming bottlenecks.

Been there. Done that.

The first version of this whole thing I implemented used the lock
manager. When processes wanted to read the cache, they'd take a
protected read lock; when someone wanted to update the cache, they'd
take a protected write lock. Unfortunately, this didn't perform
acceptably. The read case generated far too many locks and ended up
slowing down an important time-critical process too much. The lock
rate on the system occasionally jumped to over 20,000 (I believe the
units are locks/min) according to MONITOR CLUSTER. Amazing what those
Turbolasers can do when they're in a hurry! The point is, locks aren't
acceptable in this system because they cause too much overhead in the
read case.

So what about having the processes hold the locks and declare blocking
ASTs? This scares me because I could have more than 100 processes
holding these read locks at a time. I'd hate to see 100+ processes go
computable all at once when an update must occur. The updates will
happen several times a second anyway, so the blocking AST rate would
be high.

This is why I've gone to the mailbox and cache manager approach. I've
come up with a method where one process can update the cache without
disturbing the readers and without using locks. I've tested the
mailbox-to-cache-manager approach and it looks like it'll be plenty
fast enough for my application. The updates shouldn't happen often
enough for this method to be a bottleneck. I was able to get close to
600 updates a second before the CPU noticed what was going on (i.e.,
usage rose above about 5%). I maxed out the CPU with 80 processes
running at an aggregate update rate of about 4000-5000 updates per
second.
Luckily, the slowest CPU the system will run on is a VAX 76xx series.
Others include 77xx-series VAXen and 76xx- and 8400-class Alphas.

>The scheduling system is event driven. When the process is placed
>on another state queue it will remain there until an event occurs
>to cause it to be scheduled elsewhere. In the case of the RWMBX,
>the mailbox driver handles a read and reports that the resource is
>available in a fork when the buffer space is freed. This too, I'm
>fairly confident, is covered in the I&DS chapter on mailboxes in
>the section which would discuss I/O completion.
>
>If you have access to the source listings, you might want to peruse
>SCH$RAVAIL (Resource AVAILable). If not, reread the discussion of
>this routine in the chapter on scheduling.

Makes sense. Now that I know the answer, I'll see if I can find what I
missed the first time.

>Let us know what you finally devise as your solution.

It'll probably involve timestamps, since timestamps will probably be
needed for future enhancements that are being planned anyway.

-Ryan
rmoore@qualcomm.com