From: Ryan Moore [rmoore@rmoore.dyndns.org]
Sent: Thursday, November 21, 2002 1:01 PM
To: Info-VAX@Mvb.Saic.Com
Subject: Re: Read/Write Locks in an SMP setup

On 21 Nov 2002, Steve Bainbridge wrote:

> I have an OpenVMS application that needs Read/Write locks to protect
> an area of shared memory, several processes can access this memory
> 10,000+ operations per second with the vast majority of them reads -
> it needs to be as efficient as possible.

I had an application that needed something like this. It's possible to
do the synchronization without locks, but it can be tricky.

> Do I need to write locks using the C primitives, if so which ones are
> the most efficient to use ?

I was able to solve this problem by having only one writer but multiple
readers. Remember that on Alpha you can atomically increment a 32-bit
integer only if it is aligned. You also need to worry about memory
barriers, since memory operations can complete out of order.

I kept two integers (call them sync1 and sync2) to protect each data
bucket in a data chain. The algorithm looked something like:

  writer:
    increment sync1
    memory barrier
    do data update
    memory barrier
    increment sync2
    memory barrier

  reader:
    read sync2
    memory barrier
    copy data to local buffer
    memory barrier
    read sync1
    if (sync1 != sync2) retry

(Note the reader needs barriers too on Alpha, or its reads of the
counters and of the data can be reordered and the check proves
nothing.)

If you have a complicated data structure, this whole thing can get
tricky, and note that this algorithm only works with one writer. But if
you can send all update requests to a single "writer" process, you
should be good to go. It's important that the data in the shared memory
area, especially the sync integers, be properly aligned, or the whole
thing will go wrong.

I've tested this on multiprocessor machines with several readers
reading data records and the writer writing as fast as possible. Try it
with something like 10 data buckets, 10 readers, and one writer, where
the readers read random buckets and the writer updates random buckets.
Each data bucket could be a 1024-byte array that the writer sets to all
the same byte value. The readers can then check whether they ever read
a bucket where the 1024 bytes aren't all the same value. Increase the
size of the data bucket to push the test even harder.

-Ryan
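
P.S. In case a concrete sketch helps, here's roughly what the above
looks like in C. This is a minimal illustration, not production code:
it assumes DEC C on Alpha, where <builtins.h> provides the __MB()
memory-barrier builtin, and the bucket_t/write_bucket/read_bucket
names are made up for the example.

  #include <string.h>
  #include <builtins.h>              /* DEC C on Alpha: __MB() barrier */

  typedef struct {
      volatile unsigned int sync1;   /* bumped before each update      */
      volatile unsigned int sync2;   /* bumped after each update       */
      char data[1024];               /* the protected bucket contents  */
  } bucket_t;                        /* counters are naturally aligned */

  /* Exactly one writer process may call this per bucket. */
  void write_bucket(bucket_t *b, const char *src)
  {
      b->sync1++;                    /* announce "update in progress"  */
      __MB();
      memcpy(b->data, src, sizeof b->data);
      __MB();
      b->sync2++;                    /* announce "update complete"     */
      __MB();
  }

  /* Any number of readers may call this concurrently. */
  void read_bucket(bucket_t *b, char *dst)
  {
      unsigned int s1, s2;
      do {
          s2 = b->sync2;             /* sample the "done" counter      */
          __MB();
          memcpy(dst, b->data, sizeof b->data);
          __MB();
          s1 = b->sync1;             /* sample the "started" counter   */
      } while (s1 != s2);            /* mismatch: a write overlapped
                                        the copy, so it may be torn;
                                        retry                          */
  }

If sync1 (read after the copy) still equals sync2 (read before the
copy), no writer started during the copy, so the local buffer is a
consistent snapshot.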
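Continuing the sketch, the reader-side check for the stress test
described above could be as simple as this:

  #include <stddef.h>

  /* After read_bucket(), every byte of a consistently read bucket
     should be identical, since the writer fills each bucket with a
     single repeated value.  Returns 1 if the copy is clean.          */
  int bucket_is_clean(const char *buf, size_t len)
  {
      size_t i;
      for (i = 1; i < len; i++)
          if (buf[i] != buf[0])
              return 0;              /* torn read: sync logic failed  */
      return 1;
  }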