Article 159750 of comp.os.vms:

There is an excellent DSNlink article on the VIOC that should help you if
you've got access to it.  The article goes into a bit of detail about why
you might have a low hit rate.  The article title is:

  "[OpenVMS] How to Interpret Info From SHOW MEMORY/CACHE/FULL on VAX"

There's one for VAX and one for Alpha.  The break-even points are different
for VAX vs. Alpha: for VAX, it's about 45-55%; for Alpha, it's 30-35%.

In article <19NOV199612324094@jhuvms.hcf.jhu.edu>,
ecf_stbo@jhuvms.hcf.jhu.edu (Old-Fashioned Staffordshire Plate...) wrote:

> I'm trying to figure out why am getting a crummy hit rate for the VIO
> cache (VMS 6.2 AXP)... Can anyone answer the following questions?
>
> 1) Will the cache not cache any file bigger than X blocks? If so, what is X?

The limit is on a single request being larger than 35 blocks, not on the
total size of the file.

> 2) The cache seems to have a hard limit of 100 cached files, is this true?

Nope.  That limit is on the number of file control blocks, not files, and
those come from files that have been deleted on the system.  The FCBs are
cached to speed the allocation of file headers for newly created files.
That's the number you see in $ SHOW MEM/CACHE/FULL.

> 3) Will it cache indexed files?

I'm fairly certain it will.

> 4) I think it will not cache files if any stream has it open for write, is
> this correct? Anyone know of an easy way to find a stream open for write for
> a particular file? process? e.g. SDA....

False, unless you're in a cluster, in which case the following rules apply:

 - if the file is open read-only on ALL nodes, it will be cached
 - if the file is open read/write on only ONE node in the cluster, it will
   be cached
 - if the file is open read/write on one node, and read-only on any other
   node, it will NOT be cached

On DSNlink, you can also find articles with Macro code to reset the
counters.  This is useful to make sure that you're measuring the right
activities at the right time.  One good night's backup will throw your
cache hit rates out the window, and probably isn't useful for what you want
to determine.  Grab the numbers at the end of the day, reset them after
your backup, and start measuring over again.

This is an extract from that code; it may help you answer question 1:

; The counters to be reset are:
;    (CACHE$AR_VCC_DATA)
;    + 14 for CACHE$GL_VREAD, the Read IO counter
;    + 18 for CACHE$GL_READHIT, the Read Hit counter
;    + 1C for CACHE$GL_VWRITE, the Write IO counter
;    + 24 for CACHE$GL_RRNDMOD, Read IO bypassing VIOC due to
;         function counter
;    + 28 for CACHE$GL_RRNDSIZ, Read IO bypassing VIOC due to
;         size counter
;    + 2C for CACHE$GL_WRNDMOD, Write IO bypassing VIOC due to
;         function counter
;    + 30 for CACHE$GL_WRNDSIZ, Write IO bypassing VIOC due to
;         size counter

When you do a $ SHOW MEM/CACHE/FULL and see the I/Os bypassing the cache,
that figure is the sum of the last four counters above.  To look at the
individual numbers in SDA, do the following:

    EVALUATE @(@CACHE$AR_VCC_DATA+nn)

where nn is the offset listed above.

I hope this helps.

	.../Ed
--
Ed Wilts
Ed.Wilts@gems1.gov.bc.ca
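
P.S.  For what it's worth, here's roughly what reading those counters and
turning them into a hit rate might look like.  The EVALUATE lines use the
offsets listed above (+14 = CACHE$GL_VREAD, +18 = CACHE$GL_READHIT); the
counter values below are made up, and the hit-rate formula (read hits as a
percentage of read I/Os) is my own assumption, so treat this as a sketch
rather than gospel:

$ ANALYZE/SYSTEM
SDA> EVALUATE @(@CACHE$AR_VCC_DATA+14)
SDA> EVALUATE @(@CACHE$AR_VCC_DATA+18)
SDA> EXIT
$ ! Plug the two values back in (hex as reported by SDA; made-up numbers)
$ vread   = %X0001A2B3
$ readhit = %X0000D159
$ hit_rate = (readhit * 100) / vread    ! integer percentage, 32-bit DCL math
$ WRITE SYS$OUTPUT "Read hit rate: ''hit_rate'%"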