Article 155855 of comp.os.vms:
In article <526h09$i8t@aplinfo.jhuapl.edu>, humesdg1@aplcomm.jhuapl.edu
(David G. Humes) writes...
>Greetings,
>
>has not completed.  So, first off, can anyone offer a good, simple
>disk I/O benchmark program?

Being an RMS hacker, I tend to simply use CONVERT/STAT.
It shows IO counts, elapsed and CPU time.  Often good enough.

To create input (on a disk without highwater marking!) I might use

$COPY/ALLO=819200 NL: temp:temp.dat
$SET FILE/END temp:temp.dat
$SET FILE/ATTR=(RFM:FIX,MRS:8192,LRL:8192) temp:temp.dat

To modulate IO buffer count/size I'd then use

$SET RMS/SEQ/BUF=x/BLO=y   ! Convert will listen to this!

To avoid reading much input I sometimes use something like

$CONV/FDL=SYS$INPUT/PAD/STAT test.dat test:test.dat
record; format fixed; size 8192
area 0; alloc 819200
$

where test.dat might be a file with fixed-length 4-byte records
counting from 1 to xxx, created by a program written in your
favourite language (a rough sketch in C follows at the end of
this post).

>The data storage disks consist of 4 RA72s which are bound into a volume
>set.  The 4 RA72s are distributed 2 each across two requestors.

Writing multiple small files or one large one?  Bound volumes will
spread the load reasonably in the first case; in the latter case, the
volume set will possibly behave just like a single disk would, with
all activity going from one disk to the next.  For large files you may
want to check out STRIPING, for example by using the SW-RAID product.

>The real-time, aggregate data rate that we are trying to
>keep up with is about 600K bytes/sec.

That's not too much, it would seem.  You wouldn't happen to have HIGH
WATER MARKING working against you?  (Unlikely... streaming data to a
file generally avoids this overhead.)  Or perhaps an inappropriately
small file pre-allocation combined with too small a file extend
quantity on the disks behind the new controller?  (Some commands to
check both also follow at the end of this post.)

Hope this helps,

                                 +--------------------------------------+
Hein van den Heuvel, Digital.    | All opinions expressed are mine, and |
"Makers of VMS and other         | may not reflect those of my employer |
 fine Operating Systems."        +--------------------------------------+
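
PS: The record-counting generator mentioned above, as a rough sketch
in C.  The output name test.dat and the default record count are just
placeholders, and the extra fopen arguments are the DEC C convention
for passing RMS attributes; adjust to taste.

/* recgen.c -- rough sketch: write fixed-length 4-byte binary
 * records containing the values 1, 2, 3, ... n.
 * File name and default count are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int i, n = (argc > 1) ? atoi(argv[1]) : 100000;

#ifdef __VMS
    /* DEC C fopen accepts RMS keywords: fixed 4-byte records */
    FILE *f = fopen("test.dat", "w", "rfm=fix", "mrs=4");
#else
    FILE *f = fopen("test.dat", "wb");
#endif
    if (f == NULL) {
        perror("test.dat");
        return EXIT_FAILURE;
    }
    for (i = 1; i <= n; i++)
        if (fwrite(&i, sizeof i, 1, f) != 1) { /* one longword per record */
            perror("write");
            return EXIT_FAILURE;
        }
    fclose(f);
    return EXIT_SUCCESS;
}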
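
PPS: And a few DCL commands to check the two suspects at the end of
my reply.  Device and file names here are hypothetical; substitute
your own.

$SHOW DEVICE/FULL $1$DUA0:      ! volume status shows highwater marking
$SET VOLUME/NOHIGHWATER_MARKING $1$DUA0:
$SET FILE/EXTENSION=65535 test:test.dat  ! bigger per-file extend quantity
$SET RMS_DEFAULT/EXTEND_QUANTITY=65535   ! or raise the process RMS default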