From: Bill Todd [billtodd@metrocast.net]
Sent: Wednesday, April 24, 2002 1:13 PM
To: Info-VAX@Mvb.Saic.Com
Subject: Re: The 'tone' of c.o.v. (was Re: Prediction: VMS lives, merger or no merger!)

"Andrew Harrison SUNUK Consultancy" wrote in message
news:3CC6C1AD.9030007@sun.com...
>
> Bill Todd wrote:

...

> > While a general perception that VMS's file system performs relatively
> > poorly might be understandable, the problem is not with its capabilities
> > but with its default settings (which are long overdue for update: they're
> > aimed at optimizing performance in systems with main memory on the order
> > of a megabyte rather than a gigabyte). It is eminently possible for a
> > user to obtain competitive performance from the VMS file system, but it
> > requires a good deal more intimate acquaintance than is required by,
> > say, typical Unix systems.
>
> I am sure that altering the default settings helps, but if David Mathog's
> benchmark results for OpenVMS and other OSes were correct, and if I read
> them correctly, changing the default settings improved performance but not
> by enough to make OpenVMS filesystem I/O comparable with modern UNIXes.

OTOH, David never actually used the VMS file system directly, but only
through the RMS record-management layer on top of it. The raw code-path
execution differences he saw when running from a RAM disk (which is what
you may be referring to above) were due to this.

VMS and Unix aren't really directly comparable in this respect: VMS has a
file system (which is accessible to applications, though most access it
through RMS) that performs sector-level access to files, and a
record-management layer (RMS) on top of that which *can* perform simple
Unix-style stream access (or even sector-level access) but is primarily
aimed at performing sequential, random, and multi-keyed indexed access to
record-structured files. The RMS layer does make the code path for simple
access noticeably longer than it otherwise could be, while direct use of
the file system likely makes it shorter than in Unix (but requires the
application to do its own blocking/deblocking of data from the sectors
accessed).

> It's also worth pointing out that David did not use any of the faster
> UNIX filesystem options. He did not run UFS+ on Solaris in direct mode
> or use VxFS or QFS, all of which could have dramatically improved the
> performance when compared with default UFS+ performance. Most big DBMS
> servers use either VxFS or UFS+ with Direct I/O (standard part of UFS).
> These both bypass the UFS buffer cache for filesystem writes in order to
> improve performance and reduce memory contention.

Nor did he use the direct file system access mechanisms (or even the
equivalent RMS mechanisms) that do exactly the same thing - and, unlike
the recent Unix facilities, have been there since Day 1.
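To make the comparison concrete, here is roughly what the 'UFS+ with Direct
I/O' mechanism mentioned above looks like from an application on Solaris.
This is a sketch typed from memory rather than tested code, so check the
directio() man page before trusting the details:

    /* Sketch: advise UFS to bypass the buffer cache for one file's I/O.
     * directio() is Solaris-specific; on systems without it the rough
     * equivalent is an O_DIRECT open flag or a forcedirectio-style
     * mount option, where those are supported.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/fcntl.h>

    int main(int argc, char *argv[])
    {
        char    buf[64 * 1024];   /* transfer size is up to the application */
        ssize_t got;
        int     fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        if ((fd = open(argv[1], O_RDONLY)) < 0) {
            perror("open");
            return 1;
        }

        /* Request unbuffered (direct) I/O on this file from here on. */
        if (directio(fd, DIRECTIO_ON) != 0)
            perror("directio (continuing with cached I/O)");

        while ((got = read(fd, buf, sizeof buf)) > 0)
            ;                     /* process the data here */

        close(fd);
        return 0;
    }

The VMS equivalents - going to the file system directly via $QIO, or using
RMS's block rather than record access - amount to the same thing, and have
been the native way of working since the beginning.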
> The OpenVMS filesystem also does not seem to support a mechanism like
> read clustering which can improve read performance without making
> changes to apps.

Without knowing exactly what you mean by 'read clustering', I will note
that VMS

1) supports facilities that make files contiguous, or optionally as
contiguous as possible, on disk (and even finer 'placement control'
mechanisms, but they aren't relevant to this particular point),

2) supports (via specifiable buffer sizes) variable access granularity to
files (contiguous or not), such that each disk request brings in a
specifiable amount of data (and this can be controlled on a per-file basis
by an application or by setting values outside the application - i.e.,
transparently), and

3) supports asynchronous multi-buffering for both read and write access
(I'm not certain whether this option can be controlled outside the
application, but it is easy to do inside the application; the P.S. below
sketches what 2) and 3) look like from a program).

VMS's file system faults really are only in ease of use, not in breadth of
facilities or capabilities. It requires some knowledge (but is in no way
impossible) to get a VMS system to access files with the default
performance (and, one might add, the default fragility) that a Unix system
has.

In part this is because of antiquated default settings that *probably*
could be changed with only beneficial effects (the 'probably' part is why
they weren't changed long ago, but some mechanism allowing them to be
different for at least *new* applications is long overdue). In part it is
because of more basic design differences that make use of a common system
cache (especially in a cluster environment) more difficult - though, as
with other differences, the benefits of centralized caching can be obtained
via mechanisms such as RMS's global buffers (which IIRC can be configured
transparently to an application), and additional progress was made several
years ago when the 'virtual I/O cache' (VIOC) was implemented (the new XFC
further improves the situation, but still has some kinks left to work out).

- bill
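P.S. Since 2) and 3) above may sound abstract, here is roughly what they
look like from a C program using RMS - again a sketch from memory (with a
made-up file name), not compiled or run, so verify the field names against
the RMS documentation before relying on them. The multiblock count sets how
many 512-byte blocks each disk request transfers, and the multibuffer count
plus the read-ahead option is what buys the asynchronous multi-buffering:

    #include <rms.h>
    #include <rmsdef.h>
    #include <ssdef.h>
    #include <starlet.h>
    #include <string.h>

    int main(void)
    {
        struct FAB fab = cc$rms_fab;    /* file access block   */
        struct RAB rab = cc$rms_rab;    /* record access block */
        char   rec[32767];
        char  *name = "WORK$DISK:[DATA]BIG.DAT";    /* made-up name */
        int    sts;

        fab.fab$l_fna = name;
        fab.fab$b_fns = strlen(name);
        fab.fab$b_fac = FAB$M_GET;
        if (!((sts = sys$open(&fab)) & 1))
            return sts;

        rab.rab$l_fab = &fab;
        rab.rab$b_mbc = 64;         /* 64 blocks (32KB) per disk request  */
        rab.rab$b_mbf = 4;          /* four buffers, so transfers overlap */
        rab.rab$l_rop = RAB$M_RAH;  /* read ahead into the idle buffers   */
        rab.rab$l_ubf = rec;
        rab.rab$w_usz = sizeof rec;
        if (!((sts = sys$connect(&rab)) & 1))
            return sts;

        while ((sts = sys$get(&rab)) & 1)
            ;   /* rab.rab$w_rsz bytes are at rab.rab$l_rbf - process them */

        sys$close(&fab);
        return (sts == RMS$_EOF) ? SS$_NORMAL : sts;
    }

The same two values can also be set outside the application - e.g., via SET
RMS_DEFAULT's block-count and buffer-count settings, per process or
system-wide - which is the 'transparently' part of 2) above. Point 1) is a
separate attribute (contiguous, or contiguous-best-try) specified when the
file is created or extended.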