From: Bill Todd [billtodd@foo.mv.com]
Sent: Friday, July 09, 1999 10:21 PM
To: Info-VAX@Mvb.Saic.Com
Subject: Re: Whither VMS?

I was the one who expected to need a hard hat :-) [another milestone (gallstone?) passed: my first emoticon. Told you I was a news newbie.]

For people already using VMS, your position is reasonable. But if Compaq is looking to increase enterprise penetration, it's going to need people who *aren't* already users coming on board in large numbers. You can blame marketing only up to a point: perhaps it could have made a big difference years ago, but VMS has been around long enough that people have at least some idea of its capabilities - and still choose Unix because it's an 'industry standard'. Given the increasing difficulty of finding competent systems people, Unix's appeal will only widen: there are just a lot more people out there who know it. (I know, that didn't have to be true either...)

VMS is indeed a more appropriate 'enterprise' system than any existing Unix, but some (not just Tru64) are at least starting to catch up. That catching-up also works against increasing VMS penetration if VMS doesn't itself continue to advance reasonably rapidly.

http://www.s390.ibm.com/marketing/gf225126.html offers an IBM take on how an 'enterprise' system differs from the Unix pack (didn't notice mention of VMS, which isn't that surprising: it's more formidable competition technically, but less in the limelight, so why draw attention to it - especially if it may just fade away?). System-management total-cost-of-ownership issues lead the list (it's hard not to think IBM really does have a significant advantage there), but the other biggie is the ability to centralize lots of different work that often shares the same data in an S/390 Sysplex (cluster), as contrasted with the inability of Unix systems to tolerate similar I/O rates and the related sharing-coordination overheads. VMS is currently close to being competitive with S/390 in the area of large-scale data access, but it could take the lead - and in the process widen the gap with the Unix horde instead of allowing them to catch up.

VMS's basic cluster architecture is, as you note, just hunky-dory, but other aspects are not. The file system was technically state-of-the-art when it was conceived back in 1976 (more like 1973 if you count its predecessor) and did a great job of migrating to the cluster model in 1983, but it is no longer current: file systems that use a transaction log to protect their meta-data offer significantly higher performance (especially in large systems - though a single-disk system with the log on the same disk is likely not the VMS design center), and a write-sharable distributed cache is *long* overdue. Exactly the same comments apply to RMS - even the dates, if you apply a liberal interpretation (Ed Marison at least *started* conceiving it around 1973, during work on Mumps) - except that in things like indexed files the meta-data management includes virtually the entire file, so the potential performance benefits of logging and write-back caching are even more dramatic (and, incidentally, provide a code base that can support equally high-performance distributed database and object management as well, should anyone in or out of Compaq have an interest in developing such things).
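In case 'transaction log to protect meta-data' sounds abstract, here's a toy sketch in C of the core idea - nothing VMS-specific, no relation to any real on-disk format, all names and structures invented for illustration. The point it tries to make concrete: each meta-data update costs one append to a sequentially-written log (and many updates can share one physical log write), while the scattered in-place writes are deferred to a lazy checkpoint.

/* Toy write-ahead meta-data log - illustrative only.  Each update
 * mutates a cached copy and appends a redo record; the random
 * in-place writes are batched at checkpoint time.  No bounds
 * checking or real I/O - it's a sketch of the bookkeeping only.
 */
#include <stdio.h>
#include <string.h>

#define NBLOCKS  16            /* pretend meta-data blocks on disk    */
#define BLKSIZE  64
#define LOGSIZE  32            /* log records between checkpoints     */

struct log_rec {               /* redo record: where + what           */
    int  blkno;
    int  offset;
    char bytes[8];
};

static char disk[NBLOCKS][BLKSIZE];    /* "home" locations            */
static char cache[NBLOCKS][BLKSIZE];   /* write-back cache            */
static int  dirty[NBLOCKS];
static struct log_rec logbuf[LOGSIZE];
static int  logtail;

/* One meta-data update: the only synchronous I/O needed for
 * durability is the sequential log append (which real code would
 * batch across concurrent updates before forcing to disk).          */
static void md_update(int blkno, int offset, const char *bytes, int len)
{
    memcpy(&cache[blkno][offset], bytes, len);
    dirty[blkno] = 1;

    struct log_rec *r = &logbuf[logtail++];
    r->blkno  = blkno;
    r->offset = offset;
    memcpy(r->bytes, bytes, len < 8 ? len : 8);
}

/* Checkpoint: now the scattered in-place writes happen, but lazily
 * and in bulk, after which the log space can be reclaimed.          */
static void checkpoint(void)
{
    int n = 0;
    for (int b = 0; b < NBLOCKS; b++)
        if (dirty[b]) {
            memcpy(disk[b], cache[b], BLKSIZE);   /* random write    */
            dirty[b] = 0;
            n++;
        }
    printf("checkpoint: %d log records collapsed into %d block writes\n",
           logtail, n);
    logtail = 0;
}

int main(void)
{
    /* ten updates that all land in the same two blocks: ten cheap
     * sequential appends, only two in-place writes at checkpoint    */
    for (int i = 0; i < 10; i++)
        md_update(i & 1, i * 4, "link", 4);
    checkpoint();
    return 0;
}

The win is that the log absorbs many logically separate updates per physical I/O - which is exactly where the large-system advantage shows up - and after a crash you replay the surviving log records against the home blocks rather than scanning and repairing the whole structure. Because every cached copy is reconstructible from the log, the same machinery is what makes an aggressive write-back cache (local or distributed across a cluster) safe to contemplate.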
This is of course my pet interest, and I wouldn't be making such a big deal of it (here, anyway) if the IBM paper hadn't: if *they* consider this area one of their prime differentiators, having VMS able to beat them at their own game may have real significance. Other areas I'm much less familiar with may be subject to comparable improvements as well.

You can't make such changes without changing the on-disk structures, which means you don't get co-existence of the new versions in existing clusters (not without a ridiculous amount of work and a good deal of compromise, anyway: the only possibility remotely worth considering would be to allow the new systems to support two entirely separate sets of devices, one of which they could share with the old systems). Conversion of existing data is certainly feasible, though, as long as you maintain sufficient functional compatibility - and binary compatibility could well be feasible at the application level.

Hence the idea of branching off a new VMS that could shed the detritus systems pick up over a lengthy lifetime without sacrificing the things that make VMS great - and, in the process, give a nod in the directions the Unix crowd favors as well.