From: Main, Kerry [Kerry.Main@Compaq.com]
Sent: Friday, July 09, 1999 11:28 PM
To: Info-VAX@Mvb.Saic.Com
Subject: RE: Whither VMS?

>>> For people already using VMS, your position is reasonable. But if
Compaq is looking to increase enterprise penetration, it's going to need
people who *aren't* already users coming on board in large numbers. <<

Agreed. As an example, the first new stock exchange in 25 years has
chosen OpenVMS. Reference:
http://www.compaq.com/newsroom/pr/1998/pr101198a.html

The hottest new Internet search engine these days uses OpenVMS as its
backend workhorse, i.e. http://www.northernlight.com/

Also, in case there are any doubts about this, check out:
http://www.theregister.co.uk/990709-000015.html
and
http://www.notess.com/search/stats/

So, the #1 (Northern Light - OpenVMS) and #2 (AltaVista - Tru64 UNIX)
Internet search engines are both Compaq Alpha based :-)

Both the stock exchange and Northern Light are new Customers (as
indicated by one of the posters to this listserver).

Why OpenVMS? What about the application stuff?

One of the primary issues today is that most technology vendors think
they are doing a pretty good job providing IT services to the e*Commerce
and online Internet world. They are not. And Customers are starting to
wake up and realize what availability and scalability reallllly mean.
Reference:
http://www.mercurycenter.com/svtech/news/indepth/docs/ebay062099.htm
http://www.zdnet.com/pcweek/stories/news/0,4153,407387,00.html

Imho, IT vendors discuss new technical features (like the attached file
system issues) while forgetting the piece that matters most to the
business community - availability and huge scalability capacity. It is
also an underlying theme in the IBM white paper pointer you provided in
the attached.

Today's online world can be compared to building lots of pretty houses
(web applications) on foundations of sand (single-server, single-site
solutions). Sooner or later, you just know something is going to take
you down.

So, how does one prepare for the new online world? Bottom line: if you
want to play with the big folks in the serious e*Commerce world:

[extract from a previous email I posted]

- Single-system, single-server (no matter how "reliable" - any number of
reasons will take out a system/datacenter or put it offline) and
single-site solutions are no longer acceptable for serious large-scale
e*Commerce players.

- Users do NOT care about SYSTEM availability (see subsequent point).
Users do NOT care which OS and/or other supporting SW might be running
in the background.

- Users do NOT understand or tolerate scheduled downtime. Scheduled
downtime is the same thing as downtime - period. Scheduled downtime is
an indication of a single-server solution. How many IT vendors play
games with their availability numbers (the 99.x% game) and yet do not
count scheduled downtime? Case in point - ever have a phone go dead
because of "scheduled" downtime?

- Users DO CARE about APPLICATION availability. The requirement is to
ensure 100% application transparency to users when taking down systems
and/or datacenters for upgrades, repair, etc. The new "system" for the
online world includes all of the multiple sites and the associated
network links. All of this needs to be managed as a single entity. The
individual systems must be capable of being brought down for
maintenance, HW/SW upgrades, and tuning reboots (to adjust static
parameters for changing system loads) and yet still maintain 100%
application availability.

- Database replication / log file shipping are primitive availability
features: they involve application modifications (which tables, fields,
and records get replicated, etc.), additional system HW (in some cases)
for replication support, do not replicate non-database files, and are
prone to HW failures causing update queues to get out of sync between
sites. These older technologies also imply a master-slave relationship
between the sites, which means there is no dynamic R/W load balancing
between them (a rough sketch of why follows this list).
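To make the master-slave point concrete, here is a minimal, purely
hypothetical sketch of log-file shipping between two sites (not any
particular vendor's product - all names are made up). Every write has
to funnel through the master, and the standby can at best serve stale
reads between ships, which is exactly why there is no dynamic R/W load
balancing:

# Hypothetical log-shipping sketch (illustration only).
class Site:
    def __init__(self, name):
        self.name = name
        self.data = {}     # key -> value store
        self.log = []      # ordered list of updates not yet shipped

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def replay(self, entries):
        # The standby applies shipped log entries in order.
        for key, value in entries:
            self.data[key] = value

master = Site("site-A")    # the only site allowed to accept writes
standby = Site("site-B")   # read-only until a (manual) failover

master.write("order-1001", "confirmed")
master.write("order-1002", "pending")

# Periodic ship of everything queued since the last ship. If the link
# or either system fails mid-ship, the sites drift out of sync until a
# manual resync - one of the failure modes noted above.
shipped, master.log = master.log, []
standby.replay(shipped)

print(standby.data.get("order-1002"))   # "pending" - but only after
                                        # the ship; reads in between
                                        # would have been stale

Contrast that with a shared-everything cluster, where any node can
accept both reads and writes against the same data.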
So, if one accepts these points as reality, how many vendors can state
that they meet these requirements? Likely only a few, and OpenVMS is one
of them. It is why the new stock exchange (see URL above) chose OpenVMS.

Regards,

Kerry Main
Senior Consultant, Compaq Canada Inc.
Voice : 613-591-5078 / 621-5078 (dtn)
Fax   : 613-591-5113
Email : kerry.main@compaq.com

-----Original Message-----
From: Bill Todd [mailto:billtodd@foo.mv.com]
Sent: Friday, July 09, 1999 10:21 PM
To: Info-VAX@Mvb.Saic.Com
Subject: Re: Whither VMS?

I was the one who expected to need a hard hat :-) [Another milestone
(gallstone?) passed: my first emoticon. Told you I was a news newbie.]

For people already using VMS, your position is reasonable. But if Compaq
is looking to increase enterprise penetration, it's going to need people
who *aren't* already users coming on board in large numbers. You can
blame marketing only up to a point: perhaps it could have made a big
difference years ago, but VMS has been around long enough that people
have at least some idea of its capabilities - and they still choose Unix
because it's an 'industry standard'. Given the increasing difficulty of
obtaining competent systems people, Unix's appeal will only widen: there
just are a lot more people out there who know it. (I know, that didn't
have to be true either...)

VMS is indeed a more appropriate 'enterprise' system than any existing
Unix, but some (not just Tru64) are at least starting to catch up. This
also works against increasing VMS penetration if VMS doesn't itself
continue to advance reasonably rapidly.

http://www.s390.ibm.com/marketing/gf225126.html offers an IBM take on
how an 'enterprise' system differs from the Unix pack (I didn't notice
any mention of VMS, which isn't that surprising: it's more formidable
competition technically, but less in the limelight, so why draw
attention to it - especially if it may just fade away?). System
management total-cost-of-ownership issues lead the list (it's hard not
to think IBM really does have a significant advantage there), but the
other biggie is the ability to centralize lots of different kinds of
work that often share the same data in an S/390 Sysplex (cluster), as
contrasted with the inability of Unix systems to tolerate similar I/O
rates and the related sharing-coordination overheads (a toy sketch of
that coordination cost follows).
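Purely as an aside from me - my own sketch, not IBM's or anyone's
actual code - the "sharing coordination" in question amounts to every
node having to obtain a cluster-wide lock before touching shared data,
along the general lines of a distributed lock manager. Here the lock
manager and the nodes are simulated with threads in one process; in a
real cluster each acquire/release is a round of interconnect traffic,
which is where the overhead lives:

import threading

class ClusterLockManager:
    # One lock per named resource; a hypothetical stand-in for a real
    # distributed lock manager.
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}

    def _lock_for(self, resource):
        with self._guard:
            return self._locks.setdefault(resource, threading.Lock())

    def acquire(self, resource):
        self._lock_for(resource).acquire()

    def release(self, resource):
        self._lock_for(resource).release()

dlm = ClusterLockManager()
shared = {"acct-42": 100}   # a record every "node" reads and writes

def node_update(delta):
    # Every node must coordinate before touching the shared record;
    # that per-access coordination is the cost being discussed.
    dlm.acquire("acct-42")
    try:
        shared["acct-42"] += delta
    finally:
        dlm.release("acct-42")

nodes = [threading.Thread(target=node_update, args=(10,))
         for _ in range(4)]
for t in nodes:
    t.start()
for t in nodes:
    t.join()
print(shared["acct-42"])    # 140 - correct, but every update paid for
                            # a round of lock traffic first

The question is how much of that lock traffic a system can sustain at
mainframe-class I/O rates.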
VMS is currently close to being competitive with S/390 in the area of
large-scale data access, but it could take the lead - and in the process
widen the gap with the Unix horde instead of allowing them to catch up.
VMS's basic cluster architecture is, as you note, just hunky-dory, but
other aspects are not.

The file system was technically state-of-the-art when it was conceived
back in 1976 (more like 1973 if you count its predecessor) and did a
great job of migrating to the cluster model in 1983, but it is no longer
current: file systems that use a transaction log to protect their
meta-data offer significantly higher performance (especially in large
systems - though a single-disk system with the log on the same disk is
likely not the VMS design center), and a write-sharable distributed
cache is *long* overdue. (A sketch of what meta-data logging looks like
appears at the end of this message.)

Exactly the same comments apply to RMS - even the dates, if you apply a
liberal interpretation (Ed Marison at least *started* conceiving it
around 1973, during work on Mumps) - except that in things like indexed
files the meta-data management covers virtually the entire file, so the
potential performance benefits of logging and write-back caching are
even more dramatic (and, incidentally, provide a code base that could
support equally high-performance distributed database and object
management as well, should anyone in or out of Compaq have an interest
in developing such things). This is of course my pet interest, and I
wouldn't be making such a big deal of it (here, anyway) if the IBM paper
hadn't: if *they* consider this area one of their prime differentiators,
having VMS able to beat them at their own game may have real
significance. Other areas I'm much less familiar with may be subject to
comparable improvements as well.

You can't make such changes without changing the on-disk structures,
which means you don't get co-existence of the new versions in existing
clusters (not without a ridiculous amount of work and a good deal of
compromise, anyway: the only possibility remotely worth considering
would be to allow the new systems to support two entirely separate sets
of devices, one of which they could share with the old systems).
Conversion of existing data is certainly feasible, though, as long as
you maintain sufficient functional compatibility - and binary
compatibility could well be feasible at the application level.

Hence the idea of branching off a new VMS that could shed the detritus
systems pick up over a lengthy lifetime without sacrificing the things
that make VMS great - and in the process give a nod in the directions
the Unix crowd favors as well.
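[Sketch referenced above: a minimal, hypothetical illustration of
protecting file-system meta-data with a transaction log, i.e.
write-ahead logging. The structures and names are invented for the
example and are not actual VMS (or any vendor's) code. The rule:
append the intended change to a sequential log first, defer the
scattered in-place updates, and replay the log after a crash.]

import json

class JournaledMetadata:
    def __init__(self):
        self.log = []       # stands in for the on-disk sequential log
        self.meta = {}      # stands in for scattered in-place structures

    def update(self, name, attrs):
        # 1. Log the intent first - one cheap sequential append.
        self.log.append(json.dumps({"name": name, "attrs": attrs}))
        # 2. The in-place update can now be deferred and batched,
        #    which is where the performance win comes from.
        self.meta[name] = attrs

    def recover(self):
        # After a crash, replaying the log rebuilds consistent
        # meta-data - no full-volume consistency scan required.
        rebuilt = {}
        for entry in self.log:
            rec = json.loads(entry)
            rebuilt[rec["name"]] = rec["attrs"]
        self.meta = rebuilt

fs = JournaledMetadata()
fs.update("NOTES.TXT;1", {"size_blocks": 12, "owner": "TODD"})
fs.meta.clear()             # simulate losing the in-place structures
fs.recover()                # log replay restores them
print(fs.meta)              # consistent again, straight from the log

The write-back-caching argument is the same idea one level up: once the
log protects the meta-data, the in-place structures can live in a
(possibly distributed) cache and be written back at leisure.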