From - Mon Sep 15 08:01:54 1997
Newsgroups: comp.os.vms
From: rdw@threel-nospam.co.uk (Rod Widdowson)
Subject: Re: OpenVMS Disk Services for Windows NT, 1.0
Sender: news@threel.co.uk (Wnn System Account)
Message-ID: <341cf02d.312513598@news.threel.co.uk>
Date: Mon, 15 Sep 1997 08:51:00 GMT
X-Nntp-Posting-Host: cassius.threel.co.uk
Reply-To: rdw@threel-nspam.co.uk
Content-Type: text/plain; charset=us-ascii
References: <009BA2FB.3A1027F1.15@maxwell.ph.kcl.ac.uk> <5vbcp6$g3h@bgtnsc03.worldnet.att.net>
Organization: 3L Ltd
X-Newsreader: Forte Agent 1.5/32.452
NNTP-Posting-Host: delta.threel.co.uk
Lines: 59

"Mark E. Levy" wrote:

>Nigel Arnot wrote:
>
>> I'm still puzzled about this recommendation for no more than six NT systems.
>> Does anyone know enough about the architecture to say why it's inadvisable
>> to have 100 NT clients, 100 virtual disks, and 1:1 relationships? Seems
>> to me that if that's a problem, then the quality of programming of VMS
>> products is declining alarmingly. There should be no interactions down to the
>> real hard disk driver, and everything running in parallel.
>
>I was told by a reliable source that 6 was all they could qualify at
>this time, and the limit would probably be increased in the future.
>

I didn't write any of the code (but, as they say, I know someone who did). However, I can comment that the code quality certainly hasn't dropped; the team that did this is *good*. Indeed, as you point out, each disk service can run independently.
Do bear in mind, however, that at the very bottom of the IO stack you will always need to serialise access to the devices (all at IOLOCK8), and all your completions will occur on the primary CPU. Of course, if you have a 5-member cluster you can share out the load, so your 100 disks can be 20 per processor.

I'll leave it to others to comment on the precise limit and the reasons (Mark's comment seems very sensible). I think it's fair to say that the design center is 10's of NT servers (potentially each with thousands of PC clients) - in fact, the sort of place that Wolfpacks will eventually turn up.

Why wouldn't you want to have 100 "direct" clients? YMMV, but I would imagine that the management cost of 100 disks on 100 clients is going to be pretty high. If you need to share files between the NT clients you will have to share out the hundred disks, and so the cost starts going up. The DEC party line is that in that sort of position you would want to bite off the management cost of PATHWORKS and give everyone access to shares (but don't forget the beefy interconnect to allow PATHWORKS to hurl its dirty caches around the cluster).

For better or for worse, NTDS is aimed at a certain type of three-tier deployment, where the data and its management, security, &c. are centralised but the processing requirements are farmed out as close to the desktop as is reasonable (to help scalability).

I have used such a three-tier system as Dougie described - VMS serving the disk via FDDI to an NT server and thence via 10base2 to my desktop. Subjectively the performance was the same as if the disk had been local to the server (and certainly faster than PW), plus I had the advantage of getting my files "hot" backed up: the NT backup was just a part of the VMS backup, so the operations staff didn't need to do anything differently.

FWIW
rod
Speaking for myself, not 3L.
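[A toy sketch of the two constraints Rod describes - serialised access to each physical device, and virtual disks dealt out across cluster members so the completion load is shared. All names here are made up for illustration; this is not the NTDS implementation.]

```python
import threading

class PhysicalDevice:
    """Stand-in for a real device: all IO through it is serialised,
    much as VMS serialises at IOLOCK8 at the bottom of the IO stack."""
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()

    def do_io(self, request):
        with self.lock:  # one request at a time per physical device
            return f"{self.name}: {request}"

def assign_disks(n_disks, cluster_members):
    """Deal n_disks virtual disks round-robin over the cluster members,
    so no single member's primary CPU handles all the completions."""
    assignment = {m: [] for m in cluster_members}
    for disk in range(n_disks):
        member = cluster_members[disk % len(cluster_members)]
        assignment[member].append(disk)
    return assignment

members = ["VMS1", "VMS2", "VMS3", "VMS4", "VMS5"]
plan = assign_disks(100, members)
# 100 disks over a 5-member cluster -> 20 disks per member
print({m: len(d) for m, d in plan.items()})
```

The per-device lock is why 100 independent services don't give you 100-way parallelism at the bottom of the stack, and the round-robin is the "share out the load" arithmetic above.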