INFO-VAX Sun, 06 Jul 2008 Volume 2008 : Issue 374

Contents:
  Re: HELP text error for ANALYZE/MEDIA
  Re: Question about NTP.CONF master and local-master commands
  Re: VMS SAN Primer

----------------------------------------------------------------------

Date: Sat, 05 Jul 2008 17:22:46 -0500
From: David J Dachtera
Subject: Re: HELP text error for ANALYZE/MEDIA
Message-ID: <486FF436.935596AA@spam.comcast.net>

gerry77@no.spam.mail.com wrote:
>
> Hello everyone,
>
> I've just discovered an error in the help text for the ANALYZE/MEDIA command
> (the bad block locator utility) on both Alpha and Itanium V8.3 releases.
>
> There are two almost identical copies of the same text, except for some slight
> changes: different case for the BAD acronym and the sentence "This manual is
> posted with other archived manuals on the OpenVMS Documentation website" added
> to the second copy.
>
> I think it's not a serious error, but it makes it difficult to search for
> command subtopics because ANALYZE/MEDIA is considered ambiguous.
>
> I'm a Hobbyist, so I do not have any support contract. How can I signal this
> problem to HP Engineering to have it corrected?
>
> Thanks,
> G. (not a native English speaker, sorry for any errors)

Note also that ANALYZE/MEDIA is, for the most part, obsolete. Only very old
media do not provide for bad block replacement local to the drive. For modern
SCSI and FC disks, low-level format is the preferred way to verify/refresh the
device's bad block list.

D.J.D.

------------------------------

Date: Sat, 5 Jul 2008 13:44:25 -0700 (PDT)
From: AEF
Subject: Re: Question about NTP.CONF master and local-master commands
Message-ID: <3ec340fb-9aad-4166-800b-6eb39588368c@z66g2000hsc.googlegroups.com>

On Jul 3, 10:37 pm, John Santos wrote:
> AEF wrote:
> > Hello,
> >
> > [Also posted to vmsnet.networks.tcp-ip.tcpware.]
> >
> > What is the point of the local-master and master commands? Can't you
> > just use the server command to point nodes to the "local-master"?
> >
> > Suppose I have a fleet of VAX systems in London and another in NYC and
> > a non-VMS NTP server in our NYC office. Can't I just have each London
> > VAX point to a particular London VAX and have that particular London
> > VAX point to a set of VAX systems in NYC or even our NTP server (which
> > is NOT a VAX system)? And then if the WAN goes down, all the London
> > VAX systems would then sync off the "local-master" London VAX?
> >
> > local-master in London (node A):
> >    server NYC VAX system NY1 prefer
> >    server NYC VAX system NY2
> >
> > local-master in London (node B):
> >    server NYC VAX (or NYC NTP source)
> >
> > Other London VAX systems:
> >    server node-A prefer
> >    server node-B
> >
> > NY1 and NY2:
> >    server non-VAX NTP-system
> >
> > Wouldn't this work? Why would I need local-master or master commands
> > anywhere, and if so, where and how?
> >
> > Sorry if this is a stupid question, but I've been looking on the net
> > and in the FM and still don't see the point.
> >
> > BTW, I'm running TCPware 5.3-3.
> >
> > Thanks!
> >
> > AEF

Hi John! Thanks for your speedy reply. I'm hoping to finish this by Sunday
afternoon.

> If I understand NTP correctly, you use "local-master" when you
> want that system to use its own internal clock as the master
> clock for all the systems that use it as a time server, when it
> can't talk to an external time server. (If the WAN connection
> dies, use this system as the master until it comes back.)
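So, if I'm reading that right, a minimal NTP.CONF for such a node would be
something roughly like this (the upstream host name and the stratum value
below are only placeholders):

   ; sync from the upstream time server while it is reachable
   server upstream-ntp-host prefer
   ; if it becomes unreachable, act as local master off our own clock
   local-master 10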
> You would use "master" for a server that doesn't have any external
> access (an isolated network), so it can't synchronize with an
> external timeserver, or for a server that had an internal atomic
> clock or had its system clock set by an atomic clock or WWV
> receiver, or could otherwise be regarded as an authoritative
> time server.

OK, I eventually figured this out. I want to use local-master for three nodes
in NYC and three in London.

> If you are syncing with an external time service (from your
> ISP or NIST or someplace), you wouldn't use either, unless
> you are worried about losing WAN connectivity.

We have an NTP server in our data center and I want to sync off of that. But
it may become unreachable due to a crash, a network problem, etc. So I want to
use local-master as described above.

> You could set up 2 or 3 London VAXes as local servers, peering
> with each other and getting served by your NYC systems. Then
> if one of them is down, the rest of the London VAXes can sync
> from the other two, and the 3 London servers, syncing with each
> other, will converge on the NYC time more quickly, averaging
> out individual clock and latency differences.

Can you explain the point of the peer command? I still don't get that.

> In NYC, have 2 or 3 VAXes sync with the non-VMS server (peering
> with each other), and serving the rest of the NYC VAXes (and
> the London servers). That way, if one of the VAXes is down, or
> the NYC NTP server is down, the other systems still have a reasonably
> accurate time source.

But why do I need the peer commands as opposed to server commands?

[...]

> The 3 NYC server VAXes would all have
>
>    server NYC-NTP-SERVER
>    peer NYCVAX2
>    peer NYCVAX3
>
> (or NYCVAX1 & NYCVAX3 for NYCVAX2, or NYCVAX1 and NYCVAX2 for
> NYCVAX3)
>
> The 3 London server VAXes would have
>
>    server NYCVAX1
>    server NYCVAX2
>    server NYCVAX3
>    peer LondonVAX2
>    peer LondonVAX3
>
> (substituting appropriately for LondonVAX2 and 3.)

Looks fine, but I still don't see why you need peer instead of server or why
it's better. Can you please explain it?

Also, in the example in the TCPware Mgmt. manual, they have peer commands that
"peer" with themselves! Example:

   ; NTP configuration on 192.168.67.3
   local-master 12
   server 192.168.67.1
   server 192.168.34.1
   server 192.168.34.2
   peer 192.168.67.3

What's the point of that, or is it a typo on TCPware's part?

> All the other NYC VAXes would have:
>
>    server NYCVAX1
>    server NYCVAX2
>    server NYCVAX3
>
> and all the other London VAXes would have:
>
>    server LondonVAX1
>    server LondonVAX2
>    server LondonVAX3
>
> in their respective NTP.CONF files.

Yes, this is what I was going to do for them. But... would it make sense to
add "server NYC-NTP-SERVER" to all the VAX systems in case the three
local-masters in NYC go down or are taken down, or am I being too paranoid, or
is that a bad idea for some other reason? Or should I just add "peer

> --
> John Santos
> Evans Griffiths & Hart, Inc.
> 781-861-0670 ext 539

AEF
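Pulling the pieces of this thread together, a complete NTP.CONF for one of the
three London server VAXes might end up looking roughly like this (the node
names are placeholders and the local-master stratum is arbitrary):

   ; hypothetical NTP.CONF for LondonVAX1
   local-master 12
   server NYCVAX1 prefer
   server NYCVAX2
   server NYCVAX3
   peer LondonVAX2
   peer LondonVAX3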
------------------------------

Date: Sat, 05 Jul 2008 21:31:09 -0500
From: Michael Austin
Subject: Re: VMS SAN Primer
Message-ID:

David J Dachtera wrote:
> JF Mezei wrote:
>> Paul Lentz wrote:
>>
>>> I sorta knew there couldn't be much difference...
>>
>> But wait a minute, don't SANs use very different terminology? They talk
>> about switches, fabric, etc.
>
> True. However, the terminology has become very confused (confusing).
>
> When folks say "SAN", they really mean "storage array".
>
> When folks say "fibre channel", they really mean "storage area network"
> (SAN - as in the interconnecting infrastructure).
>
> ...and "a separate 'fabric'" equates roughly to a VSAN (Virtual Storage
> Area Network), a corollary to a VLAN. VSANs take "zoning" to another
> level, as it were. On CI or over Ethernet, the VMS equivalent would be
> the cluster ID number.
>
>> And don't SANs have many, many capabilities such as RAID, abilities to
>> combine physical disks into a single drive, or partition a single drive
>> into multiple drives?
>
> Yes. Think: "HSG" or SWXCR.
>
>> Do SANs provide any concept of shared locking?
>
> Does a CI provide such a concept? ...shared SCSI...?
>
>> Can a node request that
>> a block on a drive be locked for writes by other nodes?
>
> Within the confines of an operating "domain" such as a VMS cluster,
> certainly. However, it requires a distributed lock manager.
>
>> Or is it pretty much a total free-for-all with SANs just blindly
>> executing requests on any drive from any node?
>
> Insofar as "drive" and "node" are virtual concepts, yes. However,
> there is no "magic" which enables sharing. Read on...
>
>> (I would assume that SANs would have the ability to provide "views",
>> which means that a particular node would have a defined list of disks
>> it can access?)
>
> Yes and no. "LUNs" (remember: FC is just a way to carry the SCSI
> protocol over a light "beam") are "mapped" to specific fibre adapters
> ("FA" for short, in the parlance) on the storage array, and "masked" for
> access by specific HBAs (by WWID).
>
>> Or can it go and peek at disk drives that have been assigned to
>> other nodes?
>
> Zoning, mapping and masking restrict "visibility" between specific HBAs
> and LUNs.
>
>> Seems to me that there would be a large number of management issues to
>> deal with that would not be needed in the case of a VMS cluster. A VMS
>> cluster offers a single security concept, shared locking, etc. When you
>> have different separate nodes accessing drives in a SAN, those are no
>> longer applicable.
>
> Well, you're confusing SANs with MSCP-served storage.
>
> The best way to think of a storage array is as if a tremendously
> talented SWXCR were housed in a rack/frame with a fairly large number of
> physical drives. The physical drives are grouped together by the array
> manager (a person, that is) into virtual devices. Think: RAIDsets,
> mirrored RAIDsets (5+1 for example) and mirrored stripe sets. Quite
> literally, a superset of what's available on an HSJ, HSZ or HSG. Those
> virtual devices are then presented to specific hosts via zoning,
> mapping and masking.
>
> ...however, it is just storage. A LUN. It's still up to the host
> operating environment to manage that storage. Such management is NOT the
> array's job in a FCSF/SAN any more than it would be in an HSJ on a
> CI-based storage array. The array simply presents storage. Each LUN
> appears to the host as if it were a separate "SCSI" device. A "LUN" may
> occupy a portion of each disk in a disk group (in EVA parlance), for
> example. VMS, Windows, UX, AIX, etc. only "sees" a SCSI device over FC
> ($1$DGAnnnnn:), while the actual storage presented may consist of a
> RAIDset or a stripeset, with or without mirroring (on the array, not
> HBVS).
>
> There's no "magic" in a FCSF SAN which can allow incompatible operating
> environments to either co-exist or share storage devices. The
> limitations of each operating environment transcend the storage domain,
> regardless.
>
> Clear as mud, eh?
>
> Thought so...
>
> D.J.D.
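Maybe a concrete example clears the mud a little: once the array presents a
LUN to a VMS host, it is handled like any other disk. A hypothetical sequence
(the device name and volume label below are made up):

   $! The presented LUN shows up as an ordinary FC disk device
   $ SHOW DEVICE $1$DGA101:
   $! Initialize and mount it just like a local SCSI disk
   $ INITIALIZE $1$DGA101: DATA01
   $ MOUNT/SYSTEM $1$DGA101: DATA01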
I took a job a few years ago doing sysadmin work on OpenVMS on a SAN. Looking
at the SAN - it is essentially a smart Star Coupler. It directs traffic to
only those arrays and "devices (aka LUNs)" you have specified. Once the
pointers are set - it is very, very easy...

The SAN switches make up a fabric - which is nothing more than a fiber
network. Your HBAs can attach to the same fabric - or to redundant fabrics,
Blue/Red for example.

This is an over-simplification, but hopefully you get the idea...

------------------------------

End of INFO-VAX 2008.374
************************