INFO-VAX Tue, 01 Jan 2008   Volume 2008 : Issue 1

Contents:
  Re: FLIST Installation and Setup
  I64 VMS 8.3/DECwindows 1.6 problem with DECW$SERVER_0 processing crashing
  OT: Have A Great '08!
  Re: Request for improvement to MAIL
  Re: Request for improvement to MAIL
  Re: Samba Gains Legal Access to Microsoft Network File Protocols
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster
  Re: Unsupported three-architecture cluster

----------------------------------------------------------------------

Date: Mon, 31 Dec 2007 13:57:20 -0600
From: David J Dachtera
Subject: Re: FLIST Installation and Setup
Message-ID: <477949A0.D428CE17@spam.comcast.net>

Chuck Moore wrote:
>
> I've just downloaded the FLIST.OBJ to my OpenVMS 7.3 system. I'm not
> sure what to do next in order to get it running/useable. I can't find
> any READMEs or INSTALL notes through Google. Anyone have any
> suggestions ?

VMS .OBJ files are almost impossible to transfer via FTP or whatever
without some kind of corruption. They're sequential/variable files
containing binary data. They can't be transferred as either ASCII or
BINARY. It's generally best to have them contained in a .ZIP file or
some other container, so that when the archive is unpacked on VMS the
original file appears in its original form.

...and yes, .OBJ ("object") files themselves are not executable until
they are LINKed into an executable image (.EXE), as other posters have
pointed out.

David J Dachtera
DJE Systems

------------------------------

Date: Mon, 31 Dec 2007 17:11:23 -0800 (PST)
From: mjjerabek
Subject: I64 VMS 8.3/DECwindows 1.6 problem with DECW$SERVER_0 processing crashing
Message-ID: <1503a949-aa96-4fbd-9ceb-c277361f5d06@i7g2000prf.googlegroups.com>

The last time we had a DECwindows problem, posting it here helped as
much as calling HP technical support, so here's the scoop ...

We have a large number of rx2620's and a few rx2660's installed at
customer sites, and all have the same problem. Every time a user logs
into the DECwindows console, the x-server process grows in page count.
When the user logs out you would expect the memory page count to go
down, but it does not. The next time anyone logs into the DECwindows
console, the page count again increases. You do this enough times, and
the x-server crashes.

We were concerned that any modifications we have made to the
configuration of VMS and DECwindows might be the cause of the problem.
We modify the logical decw$system_defaults, update the quotas for
multiheaded/xinerama displays, and modify both decw$private setup
files. We decided to do another test to eliminate this possibility. I
took a brand new rx2660 and installed VMS V8.3 and the VMS patch
UPDATE5 onto a blank disk. I selected all of the defaults for the VMS
install. I then repeated the test to see if we received the same
results, and we did. The x-server continued to grow each time we
repeated the test.

One attribute of this failure is that restarting the x-server after it
fails seldom works. After the x-server fails, on restart it just fails
again, leaving an orphaned LOGINOUT.EXE process. Maybe one in ten tries
at restarting DECwindows works, but when it works the initial page
count for the process is much larger than what it would normally be
after a VMS reboot and DECwindows restart.

The test I performed is:

  * Login to the system
  * Create a DECterm
  * show system/proc=DECW* and record the statistics from the
    DECW$SERVER_0 process
  * Without logging out of the DECterm, quit the CDE or old DECwindows
    session
  * Do it again and compare the results
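(A minimal DCL sketch, not part of the original report, that captures
the relevant counters from a suitably privileged session; run it before
and after each login/logout cycle and compare the numbers. The F$GETJPI
item codes are standard; everything else is illustrative:)

$! Snapshot DECW$SERVER_0 memory counts for comparison between logins.
$ ctx = ""
$ junk = F$CONTEXT("PROCESS", ctx, "PRCNAM", "DECW$SERVER_0", "EQL")
$ pid  = F$PID(ctx)
$ IF pid .EQS. "" THEN WRITE SYS$OUTPUT "DECW$SERVER_0 not found"
$ IF pid .EQS. "" THEN EXIT
$ WRITE SYS$OUTPUT "Working set pages: ", F$GETJPI(pid,"PPGCNT") + F$GETJPI(pid,"GPGCNT")
$ WRITE SYS$OUTPUT "Peak virtual size: ", F$GETJPI(pid,"VIRTPEAK")
$ WRITE SYS$OUTPUT "Page faults:       ", F$GETJPI(pid,"PAGEFLTS")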
This test is very repeatable, and anyone should be able to generate a
system like I did and get the same results. The amount the x-server
grows varies between 200 and 500 pages per login/logout. The rate of
increase is not linear.

Also, please note that so far the AXP systems we field do not have this
problem. We field VMS 7.3-2 and DECwindows 1.26 on our AXP systems.

I truthfully do not know how long this problem has existed, but we
recently started to notice the crashes at a customer site. At this
site, they religiously log in at the beginning of their shift and log
out at the end. They have 3 shifts, so the problem might just show up
at their site quicker than at other customer sites. We have been
dialing into our other I64 customer sites, and so far it looks like all
of these customers have the problem to some degree.

Hopefully someone out there is having the same problem but has not
noticed it yet. Your support in prodding HP into a fix would be
appreciated. Maybe someone out there has seen this and has already
found a fix. Any information you can provide me would also be
appreciated. All help will be appreciated, and your karma point count
will increase.

What a way to end the year. Happy New Year!

mjjerabek
mjjerabek@gnail.com

------------------------------

Date: Mon, 31 Dec 2007 23:10:41 -0600
From: David J Dachtera
Subject: OT: Have A Great '08!
Message-ID: <4779CB50.2393EEC9@spam.comcast.net>

Just a quick note to wish everyone a Healthy, Prosperous and Peaceful
New Year!

David J Dachtera
DJE Systems
http://www.djesys.com/

------------------------------

Date: Mon, 31 Dec 2007 13:44:35 -0600
From: David J Dachtera
Subject: Re: Request for improvement to MAIL
Message-ID: <477946A3.C8411F93@spam.comcast.net>

JF Mezei wrote:
> [snip]

Well, the callable MAIL interface IS fully documented. Someone with a
lot of free time and no financial responsibilities could probably hack
up what you want in a couple of weeks or so, assuming "immersion".

That said, ...

> [snip]
> -upon encountering any .EXE attachement, VMS should automatically invoke
> a translator such as FX32! to automatically execute that file under the
> SYSTEM account on VMS. The SYSTEM account is necesary to ensure that the
> .EXE succeeds in doing what it is supposed to do (such as deposit files
> in system directories, modifying system files etc etc). When you
> consider the volume of mails containing such files, VMS is quite late in
> implementing automated suppport for them.

This is by far the greatest vulnerability in WhineBloze. Why would you
want VMS machines to make the same mistakes?

David J Dachtera
DJE Systems

------------------------------

Date: Tue, 01 Jan 2008 01:38:36 GMT
From: Malcolm Dunnett
Subject: Re: Request for improvement to MAIL
Message-ID:

JF Mezei wrote:
>
> On thunderbird, just clicking a message (so you can delete it) causes it
> to be opened and interpreted.

Not if you turn off the message pane (press F8) - same as turning off
message preview in Outaluck.
------------------------------

Date: Mon, 31 Dec 2007 14:09:47 -0500
From: JF Mezei
Subject: Re: Samba Gains Legal Access to Microsoft Network File Protocols
Message-ID: <47793f18$0$16242$c3e8da3@news.astraweb.com>

Main, Kerry wrote:
> If you had actually read the post, you would have noticed my comment was
> to JF's comments about Cerner.

A successful OS vendor will have no problems bragging about all the
wins it is making in attracting new customers, applications and ISVs.
A successful OS vendor wants to grow and will have no problems
marketing their OS.

A successful OS vendor won't go to a key ISV and tell them to drop
support for their platform. Palmer did that with the SWIFT software
when he told SWIFT that VMS had no future and that they should build
their next generation software on something other than VMS. DEC really
did expect to retain those customers by selling Windows
servers/support. They haven't.

Did Hurd/Livermore/Stallard tell Cerner exactly the same thing as
Palmer had told SWIFT? (That would bring the "we'll continue the plan
of record" line to an incredible level of compliance with the plan of
record set out by Palmer.)

With ST400 (SWIFT), VMS had a toehold in big blue banks that would have
otherwise never considered VMS. That's gone now. At the time of that
loss, VMS still had the hospital and military business as well as some
portions of telecom, so the loss of banking wasn't mortal to VMS. But
when you are down to VMS's niche being restricted to the hospital
business, some remnant of older military contracts, and some leftover
telecom business not yet ported to unix, then losing the hospital
business should be considered pretty serious.

You may have some perfectly legitimate explanation. But as long as you
or anyone else are prohibited from divulging that explanation, you need
to accept the fact that HP's actions are seen as being against VMS,
because without that secret information, that is the only way we, the
people who still care about VMS, can interpret HP's moves.

------------------------------

Date: Mon, 31 Dec 2007 18:58:54 GMT
From: "John E. Malmberg"
Subject: Re: Unsupported three-architecture cluster
Message-ID:

Rich Jordan wrote:
> First, thanks HP for not properly supporting this as you should have.
> Makes the current situation just delightful.
> Customer has a two-node LAN based cluster, Alpha and VAX. No shared
> storage, all disks are local to one or the other. The Alpha is the
> master node in voting and holds the authoritative SYSUAF, rightslist,
> queue, etc shared files. The VAX keeps a local copy to boot
> standalone or if coming up first to the point of waiting to form a
> cluster.
>
> They are adding an itanium. They are not removing the VAX. I know
> its not supported but I also know it has worked in testing. Still LAN
> based, still no shared storage.
>
> Given the huge difference in account quota recommendations between the
> three architectures, and the significant number of shared accounts
> (user, system, tcpip, webserver, etc, most of which are shared between
> either two or three nodes) will they be best served by leaving the UAF
> account quotas at VAX levels and relying on the SYSGEN PQL settings
> for the two newer nodes? That would severely restrict the ability to
> customize account settings on the newer systems (not everyone on the
> Alpha/itanium needs to run Java or Mozilla).
The main issue to consider is whether there are any applications that
will cause problems on a specific architecture if they are given higher
quotas than they currently need. In some cases, database and backup
programs will adapt to take advantage of whatever quotas are available,
which means the ones running on a VAX may change behavior.

One thing that you probably need to ensure is that the sysgen
channelcnt parameter is higher than the fillm account quota on any
account. Some software will behave badly if it hits channelcnt before
reaching its fillm quota.

Working set extent is tied to the sysgen parameter wsmax. You will get
the lesser of the two. For most VMS systems that I have managed,
setting wsextent to wsmax has been the norm. Setting wsextent higher
than wsmax should not cause problems. In general, I have seen
essentially no effect from changing wsquota, as long as wsextent and
wsmax are sufficiently tuned.

pagefilequota is one that may be greatly different between systems, so
the PQL parameters may be of use there.

I have seen the most drastic results from having freelim and freegoal
too small, on systems running VMS 5.4 and earlier.

You should probably check to see how resources are being used on the
systems now. If none of them are being limited by their current quotas,
it is likely that they will not change if those quotas are increased.

-John
wb8tyw@qsl.network
Personal Opinion Only

------------------------------

Date: Mon, 31 Dec 2007 11:00:38 -0800 (PST)
From: Rich Jordan
Subject: Re: Unsupported three-architecture cluster
Message-ID: <75cc8ea7-d098-4ffc-a386-cf644bcc33a0@s12g2000prg.googlegroups.com>

On Dec 31, 12:40 pm, JF Mezei wrote:
> Rich Jordan wrote:
> > Given the huge difference in account quota recommendations between the
> > three architectures, and the significant number of shared accounts
> > (user, system, tcpip, webserver, etc, most of which are shared between
>
> How much software runs on the vax ?
>
> In my case, I just upped the SYSUAF to match the alpha requirements. In
> the end, the big changes are with the pgflquota which alpha needs
sagan> billions and billions of. On the VAX, you can use
> the sysgen WSMAX to limit working sets to a reasonable limit.
>
> If you don't have much software running on it, it may be sufficient, and
> just lest the process/memory manager decide how much working set each
> process really deserves.
>
> Another option would be to have an architecture specific SYSUAF during
> startup that includes only the necessary accounts with that
> architecture's quota, and once startup is complete, you then redirect
> SYSUAF to the shared one where all the usernames are defined. SOftware
> that is started at boot time would get "OK" quotas, but processes
> started after boot would get the exagerated quotas that would be allow
> one to run on any platform.
>
> Another approach is to look at the actual startup scripts for each
> layered product. Many would provide ways to specify quotas
> because in the end, they dur a RUN/DETACHED and provide the quotas in
> that command line.

JF, it's not the startups and processes for same (like TCPIP) I'm
concerned about; it's the actual interactive logins. The users run some
critical (but old) SMG-based apps on the VAX. They are not intensive
(well, they can be somewhat I/O intensive). I don't want those users
logging on with itanium+java/mozilla level quotas if that can be
avoided.

Good point about the time the SYSUAF/rightslist logicals are assigned
though. The current code does it very early.
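(A rough DCL sketch of that startup-time switch, assuming the shared
copies live on an Alpha-served volume; the device, directory and file
names here are illustrative, not the customer's actual ones:)

$! Early in SYSTARTUP_VMS.COM: use a node-local UAF with modest quotas
$! so boot-time and layered-product processes get sane settings.
$ DEFINE/SYSTEM/EXECUTIVE_MODE SYSUAF     SYS$SPECIFIC:[SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXECUTIVE_MODE RIGHTSLIST SYS$SPECIFIC:[SYSEXE]RIGHTSLIST.DAT
$!
$!   ... start TCPIP, queues and layered products here ...
$!
$! Last step of startup: switch to the cluster-common copies so that
$! interactive logins authenticate against the single shared SYSUAF.
$ DEFINE/SYSTEM/EXECUTIVE_MODE SYSUAF     ALPHA$DKA100:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXECUTIVE_MODE RIGHTSLIST ALPHA$DKA100:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT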
Rich

------------------------------

Date: Mon, 31 Dec 2007 14:18:07 -0500
From: JF Mezei
Subject: Re: Unsupported three-architecture cluster
Message-ID: <4779410b$0$16190$c3e8da3@news.astraweb.com>

Rich Jordan wrote:
> its not the startups and processes for same (like TCPIP) I'm
> concerned about; its actual interactive logins. The users run some
> critical (but old) SMG based apps on the VAX. They are not intensive
> (well, they can be somewhat I/O intensive). I don't want those users
> logging on with itanium+java/mozilla level quotas if that can be
> avoided.

If you limit wsmax on the vax, giving users extraordinary SYSUAF quotas
may not be so disastrous. Remember that the process/memory manager will
automatically limit working sets to whatever is available in memory. If
the users run SMG applications, those apps may not even be aware that
they have quotas 10 times greater than necessary, and thus not abuse
the system at all.

The only time quotas become important is with applications such as
Mozilla that have serious memory leaks and just keep on growing and
growing and growing.

------------------------------

Date: Mon, 31 Dec 2007 11:28:51 -0800 (PST)
From: Bob Gezelter
Subject: Re: Unsupported three-architecture cluster
Message-ID:

On Dec 31, 1:18 pm, Rich Jordan wrote:
> First, thanks HP for not properly supporting this as you should have.
> Makes the current situation just delightful.
>
> Customer has a two-node LAN based cluster, Alpha and VAX. No shared
> storage, all disks are local to one or the other. The Alpha is the
> master node in voting and holds the authoritative SYSUAF, rightslist,
> queue, etc shared files. The VAX keeps a local copy to boot
> standalone or if coming up first to the point of waiting to form a
> cluster.
>
> They are adding an itanium. They are not removing the VAX. I know
> its not supported but I also know it has worked in testing. Still LAN
> based, still no shared storage.
>
> The VAX is running the Process Software TCPware stack; the Alpha runs
> the HP stack. Currently the VAX is at VMS V7.3, the Alpha at V8.2,
> and the Itanium at V8.2-1. That is unlikely to change before summer.
>
> I plan on keeping the Alpha as the 'master' node and keeper of the
> shared files simply because of its excellent track record (actually
> the old VAX has the best uptime but its pretty old/slow (3100-30).
> Performance of the Itanium may make us move to it down the road but I
> want it to run for a while before making it top node.
>
> I've been refreshing on the cluster manual. Since we already have a
> cluster most of the code needed to set up 'master' files and such is
> in place, just needing tweaking for the new node. I'm also working
> out the PQL parameters for each node.
>
> Given the huge difference in account quota recommendations between the
> three architectures, and the significant number of shared accounts
> (user, system, tcpip, webserver, etc, most of which are shared between
> either two or three nodes) will they be best served by leaving the UAF
> account quotas at VAX levels and relying on the SYSGEN PQL settings
> for the two newer nodes? That would severely restrict the ability to
> customize account settings on the newer systems (not everyone on the
> Alpha/itanium needs to run Java or Mozilla).
>
> VAX usage is critical but not a lot of it goes on. PQL parameters
> can't set MAX settings (which would be awfully useful in this
> situation) but perhaps it would be better to use the Alpha as a
> baseline for UAF quotas anyway.
> The accounts that do need to run Java
> and Mozilla are not the ones generally used on the VAX (and we can
> enforce that if need be) and baseline Alpha quotas do not need to be
> overly large, so perhaps won't cause a problem for the VAX. That at
> least gives us some leeway on the Alpha account quotas, though the
> Itanium accounts will still end up one size fits all using PQL
> parameters.
>
> Thoughts appreciated. Thoughts about not putting all three
> architectures up in the cluster noted but not helpful.
>
> Thanks
>
> Rich

Rich,

Using the PQL settings to set MINIMUM parameters is certainly useful,
as is looking at whether it is safe to raise parameters such as
CHANNELCNT on all of the systems.

For situations where accounts are used, consider the fact that while
it is normal practice for there to be one Account Name PER UIC, this
is by no means mandatory. Also note that quotas and related settings
are by Account Name, not by UIC (Protection, on the other hand IS by
UIC).

I would seriously take a look at using different account names on the
different nodes where the situations are different. I would consider
creating a separate (for efficiency reasons) logical name table, not
in the normal search path, that would store this information. Then
each reference need only include a reference to the logical name using
F$TRNLNM.

Please let me know if I am not sufficiently clear.

- Bob Gezelter, http://www.rlgsc.com

------------------------------

Date: Mon, 31 Dec 2007 14:12:49 -0800 (PST)
From: Rich Jordan
Subject: Re: Unsupported three-architecture cluster
Message-ID:

On Dec 31, 1:28 pm, Bob Gezelter wrote:
> On Dec 31, 1:18 pm, Rich Jordan wrote:
>
> > First, thanks HP for not properly supporting this as you should have.
> > Makes the current situation just delightful.
>
> > Customer has a two-node LAN based cluster, Alpha and VAX. No shared
> > storage, all disks are local to one or the other. The Alpha is the
> > master node in voting and holds the authoritative SYSUAF, rightslist,
> > queue, etc shared files. The VAX keeps a local copy to boot
> > standalone or if coming up first to the point of waiting to form a
> > cluster.
>
> > They are adding an itanium. They are not removing the VAX. I know
> > its not supported but I also know it has worked in testing. Still LAN
> > based, still no shared storage.
>
> > The VAX is running the Process Software TCPware stack; the Alpha runs
> > the HP stack. Currently the VAX is at VMS V7.3, the Alpha at V8.2,
> > and the Itanium at V8.2-1. That is unlikely to change before summer.
>
> > I plan on keeping the Alpha as the 'master' node and keeper of the
> > shared files simply because of its excellent track record (actually
> > the old VAX has the best uptime but its pretty old/slow (3100-30).
> > Performance of the Itanium may make us move to it down the road but I
> > want it to run for a while before making it top node.
>
> > I've been refreshing on the cluster manual. Since we already have a
> > cluster most of the code needed to set up 'master' files and such is
> > in place, just needing tweaking for the new node. I'm also working
> > out the PQL parameters for each node.
>
> > Given the huge difference in account quota recommendations between the
> > three architectures, and the significant number of shared accounts
> > (user, system, tcpip, webserver, etc, most of which are shared between
> > either two or three nodes) will they be best served by leaving the UAF
> > account quotas at VAX levels and relying on the SYSGEN PQL settings
> > for the two newer nodes? That would severely restrict the ability to
> > customize account settings on the newer systems (not everyone on the
> > Alpha/itanium needs to run Java or Mozilla).
>
> > VAX usage is critical but not a lot of it goes on. PQL parameters
> > can't set MAX settings (which would be awfully useful in this
> > situation) but perhaps it would be better to use the Alpha as a
> > baseline for UAF quotas anyway. The accounts that do need to run Java
> > and Mozilla are not the ones generally used on the VAX (and we can
> > enforce that if need be) and baseline Alpha quotas do not need to be
> > overly large, so perhaps won't cause a problem for the VAX. That at
> > least gives us some leeway on the Alpha account quotas, though the
> > Itanium accounts will still end up one size fits all using PQL
> > parameters.
>
> > Thoughts appreciated. Thoughts about not putting all three
> > architectures up in the cluster noted but not helpful.
>
> > Thanks
>
> > Rich
>
> Rich,
>
> Using the PQL settings to set MINIMUM parameters is certainly useful,
> as is looking at whether it is safe to raise parameters such as
> CHANNELCNT on all of the systems.
>
> For situations where accounts are used, consider the fact that while
> it is normal practice for there to be one Account Name PER UIC, this
> is by no means mandatory. Also note that quotas and related settings
> are by Account Name, not by UIC (Protection, on the other hand IS by
> UIC).
>
> I would seriously take a look at using different account names on the
> different nodes where the situations are different. I would consider
> creating a separate (for efficiency reasons) logical name table, not
> in the normal search path, that would store this information. Then
> each reference need only include a reference to the logical name using
> F$TRNLNM.
>
> Please let me know if I am not sufficiently clear.
>
> - Bob Gezelter, http://www.rlgsc.com

Bob,

I'm afraid the customer will insist on a single username across the
cluster. They won't want to maintain multiple passwords in any case,
which is why we're aiming at a single SYSUAF file.

Thanks for the input!

Rich

------------------------------

Date: Tue, 01 Jan 2008 00:13:15 GMT
From: "John E. Malmberg"
Subject: Re: Unsupported three-architecture cluster
Message-ID:

Bob Gezelter wrote:
>
> For situations where accounts are used, consider the fact that while
> it is normal practice for there to be one Account Name PER UIC, this
> is by no means mandatory. Also note that quotas and related settings
> are by Account Name, not by UIC (Protection, on the other hand IS by
> UIC).

The C Runtime, and many programs ported from UNIX, effectively require
a one-to-one relationship between account names and UICs. This is
because they use the ID-to-name lookup system service to convert a
UID/UIC to a username, and then use the username to look up the account
details using the sys$getuai calls. This is because there is no
documented or supported API to return all the usernames in the SYSUAF
that have a given UIC.

If the name on the identifier for a UIC does not exist in the UAF, then
many programs written in C will fail.
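(A quick DCL illustration of that same lookup chain; F$IDENTIFIER does
the identifier/UIC translation such programs perform before calling
sys$getuai. The UIC value below is just an example:)

$! Map an identifier name to its numeric value and back again.
$ WRITE SYS$OUTPUT F$IDENTIFIER("SYSTEM", "NAME_TO_NUMBER")  ! numeric value, e.g. 65540 for UIC [1,4]
$ WRITE SYS$OUTPUT F$IDENTIFIER(65540, "NUMBER_TO_NAME")     ! "SYSTEM", if such an identifier exists
$! An empty result from NUMBER_TO_NAME means the UIC has no matching
$! identifier name in the rights database, which is the case where the
$! ported C code gives up.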
-John
wb8tyw@qsl.network
Personal Opinion Only

------------------------------

Date: Tue, 01 Jan 2008 06:40:23 GMT
From: Tad Winters
Subject: Re: Unsupported three-architecture cluster
Message-ID:

Rich Jordan wrote in
news:fb222d9d-040d-4170-8280-eed7202d25f8@v4g2000hsf.googlegroups.com:

[..snip..]
> The VAX is running the Process Software TCPware stack; the Alpha runs
> the HP stack. Currently the VAX is at VMS V7.3, the Alpha at V8.2,
> and the Itanium at V8.2-1. That is unlikely to change before summer.
>
> I plan on keeping the Alpha as the 'master' node and keeper of the
> shared files simply because of its excellent track record (actually
> the old VAX has the best uptime but its pretty old/slow (3100-30).
> Performance of the Itanium may make us move to it down the road but I
> want it to run for a while before making it top node.
>

Since this MicroVAX is so important, why not recommend moving to a
3100-95? Even if the move is only to a 3100-80, I think Nemonix
Engineering will sell upgrades for the system allowing a lot more
memory, a faster processor, a 100 megabit network interface and
UltraSCSI for disk I/O. All this might cost more than another Itanium,
but it is quite the performance leap.

[..snip..]
> Thoughts appreciated. Thoughts about not putting all three
> architectures up in the cluster noted but not helpful.
>
> Thanks
>
> Rich

------------------------------

End of INFO-VAX 2008.001
************************