WASD Hypertext Services - Scripting Environment

1 - Introduction

1.1 - Scripting Accounts
1.2 - Scripting Processes
    1.2.1 - Detached Process Scripting
        1.2.1.1 - Persona Scripting
        1.2.1.2 - Restricting Persona Scripting
        1.2.1.3 - Process Priorities
    1.2.2 - Subprocess Scripting
    1.2.3 - Script Process Default
    1.2.4 - Script Process Parse Type
    1.2.5 - Script Process Run-Down
    1.2.6 - Client Recalcitrance
1.3 - Caching Script Output
1.4 - Enabling A Script
1.5 - Script Mapping
1.6 - Script Run-Time
1.7 - Scripting Logicals
1.8 - Scripting Scratch Space
1.9 - DCL Processing of Requests
1.10 - Scripting Function Library
1.11 - Script-Requested, Server-Generated Error Responses

This document is not a general tutorial on authoring scripts, CGI or any other. A large number of references in the popular computing press covers all aspects of this technology, usually quite comprehensively. The information here is about the specifics of scripting in the WASD environment, which for CGI and ISAPI is generally very much like any other implementation. (Although there are always annoying idiosyncrasies, see 1.10 - Scripting Function Library for a partial solution to smoothing out some of these wrinkles.)

Scripts are mechanisms for creating simple HTTP services, sending data to (and sometimes receiving data from) a client, extending the capabilities of the basic HTTPd. Anything that can write to SYS$OUTPUT can be used to generate script output. A DCL procedure or an executable can be the basis for a script. Simply TYPE-ing a file can provide script output. Scripts execute in processes separate from the actual HTTP server but under its control and interacting with it.

WASD manages a script's process environment either as a dependent subprocess or independent detached process created by the HTTP server, or as a network process created using DECnet.

WASD scripting can be deployed in a number of environments. Other chapters cover the specifics of these. Don't become bewildered or put off by all these apparent options; they are basically variations on a CGI theme.

2 - CGI
3 - CGIplus
4 - Run-Time Environments
5 - CGI Callouts
6 - ISAPI
7 - DECnet & OSU
8 - Other Environments for Java, Perl, PHP, Python, Tomcat
9 - Raw TCP/IP Socket


1.1 - Scripting Accounts

It is strongly recommended to execute scripts in an account distinct from that executing the server. This minimises the risk of both unintentional and malicious interference with server operation through either Inter-Process Communication (IPC) or scripts manipulating files used by the server.

The default WASD installation creates two such accounts, with distinct UICs, usernames and default directory space. The UICs and home areas can be specified differently to the displayed defaults. Nothing should be assumed or read into the scripting account username - it's just a username.

Default Accounts
Username      UIC        Default                 Description
HTTP$SERVER   [077,001]  HT_ROOT:[HTTP$SERVER]   Server Account
HTTP$NOBODY   [076,001]  HT_ROOT:[HTTP$NOBODY]   Scripting Account

During startup the server checks for the existence of the default scripting account and automatically configures itself to use this for scripting. If it is not present it falls back to using the server account. Other account names can be used if the startup procedures are modified accordingly. The default scripting username may be overridden using the /SCRIPT=AS=<username> qualifier (see the "Technical Overview"). The default scripting account cannot be a member of the SYSTEM group and cannot have any privilege other than NETMBX and TMPMBX (Privileged User Scripting describes how to configure the server to allow this).
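For example, an alternative scripting username may be specified at server startup as follows (the account name shown is illustrative; any suitably unprivileged local account may be used):

  $ HTTPD /SCRIPT=AS=WEB_SCRIPT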

Scripting under a separate account is not available with subprocess scripting and is distinct from PERSONA scripting (even though it uses the same mechanism, see below).


1.2 - Scripting Processes

Process creation under the VMS operating system is notoriously slow and expensive. This is an inescapable overhead when scripting via child processes. An obvious strategy is to avoid, at least as much as possible, the creation of these processes. The only way to do this is to share processes between multiple scripts/requests, addressing the attendant complications of isolating potential interactions between requests. These could occur through changes made by any script to the process' environment. For VMS this involves symbol and logical name creation, and files opened at the DCL level. In reality few scripts need to make logical name changes and symbols are easily removed between uses. DCL-opened files are a little more problematic, but again, in reality most scripts doing file manipulation will be images.

A reasonable assumption is that for almost all environments scripts can quite safely share processes, with great benefit to response latency and system impact (see "Technical Overview, Performance" for a table of comparative performance). If the local environment requires absolute script isolation for some reason then this process persistence may easily be disabled, with a consequent trade-off on performance.


Zombies

The term zombie is used to describe processes persisting between uses (the reason should be obvious: they are neither "alive" (processing a request) nor "dead" (deleted) :^) Zombie processes have a finite time to exist (non-life-time?) before they are automatically purged from the system (see "Technical Overview, Configuration"). This keeps process clutter on the system to a minimum.
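The purge period is set in HTTPD$CONFIG. The directive name and value shown here are illustrative; confirm the directive against the local configuration file. The following would delete script processes idle for more than ten minutes.

  [DclZombieLifeTime]  00:10:00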


1.2.1 - Detached Process Scripting

With WASD it is possible to execute scripts in processes created completely independently of the server process itself. This offers a significant number of advantages over subprocesses, without too many disadvantages.

Creation of a detached process is slightly more expensive in terms of system resources and initial invocation response latency (particularly if extensive login procedures are required), but this quickly becomes negligible as most script processes are used multiple times for successive scripts and/or requests.


Enabling Detached Processes

By default the server uses subprocesses for scripting (also the historical method by which WASD executes scripts). The HTTPD$CONFIG directive [DclDetachProcess] when enabled has the server create (almost) completely independent detached processes to execute scripts.

  [DclDetachProcess]  enabled

When using detached processes, during shutdown the server must explicitly ensure that each scripting process is removed from the system (with subprocesses the VMS executive provides this automatically). This is performed by the server exit handler. With VMS it is possible to bypass the exit handler (using a $DELPRC or its equivalent $STOP/ID= for instance), making it possible for "orphaned" scripting processes to remain - and potentially accumulate on the system!

To address this possibility, during startup the server scans the system for candidate processes. These are identified by a terminal mailbox (SYS$COMMAND device) having an ACL with two entries: the first identifying it as a WASD HTTPd mailbox, and the second allowing access to the account under which the script executes. Such a device ACL looks like the following example.

  Device MBA335:, device type local memory mailbox, is online, record-oriented
    device, shareable, mailbox device.
 
    Error count                    0    Operations completed                  0
    Owner process                 ""    Owner UIC             [WEB,HTTP$NOBODY]
    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W
    Reference count                1    Default buffer size                2048
    Device access control list:
      (IDENTIFIER=WASD_HTTPD_80,ACCESS=NONE)
      (IDENTIFIER=[WEB,HTTP$NOBODY],ACCESS=READ+WRITE+PHYSICAL+LOGICAL)

This rights identifier is generated from the server process name and is therefore system-unique (so multiple autonomous servers will not accidentally cleanup the script processes of others), and is created during server startup if it does not already exist. For example, if the process name was "HTTPd:80" (the default for a standard service) the rights identifier name would be "WASD_HTTPD_80" (as shown in the example above).


SYLOGIN and LOGIN Procedures

Detached scripting processes are created through the full "LOGINOUT" life-cycle and execute all system and account LOGIN procedures. Although immune to the effects of most actions within these procedures, and absorbing any output generated during this phase of the process life-cycle, some consideration should be given to minimising the LOGIN procedure paths. This can noticeably reduce initial script latency on less powerful platforms.

The usual recommendations for non-interactive LOGIN procedures apply for script environments as well. Avoid interactive-only commands and reduce unnecessary interactive process environment setup. This is usually accomplished through code structures such as the following.

  $ IF F$MODE() .EQS. "INTERACTIVE"
  $ THEN
       ...
  $ ENDIF
 
  $ IF F$MODE() .NES. "INTERACTIVE" THEN EXIT

WASD scripting processes can be specifically detected using DCL tests similar to the following. This checks the mode, that standard output is a mailbox, and the process name. These are fairly reliable (but not absolutely infallible) indicators.

  $ IF F$MODE() .NES. "INTERACTIVE" .AND. -
       F$GETDVI("SYS$OUTPUT","MBX") .AND. -
       F$EXTRACT(0,4,F$PROCESS()) .EQS. "HTTP" .AND. -
       F$EXTRACT(5,1,F$PROCESS()) .EQS. ":" .AND. -
       F$ELEMENT(1,"-",F$PROCESS()) .NES. "-"
  $ THEN
  $!   WASD scripting process!
       ...
  $ ENDIF


1.2.1.1 - Persona Scripting

There are advantages in running a script under a non-server account. The most obvious of these is the security isolation it offers with respect to the rest of the Web and server environment. It also means that the server account does not need to be resourced especially for any particularly demanding application.

Persona scripting requires detached processes to be enabled and the $PERSONA system services, available with VMS V6.2 and later. Persona scripting is available under earlier versions of VAX VMS (i.e. V6.0 and V6.1) only if the PERSONA_MACRO build option was used.


Enabling Persona Scripting

The $PERSONA functionality must be explicitly enabled at server startup using the /PERSONA qualifier (see "Technical Overview, Server Account and Environment"). The ability for the server to execute scripts under any user account is a very powerful (and potentially dangerous) capability, and so it is designed so that the site administrator must explicitly and deliberately enable the functionality. Configuration files need to be rigorously protected against unauthorized modification.

A specific script or directory of scripts can be designated for execution under a specified account using the HTTPD$MAP configuration file set script=as= mapping rule. The following example illustrates the essentials.

  # one script to be executed under the account
  SET  /cgi-bin/a_big_script*  script=as=BIG_ACCOUNT
  # all scripts in this area to be executed under this account
  SET  /database-bin/*  script=as=DBACCNT


Required Access

Access to package scripting directories (e.g. HT_ROOT:[CGI-BIN]) is controlled by ACLs and possession of the rights identifier WASD_HTTP_NOBODY. If a non-server account requires access to these areas it too will need to be granted this identifier.
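For example, the identifier could be granted to a non-server scripting account (the account name here matches the hypothetical one used in the mapping examples above) using AUTHORIZE.

  $ MCR SYS$SYSTEM:AUTHORIZE
  UAF> GRANT /IDENTIFIER WASD_HTTP_NOBODY BIG_ACCOUNT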


User Account Scripting

In some situations it may be desirable to allow the average Web user to experiment with or implement scripts. If the set script=as= mapping rule specifies a tilde character then for a user request the mapped SYSUAF username is substituted.

The following example shows the essentials of setting up a user environment where Web access is provided to a subdirectory of the user's home directory, [.WWW], with scripts located in a subdirectory of that, [.WWW.CGI-BIN].

  SET   /~*/www/cgi-bin/*  script=AS=~
  UXEC  /~*/cgi-bin/*  /*/www/cgi-bin/*
  USER  /~*/*  /*/www/*
  REDIRECT  /~*  /~*/
  PASS  /~*/*  /dka0/users/*/*
To enable user CGIplus scripting include something like
  UXEC+  /~*/cgiplus-bin/*  /*/www/cgi-bin/*

Where the site administrator has less than full control of the scripting environment it may be prudent to put some constraints on the quantity of resource that potentially can be consumed by non-core or errant scripting. The following HTTPD$MAP rule allows the "maximum" CPU time consumed by a single script to be constrained.

  SET   /cgi-bin/cgi_process  script=CPU=00:00:05

Note that this is on a per-script basis, contrasted to the sort of limit a CPULM-type constraint would place on a scripting process.

The following HTTPD$CONFIG rule specifies at which priority the scripting process executes. This can be used to provide the server and its infrastructure an advantage over user scripts.

  [DclDetachProcessPriority]  1,2
See 1.2.1.3 - Process Priorities for further detail.


Authenticated User Scripting

If the set script=as= mapping rule specifies a dollar character then a request that has been SYSUAF-authenticated has the SYSUAF username substituted.

  SET   /cgi-bin/cgi_process  script=AS=$

If the script has not been subject to SYSUAF authorization then this causes the script activation to fail. To allow authenticated requests to be executed under the corresponding VMS account, and non-authenticated requests to script as the usual server/scripting account, use the following variant.

  SET   /cgi-bin/cgi_process  script=AS=$?

If the server startup included /PERSONA=AUTHORIZED then only requests that have been subject to HTTP authorization and authentication are allowed to script under non-server accounts.


Privileged User Scripting

By default a privileged account cannot be used for scripting. This is done to reduce the chance of unintended capabilities when executing scripts. With additional configuration it is possible to use such accounts. Great care should be exercised when undertaking this.

To allow the server to activate a script using a privileged account the keyword /PERSONA=RELAXED must be used with the persona startup qualifier.

If the keywords /PERSONA=RELAXED=AUTHORIZED are used then privileged accounts are allowed for scripting but only if the request has been subject to HTTP authorization and authentication.


1.2.1.2 - Restricting Persona Scripting

By default, activating the /PERSONA server startup qualifier allows all the modes described above to be deployed using appropriate mapping rules. Of course there may be circumstances where such broad capabilities are inappropriate or otherwise undesirable. It is possible to control which user accounts are able to be used in this fashion with a rights identifier. Only those accounts granted the identifier can have scripts activated under them. This means all accounts ... including the server account!

Recommendation

The simplest solution might appear to be to just grant all required accounts the WASD_HTTP_NOBODY identifier described above. While this is certainly possible it does provide read access to all parts of the server package this identifier controls, and write access to the HT_ROOT:[SCRATCH] default file scratch space (1.8 - Scripting Scratch Space). If scripting outside of the site administrator's control is being deployed it may be better to create a separate identifier, as described below.

This is enabled by specifying the name of a rights identifier as a parameter to the /PERSONA qualifier. This may be any identifier but the one shown in the following example is probably as good as any.

  $ HTTPD /PERSONA=WASD_SCRIPTING

This identifier could be created using the following commands

  $ SET DEFAULT SYS$SYSTEM
  $ MCR AUTHORIZE
  UAF> ADD /IDENTIFIER WASD_SCRIPTING
and granted to accounts using
  UAF> GRANT /IDENTIFIER WASD_SCRIPTING HTTP$NOBODY

Meaningful combinations of startup parameters are possible:

  /PERSONA=(RELAXED)
  /PERSONA=(RELAXED=AUTHORIZED)
  /PERSONA=(AUTHORIZED,RELAXED)
  /PERSONA=(ident-name,RELAXED)
  /PERSONA=(ident-name,AUTHORIZED,RELAXED)
  /PERSONA=(ident-name,RELAXED=AUTHORIZED)


1.2.1.3 - Process Priorities

When detached processes are created they can be assigned differing priorities depending on the origin and purpose. The objective is to give the server process a slight advantage when competing with scripts for system resources. This allows the server to respond to new requests more quickly (reducing latency) even if a script may then take some time to complete the request.

The allocation of base process priorities is determined from the HTTPD$CONFIG [DclDetachProcessPriority] configuration directive, which takes one or two (comma-separated) integers determining how many priorities lower than the server the scripting processes are created at. The first integer applies to server-account scripts. A second, if supplied, applies to user scripts. User scripts may never be a higher priority than server scripts. The following provides example directives.

  [DclDetachProcessPriority]  1
  [DclDetachProcessPriority]  0,1
  [DclDetachProcessPriority]  1,2

Scripts executed under the server account, or those created using a mapped username (i.e. "script=as=username"), have a process priority set by the first/only integer.

Scripts activated from user mappings (i.e. "script=as=~" or "script=as=$") have a process priority set by any second integer, or fall back to the priority of the first/only integer.
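As a worked example, assuming the server process runs at the common VMS interactive base priority of 4 (an assumption; the actual base priority is site-specific), the directive

  [DclDetachProcessPriority]  1,2

results in server-account scripts executing at base priority 3 and user-mapped scripts at base priority 2, leaving the server itself with a scheduling advantage over both.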


1.2.2 - Subprocess Scripting

WASD's default (and historical) scripting environment is with subprocesses created by the server.

With persistent subprocess scripting the pooled-resource BYTLM can become a particular issue. After the first subprocess-based script is executed the WATCH report provides some information on the BYTLM required to support both the desired number of incoming network connections and script subprocess IPC mailboxes. When using these numbers to resource the BYTLM quota of the server account keep in mind that as well as server-subprocess IPC consumption of BYTLM there may be additional requirements for whatever processing is performed by the script.

For a standard configuration 15,000 bytes should be allowed for each possible script subprocess, 1,000 bytes for each potential client network connection, an additional 20,000 bytes overhead, plus any additional requirements for script processing, etc. Hence for a maximum of 30 scripts and 100 network clients, a BYTLM of approximately 570,000 minimum should be allowed.
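Assuming the server runs under the default HTTP$SERVER account, the quota can be inspected and adjusted using AUTHORIZE (the value shown is illustrative; size it using the calculation above and the local WATCH report).

  $ MCR SYS$SYSTEM:AUTHORIZE
  UAF> SHOW HTTP$SERVER
  UAF> MODIFY HTTP$SERVER /BYTLM=600000

The account must then log out and the server be restarted for the new quota to take effect.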


Subprocess Environment

When the subprocess is spawned by the server none of the parent's environment is propagated. Hence the subprocess has no symbols, logical names, etc., created by the site's SYLOGIN.COM, the server account's LOGIN.COM, etc. This is done quite deliberately to provide a pristine and standard default environment for the script's execution. For this reason all scripts must provide all of their required environment to operate. In particular, if a verb is made available via SYLOGIN.COM or LOGIN.COM it will not be available to the script. Verbs available via DCLTABLES.EXE or DCL$PATH will, of course, be available.

There are two basic methods for supplying a script with a required environment: wrap the script in a DCL procedure that sets up the environment before activating it (see "Wrapping" Local or Third-Party Scripts), or make the required verbs and logical names available system-wide (e.g. via DCLTABLES.EXE, DCL$PATH or system logical name tables).


Caution!

When scripts are executed within unprivileged subprocesses created by the HTTP server, the processes are owned by the HTTP server account (HTTP$SERVER). Script actions could potentially affect server behaviour. For example it is possible for subprocesses to create or modify logical name values in the JOB table (e.g. change the value of LNM$FILE_DEV, altering the logical search path). Obviously these types of actions are undesirable. In addition scripts can access any WORLD-readable and modify any WORLD-writable resource in the system/cluster, opening a window for information leakage or mischievous/malicious actions (some might argue that anyone with important WORLD-accessible resources on their system deserves all that happens to them - but we know they're out there :^). Script authors should be aware of any potential side-effects of their scripts, and Web administrators vigilant against possible malicious behaviours of scripts they do not author.


1.2.3 - Script Process Default

For standard CGI and CGIplus scripts the script process' default device and directory is established using a SET DEFAULT command immediately before activating the script. This default is derived from the script file specification.

An alternative default location may be specified using the mapping rule shown in the following example.

  set /cgi-bin/this-script* script=default=WEB:[THIS-SCRIPT]

The default may be specified in VMS or Unix file system syntax as appropriate. If in Unix syntax (beginning with a forward-slash) no SET DEFAULT is performed using DCL. The script itself must access this value using the SCRIPT_DEFAULT CGI variable and perform a chdir().


1.2.4 - Script Process Parse Type

On platforms where the Extended File Specification (EFS) is supported a SET PROCESS /PARSE_STYLE=EXTENDED or SET PROCESS /PARSE_STYLE=TRADITIONAL is executed by the scripting process before script activation, depending on whether the script path is located on an ODS-5 or ODS-2 volume.


1.2.5 - Script Process Run-Down

The server can stop a script process at any point, although this is generally done at a time and in such a way as to eliminate any disruption to request processing. There are a number of reasons for the server running-down a script process.

In running down a script process the server must both update its own internal data structures and manage the run-down of the script process environment and the script process itself. These are the steps.

  1. Exit handling.

  2. Input and output to all of the process' streams is cancelled. For scripts that may still be processing this can result in I/O stream errors. The server waits for all queued I/O to disappear.

  3. If the script process has not already deleted itself the server issues a $DELPRC against it.

  4. The server receives the process termination AST and this completes the process run-down sequence.


1.2.6 - Client Recalcitrance

If a client disconnects from a running script (by hitting the browser Stop button, or selecting another hyperlink) the loss of network connectivity is detected by the server at the next output write.

Generally it is necessary for there to be some mechanism for a client to stop long-running (and presumably resource consuming) scripts. Network disconnection is the only viable one. Experience would indicate however that most scripts are short running and most disconnections are due to clients changing their minds about waiting for a page to build or, having seen the page superstructure, moving on to something else.

With these considerations in mind there is significant benefit in not running-down a script the moment client disconnection is detected. A short wait will result in most scripts completing their output elegantly (the script itself unaware the output is not being transmitted on to the client), and in the case of persistent scripts remaining available for the next request, or for standard CGI the process remaining for use in the next CGI script.

The period allowing the script to complete its processing may be set using the HTTPD$CONFIG configuration directive [DclBitBucketTimeout]. It should be set to, say, fifteen seconds, or whatever is appropriate to the local site.

  [DclBitBucketTimeout]  00:00:15

NB. "Bit-bucket" is a common term for the place discarded data is stored. :^)


1.3 - Caching Script Output

The WASD cache was originally provided to reduce file-system access (a somewhat expensive activity under VMS). With the expansion in the use of dynamically generated page content (e.g. PHP, Perl, Python) there is an obvious need to reduce the system impact of some of these activities. While many such responses have content specific to the individual request a large number are also generated as general site pages, perhaps with simple time or date components, or other periodic information. Non-file caching is intended for this type of dynamic content.

Revalidation of non-file content is difficult to implement for a number of reasons, both by the server and by the scripts, and so is not provided. Instead the cache entry is flushed on expiry of the [CacheValidateSeconds], or as otherwise specified by path mapping, and the request is serviced by the content source (script, PHP, Perl, etc.) with the generated response being freshly cached. Browser requests specifying no-caching are honoured (within server configuration parameters) and will flush the entry, resulting in the content being reloaded.


Controlling Script Caching

Determining which script content is to be cached and which not, and how long before flushing, is done using mapping rules (described in detail in the "Technical Overview"). The source of script cache content is specified using one or a combination of the following SET rules against general or specific paths in HTTPD$MAP. All mapping rules (script and non-script) are described here to put the script oriented ones into context. Those specific to script output caching are noted.

cache=[no]cgi      from Common Gateway Interface (CGI) responses (for script output)
cache=[no]file     from the file system (default and pre-8.4 cache behaviour)
cache=[no]net      caches the full data stream irrespective of the source
cache=[no]nph      full stream from Non-Parse Header (NPH) response (for script output)
cache=[no]query    cache requests with query strings (use with care)
cache=[no]script   both CGI and NPH responses (for script output)
cache=[no]ssi      from Server-Side Includes (SSI) documents

A good understanding of site requirements and dynamic content sources, along with considerable care in specifying cache path SETings, is required to cache dynamic content effectively. It is especially important to get the content revalidation period appropriate to the content of the pages. This is specified using the following path SETings.

cache=expires=0            cancels any expiry
cache=expires=DAY          expires when the day changes
cache=expires=HOUR         when the hour changes
cache=expires=MINUTE       when the minute changes
cache=expires=<hh:mm:ss>   expires after the specified period in the cache


Examples

To cache the content of PHP-generated home pages that contain a time-of-day clock, resolving down to the minute, use a mapping rule similar to the following.

 set /**/index.php cache=cgi cache=expires=minute

To prevent a particular script's output (say the main page of a site) being flushed by requests using no-cache fields, until the server determines that it needs reloading, use the cache guard period.

 set /index.py cache=script cache=expires=hour cache=guard=01:00:00


1.4 - Enabling A Script

By default the server accesses scripts using the search list logical name CGI-BIN, although this can be significantly changed using mapping rules. CGI-BIN is defined to first search HT_ROOT:[CGI-BIN] and then HT_ROOT:[AXP-BIN], HT_ROOT:[IA64-BIN], or HT_ROOT:[VAX-BIN] depending on the platform. [CGI-BIN] is intended for architecture-neutral script files (.CLASS, .COM, .PL, .PY, etc.) and the architecture-specific directories for executables (.EXE, .DLL, etc.)

These directories are delivered empty and it is up to the site to populate them with the desired scripts. A script is made available by copying its file(s) into the appropriate directory. By default ACLs will be propagated to allow access by the default scripting account. Scripts can be made unavailable by deleting them from these directories.
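For example, a hypothetical DCL script and an Alpha executable (the file names are illustrative) would be installed as follows.

  $ COPY MY_SCRIPT.COM HT_ROOT:[CGI-BIN]
  $ COPY MY_SCRIPT.EXE HT_ROOT:[AXP-BIN]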

NOTE

It is good security practice to deploy only those scripts a site is actually using. This minimises vulnerability by simply reducing the number of possibly problematic scripts. A periodic audit of script directories is a good policy.

WASD script executables are built into the HT_ROOT:[AXP], HT_ROOT:[IA64] or HT_ROOT:[VAX] directories depending on the architecture. Other script files, such as DCL procedures, Perl examples, Java class examples, etc. are located in other directories in the HT_ROOT:[SRC] tree. The procedure HT_ROOT:[INSTALL]SCRIPTS.COM assists in the installation or deinstallation of groups of WASD scripts.


1.5 - Script Mapping

Scripts are enabled using the exec/uxec or script rules in the mapping file (also see "Technical Overview, Mapping Rules"). The script portion of the result must be a URL equivalent of the physical VMS procedure or executable specification.

All files in a directory may be mapped as scripts using the exec rule. For instance, in the HTTPD$MAP configuration file can be found a rule

  exec /cgi-bin/* /cgi-bin/*
which results in request paths beginning "/cgi-bin/" having the following path component mapped as a script. Hence a path "/cgi-bin/cgi_symbols.com" will result in the server attempting to execute a file named CGI-BIN:[000000]CGI_SYMBOLS.COM.

Multiple such paths may be designated as executable, with their contents expected to be scripts, either directly executable by VMS (e.g. .EXEs and .COMs) or processable by a designated interpreter, etc., (e.g. .PLs, .CLASSes) (1.6 - Script Run-Time).

In addition individual files may be specified as scripts. This is done using the script rule. In the following example the request path "/help" activates the "Conan The Librarian" script.

  script /help* /cgi-bin/conan*

Of course, multiple such rules may be used to map such abbreviated or self-explanatory script paths to the actual script providing the application.


Mapping Local or Third-Party Scripts

It is not necessary to move/copy scripts into the server directory structure to make them accessible. In fact there are probably good reasons for not doing so! For instance, it keeps a package together so that at the next upgrade there is no possibility of the "server-instance" of that application being overlooked.

To make scripts provided by third party packages available for server activation three requirements must be met.

  1. The server account (HTTP$SERVER by default) must have read and execute access to the directory containing the scripts. Script files are searched for by the server before activation is attempted. This can be enabled using the SECHAN utility (see "Technical Overview").
      $ SECHAN /ASIF=CGI-BIN device:[directory]script-directory.DIR
    

  2. The scripting account (HTTP$NOBODY by default) must have read and execute access to any and all images and other resources required to use the application. There may be some consideration of file protections required when multiple accessors need to be accommodated (e.g. scripting and application accounts) so a specific solution may be required. If only the scripting account requires read access then the SECHAN utility could again be used to provide that to the directory (or directories) and contained files.
      $ SECHAN /ASIF=CGI-BIN device:[000000]directory.DIR
      $ SECHAN /ASIF=CGI-BIN device:[directory]*.*
    

  3. Mapping rules must exist to make the script and any required resources accessible.

Most packages having such an interface for Web server access would provide details on mapping into the package directory. For illustration the following mapping rules provide access to a package's scripts (assuming it provides more than one) and also into a documentation area.

The hypothetical "Application X" directory locations are

  APPLICATIONX_ROOT:[DOC]
  APPLICATIONX_ROOT:[CGI-BIN]

The required mapping rules would be

  pass /applicationX/* /applicationX_root/docs/*
  exec /appX-bin/* /applicationX_root/cgi-bin/*

Access to X's scripts would be using a path such as

  http://the.host.name/appx-bin/main_script?plus=some&query=string
NOTE

When allowing the server and scripting account access into parts of the file system outside of the WASD package it is recommended to control the environment very carefully. Third-party scripting areas in particular should be modelled on those present in the package itself. The SECHAN utility described in the "Technical Overview" may be of some assistance with this.


"Wrapping" Local or Third-Party Scripts

Sometimes it may be necessary to provide a non-WASD, local, or third-party script with a particular environment in which to execute. This can be provided by wrapping the script executable or interpreted script in a DCL procedure (of course, if the local or third-party script is already activated by a DCL procedure, then that may need to be modified directly). Simply create a DCL procedure, in the same directory as the script executable, containing the required environmental commands.

For example, the following DCL procedure defines a scratch directory and provides the location of the configuration file. It is assumed the script executable is APPLICATIONX_ROOT:[CGI-BIN]APPX.EXE and the script wrapper APPLICATIONX_ROOT:[CGI-BIN]APPX.COM.

  $! wrapper for APPX CGI executable
  $ SET DEFAULT APPLICATIONX_ROOT:[000000]
  $ DEFINE /USER SYS$SCRATCH APPLICATIONX_ROOT:[SCRATCH]
  $ APPX == "$APPLICATIONX_ROOT:[CGI-BIN]APPX"
  $ APPX /CONFIG=APPLICATIONX_ROOT:[CONFIG]APPX.CONF


1.6 - Script Run-Time

A script is merely an executed or interpreted file. Although by default VMS executables and DCL procedures can be used as scripts, other environments may also be configured. For example, scripts written for the Perl language may be transparently given to the Perl interpreter in a script subprocess. This type of script activation is based on a unique file type (extension following the file name), for the Perl example this is most commonly ".PL", or sometimes ".CGI". Both of these may be configured to automatically invoke the site's Perl interpreter, or any other for that matter.

This configuration is performed using the HTTPD$CONFIG [DclScriptRunTime] directive, where a file type is associated with a run-time interpreter. This directive takes two components, the file extension and the run-time verb. The verb may be specified as a simple, globally-accessible verb (e.g. one embedded in the CLI tables), or in the format used to construct a foreign verb, providing reasonable versatility. Run-time parameters may also be appended to the verb if desired. The server ensures the verb is foreign-assigned if necessary, then uses it on a command line with the script file name as the final parameter.

The following example shows a Perl interpreter being specified. The first line assumes the "Perl" verb is globally accessible on the system (e.g. perhaps provided via the DCL$PATH logical), while the second (for the sake of illustration) shows the same Perl interpreter being configured for a different file type using the foreign-verb syntax.

  [DclScriptRunTime]
  .PL PERL
  .CGI $PERL_EXE:PERL

A file containing a Perl script may then be activated merely by specifying a path such as the following

  /cgi-bin/example.pl

To add any required parameters, just append them to the specified verb.

  [DclScriptRunTime]
  .XYZ XYZ_INTERPRETER -vms -verbose -etc
  .XYZ $XYZ_EXE:XYZ_INTERPRETER /vms /verbose /etc

If a more complex run-time interpreter is required it may be necessary to wrap the script's execution in a DCL procedure.
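As a sketch of such a wrapper (the interpreter name, logical names and qualifiers here are entirely hypothetical), a DCL procedure can perform any set-up the interpreter needs before invoking it against the script file:

  $! hypothetical wrapper for a script needing interpreter set-up
  $ DEFINE /USER XYZ_LIBRARY APPLICATIONX_ROOT:[LIBRARY]
  $ XYZ == "$XYZ_EXE:XYZ_INTERPRETER"
  $ XYZ -vms -verbose APPLICATIONX_ROOT:[CGI-BIN]EXAMPLE.XYZ

The wrapper procedure itself is then mapped and activated as the script, in the same manner as the APPX.COM example above.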


Script File Extensions

The WASD server does not require a file type (extension) to be explicitly provided when activating a script. This can help hide the implementation detail of any script. If the script path does not contain a file type the server searches the script location for a file with one of the known file types, first ".COM" for a DCL procedure, then ".EXE" for an executable, then any file types specified using the script run-time configuration directive, in the order specified.

For instance, the script activated in the Perl example above could have been specified as below and (provided there was no "EXAMPLE.COM" or "EXAMPLE.EXE" in the search) the same script would have been executed.

  /cgi-bin/example


1.7 - Scripting Logicals

Two logicals provide some control of, and input to, the DCL subprocess scripting environment (which includes standard CGI, CGIplus and ISAPI, and DECnet-based CGI, but excludes DECnet-based OSU).

Note that most WASD scripts also recognise logical names that can be defined for debugging purposes. These are generally of the form script_name$DBUG and, if defined, activate debugging statements throughout the script.
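For example, to enable debugging output from a hypothetical script named EXAMPLE (the script name here is illustrative only), define the logical so that it is visible to the scripting process, e.g. in the system table:

  $ DEFINE /SYSTEM EXAMPLE$DBUG 1

Deassign the logical to disable the debugging statements again.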


1.8 - Scripting Scratch Space

Scripts often require temporary file space during execution. Of course this can be located anywhere the scripting account (most often HTTP$SERVER) has appropriate access. The WASD package does provide a default area for such purposes with permissions set during startup to allow the server account full access. The default area is located in

  HT_ROOT:[SCRATCH]

and is accessed by the server and scripts using the logical name

  HT_SCRATCH:

The server provides for the routine clean-up of old files in HT_SCRATCH: left behind by aborted or misbehaving scripts (although as a matter of design all scripts should attempt to clean up after themselves). The HTTPD$CONFIG directives

  [DclCleanupScratchMinutesMax]
  [DclCleanupScratchMinutesOld]

control how frequently the clean-up scan occurs, and how old files need to be before being deleted. Whenever script processes are active the scratch area is scanned at the maximum period specified, and also whenever the last script process is purged from the system by the server.
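As a sketch (the values here are illustrative only, and the exact directive syntax should be checked against the site's HTTPD$CONFIG), scanning the scratch area at most hourly and deleting files more than a day old might be configured as:

  [DclCleanupScratchMinutesMax]  60
  [DclCleanupScratchMinutesOld]  1440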

Of course there is always the potential for interaction between scripts using a common area for such purposes. At the most elementary, care must be taken to ensure unique file names are generated. At worst there is the potential for malicious interaction and information leakage. Use such common areas with discretion.

NOTE

Beware of shared scratch areas. They rely on cooperation between scripts for minimising potential interactions. They can also be a source of unintended or malicious information leakage.


Unique File Names - DCL

The "UNIQUE_ID" CGI variable provides a unique 19 character alpha-numeric string (UNIQUE_ID Note) suitable for many uses, including the type extension of temporary files. The following DCL illustrates the essentials of generating a script-unique file name. For multiple file names, add further text to the type, as shown below.

  $ SCRATCH_DIR = "HT_SCRATCH:"
  $ PROC_NAME = F$PARSE(F$ENVIRONMENT("PROCEDURE"),,,"NAME")
  $ INFILE_NAME = SCRATCH_DIR + PROC_NAME + "." + WWW_UNIQUE_ID + "_IN"
  $ OUTFILE_NAME = SCRATCH_DIR + PROC_NAME + "." + WWW_UNIQUE_ID + "_OUT"


Unique File Names - C Language

A similar approach can be used for scripts coded in the C language, with the useful capacity to mark the file for delete-on-close (of course this is only really useful if the file is, say, only to be written, rewound and then re-read without closing first - but the idea should be clear).

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
 
  #define HT_SCRATCH "HT_SCRATCH:"
  #define SCRIPT_NAME "EXAMPLE"
 
  char  *uniqueId;
  char  tmpFileName [256];
  FILE  *tmpFile;
 
  if ((uniqueId = getenv("WWW_UNIQUE_ID")) == NULL)
  {
     printf ("Error: WWW_UNIQUE_ID absent!\n");
     exit (1);
  }
  sprintf (tmpFileName, HT_SCRATCH SCRIPT_NAME ".%s", uniqueId);
 
  /* the VMS C RTL fopen() accepts RMS attributes; "fop=dlt" deletes on close */
  if ((tmpFile = fopen (tmpFileName, "w+", "fop=dlt")) == NULL)
     exit (vaxc$errno);


1.9 - DCL Processing of Requests

DCL is the native scripting environment for VMS and provides a rich set of constructs and capabilities for ad hoc and low usage scripting, and as a glue when several processing steps need to be undertaken for a particular script. In common with many interpreted environments care must be taken with effective exception handling and data validation. To assist with the processing of request content and response generation from within DCL procedures the CGIUTL utility is available in

HT_ROOT:[SRC.MISC]

Its functionality includes, most usefully, reading the request body and decoding form-URL-encoded contents into DCL symbols and/or a scratch file, allowing a DCL procedure to easily and effectively process this form of request.

NOTE

Never substitute the contents of CGI variables directly into the code stream using interpreters that allow this (e.g. DCL, Perl). You run a very real risk of having unintended content maliciously change the intended function of the code. For example, never use apostrophe substitution of a CGI variable at the DCL command line as in
  $ COPY 'WWW_FORM_SRC' 'WWW_FORM_DST'
Always pre-process the content of the variable first, ensuring nothing has been inserted that could subvert the intended purpose. The CGIUTL utility assists in complying with this rule by providing an explicit, non-DCL substitution character for use on the command line (see the source code's descriptive prologue).
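As a minimal sketch of such pre-processing (illustrative only, and deliberately not exhaustive - other characters, such as "&" and "@", may also need to be rejected depending on how the value is used), a DCL procedure might refuse any value containing quotation characters before substituting it:

  $! abandon processing if the value contains an apostrophe or quote
  $ IF F$LOCATE("'",WWW_FORM_SRC) .NE. F$LENGTH(WWW_FORM_SRC) THEN EXIT
  $ IF F$LOCATE("""",WWW_FORM_SRC) .NE. F$LENGTH(WWW_FORM_SRC) THEN EXIT
  $ COPY 'WWW_FORM_SRC' 'WWW_FORM_DST'

A whitelist approach (accepting only known-safe characters) is generally preferable to a blacklist such as this.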


1.10 - Scripting Function Library

A source code collection of C language functions useful for processing the more vexing aspects of CGI and general script programming is available in CGILIB. This and an example implementation is available in

HT_ROOT:[SRC.MISC]

The WASD scripts use this library extensively and may serve as example applications.


1.11 - Script-Requested, Server-Generated Error Responses

Of course a script can generate any output it requires, including non-success (non-200) responses (e.g. 400, 401, 302, etc.). For error pages a certain consistency results from making these substantially the same layout and content as those generated by the server itself. To this end, script response header output can contain one or more of several extension fields indicating to the server that, instead of sending the script response to the client, it should internally generate an error response using the script-supplied information. These fields are listed in the Script-Control: section of 2.2.1 - CGI Compliant Output and are available in any scripting environment.

If a "Script-Control: X-error-text="text of error message"" field occurs in the script response header the server stops processing further output and generates an error message. Other error fields can be used to provide additional or message-modifying information. A significant example is the "Script-Control: X-error-vms-status=integer" field which supplies a VMS status value for a more detailed, status-related error message explanation.

Essentially the script just generates a standard CGI "Status: nnn" response and includes at least the "X-error-text=" field before the header-terminating empty record (blank line). Some variations are shown in the following DCL examples.

  $! vanilla error message
  $ say = "write sys$output"
  $ say "Status: 400"
  $ say "Script-Control: X-error-text=""Confusing URL components!"""
  $ say ""
 
  $! VMS status error message 
  $ say = "write sys$output"
  $! "status: 000" allows the server to select the HTTP status code
  $ say "Status: 000"
  $ say "Script-Control: X-error-text=""/a/file/name.txt"""
  $ say "Script-Control: X-error-vms-status=%X00000910"
  $ say "Script-Control: X-error-vms-text=""A:[FILE]NAME.TXT"""
  $ say ""
 
  $! add META source module name and line generating message
  $ say = "write sys$output"
  $ say "Status: 500"
  $ say "Script-Control: X-error-text=""Don't know what to do now..."""
  $ say "Script-Control: X-error-module=EXAMPLE; X-error-line=999"
  $ say ""

Interestingly, because CGI environments should ignore response fields unknown to them, scripts deployed across multiple server platforms can include these WASD-specific fields in every response header for WASD's use, followed by explicit error page content for use in those other environments.

  $! WASD error content, plus other platform content
  $ say = "write sys$output"
  $ say "Status: 404"
  $ say "Script-Control: X-error-text=""Requested object not found."""
  $ say "Content-Type: text/html"
  $ say ""
  $ say "<B>ERROR 404:</B>&nbsp; Requested object not found."

An example implemented using DCL is available

HT_ROOT:[SRC.OTHER]REQUEST_ERROR_MSG.COM

and, if currently enabled for scripting, may be accessed via

/cgi-bin/request_error_msg


[next] [previous] [contents] [full-page]