WASD Hypertext Services - Technical Overview


13 - Cache

WASD HTTPd provides an optional, configurable, monitorable cache of file data and file revision times. Caching file data allows requests for documents to be fulfilled without reference to the underlying file system, potentially reducing request latency and, more importantly, improving overall server performance and reducing system impact. Caching file revision times allows requests specifying an "If-Modified-Since:" header to benefit in the same way.

Note that it is a file-system cache. Only documents generated from the file system are cached, not content from any potentially dynamic source, such as scripts, directory listings, SSI documents, etc. The reason should be obvious. Only the file system provides a reliable mechanism for ensuring the validity of the cached data (i.e. has the original changed in some way since it was loaded?)

Files are cached according to mapped path (not necessarily the same path supplied with the request) and not by the file name represented by any path. This is a design decision targeted at avoiding any access to RMS before searching the cache. For example, the ambiguous reference to the directory

  /ht_root/
may result in the following file being accessed (due to home page resolution)
  HT_ROOT:[000000]HOME.HTML
and the contents returned to the client and consequently cached. Each time the path "/ht_root/" is subsequently requested it will produce a path hit and be serviced from the cache entry without any recourse to RMS.

Of course the same file may be requested with the unambiguous path

  /ht_root/home.html
which is completely different to the first instance, although ultimately accessing the same file. Hence one file may be cached multiple times against distinct paths. Although isolated instances of this are possible, the likelihood of significant resources being consumed in practice should be quite low.
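The path-keyed lookup described above can be sketched as a hash table keyed on the mapped path string, so that a hit never touches RMS. This is an illustrative sketch only, not WASD source; the structure and function names here are assumptions for demonstration:

```c
/* Illustrative sketch only -- not WASD source.  Demonstrates keying
 * cache entries on the mapped request path (a string) rather than on
 * the underlying file name, so a search needs no file-system access. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 256

typedef struct cache_entry {
    char path[128];              /* mapped path, e.g. "/ht_root/"         */
    char data[256];              /* cached file contents (truncated demo) */
    struct cache_entry *next;    /* collision chain                       */
} CACHE_ENTRY;

static CACHE_ENTRY *cache[CACHE_SLOTS];

/* simple string hash over the mapped path */
static unsigned path_hash (const char *path)
{
    unsigned h = 5381;
    while (*path) h = h * 33 + (unsigned char)*path++;
    return h % CACHE_SLOTS;
}

/* search the cache by mapped path; NULL indicates a miss */
CACHE_ENTRY* cache_search (const char *path)
{
    CACHE_ENTRY *e;
    for (e = cache[path_hash(path)]; e; e = e->next)
        if (!strcmp(e->path, path)) return e;
    return NULL;
}

/* load (or shadow) an entry under the given mapped path */
CACHE_ENTRY* cache_load (const char *path, const char *data)
{
    static CACHE_ENTRY pool[64];
    static int used;
    CACHE_ENTRY *e = &pool[used++];
    unsigned slot = path_hash(path);
    strncpy(e->path, path, sizeof(e->path)-1);
    strncpy(e->data, data, sizeof(e->data)-1);
    e->next = cache[slot];
    cache[slot] = e;
    return e;
}
```

Note that after loading "/ht_root/", a search for "/ht_root/home.html" still misses, even though both paths ultimately resolve to the same file, matching the behaviour described above.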

Why Implement Caching?

Caching, in concept, attempts to improve performance by keeping data in storage that is faster to access than its usual location. The performance improvement can be assessed in three basic ways; reduction of request latency, of CPU consumption, and of file system (disk) activity.

This cache is provided to address all three. Where networks are particularly responsive a reduction in request latency can often be noticeable. It is also suggested a cache "hit" may consume fewer CPU cycles than the equivalent access to the (notoriously expensive) VMS file system. Where servers are particularly busy, or where disk subsystems are particularly loaded, a reduction in the need to access the file system can significantly improve performance while simultaneously reducing the impact of the server on other system activities. The author's feeling, though, is that for most VMS sites high levels of hits are not a great concern, and for these caching can easily be left disabled, particularly if the file system's virtual I/O cache is enabled.

A comparison between cached and non-cached performance is provided in 14 - Server Performance.

Why take so long to implement caching? (introduced in version 4.5) Well, WASD's intranet services are not particularly busy sites. This, coupled with powerful hardware, has meant performance has not been an overriding concern. However, this cache module came about because I felt like creating it and it was an obvious lack of functionality within the server, not because WASD (the organisation) needed it.

Terminology

The following terms are used with the meanings described here.


13.1 - Cache Suitability Considerations

A cache is not always of benefit! Its cost may outweigh its return.

Any cache's efficiencies can only occur where subsets of data are consistently being demanded. Although these subsets may change slowly over time, a constantly and rapidly changing aggregate of requests loses the benefit of more readily accessible data to the overhead of cache management, due to the continuous flushing and reloading of cache data. This server's cache is no different; it will only improve performance if the site experiences some consistency in the files requested. For sites where only a small percentage of files is repeatedly requested it is probably better that the cache be disabled.

The other major consideration is available system memory. On a system where memory demand is high there is little value in having cache memory sitting in page space, trading disk I/O and latency for paging I/O and latency. On memory-challenged systems the cache is probably best disabled.

To help assess the cache's efficiency for any given site, monitor the administration menu's cache report.

Two sets of data provide complementary information: cache activity and the file request profile.

Recommendation

Monitor the site's cache behaviour and adjust parameters from an assessment based on the guidelines above.

Again, the author's suggestion is, that for most VMS sites, high levels of access are not a great concern, and for these caching can easily be left disabled.


13.2 - Cache Content Validation

The cache will automatically revalidate the file data after a specified number of seconds (the [CacheValidateSeconds] configuration parameter) by comparing the original file revision time to the current revision time. If they differ the file contents have changed and the cache contents are declared invalid. If found invalid the file transfer then continues outside of the cache, with the new contents being concurrently reloaded into the cache.
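The revalidation rule can be sketched as follows. This is an illustrative sketch only, not WASD source; the structure and names are assumptions, and in the real server the current revision time would come from an RMS file lookup:

```c
/* Illustrative sketch only -- not WASD source.  Once an entry is older
 * than the configured validation period ([CacheValidateSeconds]), the
 * file's current revision time is compared with the one recorded at
 * load; a mismatch invalidates the entry. */
#include <time.h>

typedef struct {
    time_t loaded;      /* when the entry was loaded or last revalidated */
    time_t revision;    /* file revision time recorded at that point     */
    int    valid;       /* non-zero while the cached contents are usable */
} CACHE_META;

/* returns non-zero if the cached data may be used as-is;
 * 'now' comes from the clock, 'current_revision' from the file system */
int cache_validate (CACHE_META *m, time_t now,
                    time_t current_revision, int validate_seconds)
{
    if (now - m->loaded < validate_seconds)
        return m->valid;              /* too soon to recheck            */
    if (current_revision != m->revision) {
        m->valid = 0;                 /* file has changed: invalidate   */
        return 0;
    }
    m->loaded = now;                  /* revalidated, restart the timer */
    return m->valid;
}
```

A "Pragma: no-cache" request would simply bypass the first test and force the revision-time comparison immediately.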

Cache validation is also always performed if the request uses "Pragma: no-cache" (i.e. as with the Netscape Navigator reload function). Hence there is no need for any explicit flushing of the cache under normal operation. If a document does not immediately reflect any changes made to it (i.e. validation time has not been reached) validation (and consequent reload) can be "forced" with a browser reload.

If a site's contents are relatively static the validation seconds could be set to an extended period (say 3600 seconds, one hour) and then rely on an explicit "reload" to force validation of a changed file.

The entire cache may be purged of cached data either from the server administration menu or using command line server control, as in the following example

  $ HTTPD /DO=CACHE=PURGE


13.3 - Cache Configuration

The cache is controlled using HTTPD$CONFIG file configuration directives. A number of parameters control the cache's behaviour. See the example configuration file HT_ROOT:[EXAMPLE]HTTPD$CONFIG.CONF.
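As an illustration only, such a fragment might look like the following. [CacheValidateSeconds] is the one directive named in this chapter; the other directive name shown is a hypothetical placeholder, so the actual names and defaults should be taken from the example configuration file itself:

```
# illustrative only; see HT_ROOT:[EXAMPLE]HTTPD$CONFIG.CONF for the actual directives
[Cache] enabled
[CacheValidateSeconds] 60
```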


13.4 - Cache Control

The cache may be enabled, disabled and purged from the server administration menu (see 11 - Server Administration). In addition the same control may be exercised from the command line (see 5.3.2 - Server Command Line Control) using

  $ HTTPD /DO=CACHE=ON
  $ HTTPD /DO=CACHE=OFF
  $ HTTPD /DO=CACHE=PURGE

If cache parameters are altered in the configuration file the server must be restarted to put these into effect. Disabling the cache on an ad hoc basis (from menu or command line) does not alter its contents in any way, so it can simply be reenabled and use of the previous contents resumes. In this way comparisons between the two environments may more easily be made.

