From:   EVMS::CROLL "A hole is to dig. 01-Nov-1996 1713 -0400" 1-NOV-1996 17:14:01.26
To:     everhart
CC:
Subj:   FWD: EDO MONTHLY REPORT - OCTOBER

From:   STAR::SMTP%""movies::chandley"@movies.enet.dec.com" "01-Nov-1996 0811 -0500" 1-NOV-1996 08:12:17.78
To:     "base_os_extended@star.zko.dec.com"@vbormc.vbo.dec.com
CC:     chandley@movies.enet.dec.com
Subj:   EDO MONTHLY REPORT - OCTOBER

1 EDO SEPTEMBER SUMMARY

This is a general summary of work at EDO. Details can be found in the individual project status.

The blockserver team [PL: Tanner] has successfully installed the Blockserver onto the EDO MOVIES cluster. The Blockserver will be demonstrated at DECUS next month.

One of the highlights this month was a visit by folks from ZK and a determination to get the technology direction set for the storage device. This was done and a copy-on-write scheme was adopted as the correct direction to proceed in. Communication of this is forthcoming.

Kevin Playford has taken on the role of SCD (aka LSD) Project Leader. Chris Whitaker will focus on technical leadership, while Kev will have responsibility for the SCD V1 ship.

Mark has also been working on the OFST Project Plan. Work proceeds slowly and I don't get the time I would like to work on this. The first draft was not available by the end of September as expected.

The Test Team [PL: Compton] continues to provide Spiralog V1+ builds and testing for the planned V1.2 release in November. Further tests have been integrated into the REGRES suite and the F11XQP has been integrated into the EDO build environment.

$FS [PL: Hirst] has been reviewing the impact of the change in technology from log structured to copy on write. The change will reduce the amount of work for OFST at the cost of increasing the work in $FS. Also an issue has been the lack of PM focus on OFST V2 and the lack of staffing on $NT. Both of these issues are being addressed. Work on the initial prototype is continuing well.

VDC [PL: Palmer] has now closed on high level features and priorities following a feature review with the Storage Server program. VDC now plans to ship as a replacement for VIOC in the Raven release. An initial draft of a full functional specification has been distributed, with work continuing to close on details with external groups. The detailed design is progressing well, although it is taking longer than originally planned.

LSD [PL: Whitaker] A large number of technical issues were open with the log-structured design, in particular the resource requirements and the predictability of performance. These issues led us to look at possible alternative designs. The LSD team has been working on alternatives and has progressed one such alternative to a prototype. This week we completed a review exercise to determine the correct path to follow and have chosen a copy-on-write scheme rather than progressing the log-structured design.

Spiralog V1 [PL: Johnson] work continued on a reduced T1.2 release on the OpenVMS 7.1 kit that only includes the improvement to the volume full problem. Functional code freeze was achieved 23-Aug-1996, as planned. The team, expanded by new hire Stephen Tweedie, is now concentrating on reducing the QAR backlog. The current plan, not yet finalised, is to ship the T1.2 work as an SDK on the back of the OpenVMS 7.1 SSB kit as an unsupported kit.

A date (4-Nov-96) has been set for the start of the transition of DECdfs to the US.

1.1 Issues

We need to understand and identify key quality aspects for OFST V1 projects.
Defect containment will be the key metric we report to ZK on our progress in this area. DRI: Howell

I'm increasingly concerned about the size of V1 and the work still remaining to complete the Product Specification to a detailed level. DRI: Howell/Green/Fallon/Flynn

Resolution of the Storage Server V1. We need to define the V1 product so that engineering can proceed in a defined direction. This is now underway. DRI: Green/Flynn

Identification of a person who can advise on Storage Server V1 performance. Performance is becoming a major concern for Storage Server V1 and an expert in this field is required to provide guidance to the V1 team on performance management. SteveF and I have started talking to the performance group in ZK for help and guidance in this area. DRI: Fallon/Howell

1.2 File System Test Team (Compton)

The file system test team has the following charter for 1996:

o To provide a testing service up to the pre-qualification stage for the Spiralog V1+, Spiralog V2 and XQP projects. This will consist of the running of functional regression tests, load tests, and performance tests, the reporting of results, the initial analysis of the cause of any problems, and the logging of such problems.

o To run and maintain the Spiralog V1+ builds, and possibly the Spiralog V2 builds.

o To maintain the Spiralog V1+ kitting procedures, and possibly those for Spiralog V2.

o To develop new general file system load and stress tests, or extend existing ones.

o To measure coverage of regression tests.

o To configure the test hardware for Spiralog V1+, Spiralog V2, and XQP, and organise its use.

o To monitor the development process for Spiralog V1+, Spiralog V2 and XQP, and provide input on any impact on testing requirements.

o To package new developer-written tests, and make them available for general use.

o To organise production testing on MOVIES.

Specifically -not- part of the charter: development of functional tests, development of project specific load tests, building or testing for non-VMS platforms (i.e. NT and MSDOS), large scale performance benchmarking, layered product testing, and formal qualification testing - this is still to be done by QTV and CVG.

1.2.1 Current Status -

One build of Spiralog V1+ was done, containing bug fixes. Regression tests were run against this and found no new regressions. All the validation tests were also run against this build, and three are failing, which is a regression from V1.1. Goal 12, the recovery test, is failing with a kernel stack invalid crash. UETP in a cluster fails with an LFSERR, and SITP with an IOVEC problem. A booby-trapped image was used to try to diagnose the dismount failure with Goal 7, the rename test, which was also a problem in V1.1, and a dump was obtained for further analysis. All other validation tests have passed.

All existing cluster validation tests have now been integrated into REGRES. A new kit is being tested, which will contain all the goal and load tests used for Spiralog V1 validation.

An XQP maintenance release was obtained for stress testing. ALPHA V6.2 and V7.0 testing went well, as did VAX V6.2, but VAX V6.1 and V7.0 are failing and crashing on boot. Investigations into the cause of this are continuing. The XQP maintenance release, and the VAX configuration work, have meant that work on a project plan and test plan for the File Stresser V3, as well as the detailed design, has not progressed much this month.

The INIT and VERIFY facilities were added to the file system build environment.
As there is different code for VAX and ALPHA, only ALPHA builds, for VMS V7.0 and V7.1, are supported for the time being.

A lot of time had to be spent configuring the test VAX systems: first identifying which of the unused systems were usable for testing, then installing the required versions of VMS, and clustering the systems together. This work is now largely complete, and we have two VAX clusters, one running V6.1/V6.2 and one with V7.0/V7.1.

The multi-file volume full test was tried with some SCSI stripesets, and then with a single SCSI device, as Spiralog was showing some strange results. It was confirmed that Spiralog creates fewer files on a volume on a larger-capacity SCSI disk than on a DSSI disk before it hits volume full. Pausing after reaching volume full, or dismounting and remounting the volume, only allowed a few more files to be created before volume full was hit again. Investigations are continuing into the underlying reasons for this.

Work continues on integrating the existing XQP Test Harness scripts into REGRES. 14 are now working on F11, and testing is starting on Spiralog.

Investigations started into using PCA to collect coverage and profiling data. It was discovered that there is still a problem with using PCA with usermode Spiralog on ALPHA. Whilst this is pursued with the DECset maintainers again, the avenue of using XPCA with kernel mode Spiralog and F11 will be explored, as well as using PCA with usermode XQP.

1.2.2 Life Of Project Status -

Not applicable

1.2.3 Defect Containment Activities -

Run regression testing for XQP and Spiralog development. Run load and stress testing for XQP and Spiralog development. Maintain and enhance the regression testing system. Maintain and enhance the general file system stress tests. Configure and manage test hardware for Spiralog and XQP development, and the CLD effort.

1.2.4 Dependencies -

Spiralog V1+      DRI: Mike Johnson
   Run build, regression and validation tests for Spiralog V1+, as required.

Storage Server    DRI: Mark Howell
   Might run builds, and will run regression and validation tests for OFST, as required.

XQP               DRI: Duncan Mclaren
   Run validation tests for XQP, as required.

HFS               DRI: Paul Mcateer
   Run validation tests for HFS, as required.

CLD effort        DRI: Duncan Mclaren
   Make test machines available, and run tests, as required, for the CLD effort.

1.2.5 Plans Not Met -

None

1.3 $FS (Dollar File System) Project (Hirst/Burke)

Project Description: $FS - provide a scalable clustered file system for NT interoperability.

1.3.1 Current Status -

Approved by CPO as part of the Storage Server V2 deliverable in December 1998.

The change in disk model for LSD will have a significant impact on $FS. The current $FS design assumes LSD will provide atomic writes and be optimised for writes. With a petal model this is not true, and $FS will require additional design and implementation work to address the change. Rob has a recommended design for the changes in $FS.

Chris and Alan are developing an initial prototype, which is near completion. The prototype is being used to validate design ideas and system structure. It may be used as a framework for performance investigations.

Estimates for the baselevel schedule are still dependent on additional design activities.
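(The point above about atomic writes is the crux of the extra $FS work: if the layer below no longer guarantees that a group of related blocks reaches disk atomically, the file system has to impose its own ordering, for example by persisting an intent record before touching the data. The sketch below is purely illustrative - it is not the $FS or LSD design, and every name in it is invented - but it shows the kind of mechanism that extra design work implies.)

    /* Illustrative only: a two-block update made crash-safe with an
     * intent record, assuming the storage layer below no longer
     * guarantees atomic multi-block writes.  In a real file system the
     * intent record itself must reach stable storage before the data
     * blocks are written; that step is only marked by a comment here. */

    #include <string.h>

    #define BLOCK_BYTES 64
    #define NBLOCKS     16

    static char disk[NBLOCKS][BLOCK_BYTES];   /* stand-in for the volume */

    typedef struct {
        int  pending;                  /* 1 = update recorded but not finished */
        int  blocks[2];                /* target block numbers                 */
        char image[2][BLOCK_BYTES];    /* the new contents                     */
    } intent_record;

    static void write_block(int blk, const char *data)
    {
        memcpy(disk[blk], data, BLOCK_BYTES);
    }

    /* Update two related blocks so that recovery can always finish the job. */
    static void update_pair(intent_record *log, int b1, const char *d1,
                                                int b2, const char *d2)
    {
        log->blocks[0] = b1;
        log->blocks[1] = b2;
        memcpy(log->image[0], d1, BLOCK_BYTES);
        memcpy(log->image[1], d2, BLOCK_BYTES);
        log->pending = 1;              /* intent must be on disk before the data */
        write_block(b1, d1);
        write_block(b2, d2);
        log->pending = 0;              /* both writes done; intent can be retired */
    }

    /* After a crash, redo any update whose intent record is still pending. */
    static void recover(intent_record *log)
    {
        if (log->pending) {
            write_block(log->blocks[0], log->image[0]);
            write_block(log->blocks[1], log->image[1]);
            log->pending = 0;
        }
    }

    int main(void)
    {
        static intent_record log;
        char a[BLOCK_BYTES] = "new directory entry";
        char b[BLOCK_BYTES] = "new file header";
        update_pair(&log, 3, a, 7, b);
        recover(&log);                 /* no-op here; would redo after a crash */
        return 0;
    }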
Baselevel                     Original Estimate    Actual
=============================================================
BL 00 - Planning and Design
BL 01 - Infrastructure
BL 02 - $QIO access
BL 03 - RMS access
BL 04 - Performance model
BL 05 - Meta-data caching
BL 06 - Clusters
BL 07 - Directories
BL 08 - Volumes
BL 09 - Files
BL 10 - Data caching
BL 11 - LSD
BL 12 - Quotas
BL 13 - Security
BL 14 - Damage control
BL 15 - Misc items
BL 16 - Performance
BL 17 - Kitting
BL 18 - Field Test

1.3.2 Plans

Produce a draft functional specification for the file system to initiate detailed discussions on requirements.

Continue with design, looking at the integration of XFS with F64 code taken from Spiralog V1. Hold design walkthroughs to communicate the design among the team.

Continue the prototype of the file system, working towards a structure that can process a limited number of $QIO requests.

Investigate the implications of different design alternatives in LSD, specifically the impact of atomicity and write performance.

1.3.3 Issues

The change in LSD model will add significant work to $FS.

HFS wave 1 is only providing long name support. The original goals of HFS-1 included manipulation of NT file attributes. $FS is critically dependent on the support of NT attributes by the operating system.

The requirements for NT interoperability are unclear, and therefore are not currently being considered during the design of $FS. This may have a significant impact on $FS and $NT nearer to the Storage Server V2 ship date.

1.3.4 Life Of Project Status -

Planning and Design

1.3.5 Defect Containment Activities -

Not applicable - no engineering is underway yet. The project is looking at design reviews, walkthroughs, code reviews and testing.

1.3.6 Dependencies -

By us: HFS waves 1 and 2, LSD, VDC, Backup
On us: PATHWORKS, Storage management tools, $NT (NT clerk)

1.3.7 Accomplishments Not Met (and Why) -

None

1.4 Virtual Data Cache (VDC) Project (Palmer)

VDC is a component project of the Storage Server, whose V1 goal is to improve I/O latency for existing Files-11 mission critical environments. VDC will provide a new scalable high performance file data cache to replace VIOC in the Raven release. Subsequent versions of VDC will provide data caching for the new $FS filesystem, and possibly for PATHWORKS.

1.4.1 Current Status -

VDC has continued to make good progress despite most of the team being on vacation for much of the month.

1. VDC is now planned to ship as a replacement for VIOC in the Raven release of OpenVMS AXP.

2. VDC has achieved closure on high level features following a Storage Server review.

3. A first draft of a detailed Functional Specification has been released.

4. Detailed design has continued, with a trip to ZKO to work with external dependent groups.

5. Good progress is being made on detailed design work.

The Storage Server program has decided that VDC should ship as a replacement for the existing VIOC cache in the Raven release. Initial discussions have been held with the Raven team, who are now waiting for a project plan from VDC. The effects of this plan on integration, qualification and schedule are being assessed.

The decision to ship in Raven has allowed VDC to move more quickly than expected to close on its functional deliverables. This was extremely welcome, but requires VDC to rework its original plan for plan. The team has made closure on functionality a top priority to allow it to exit the specification phase.
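(As background for readers outside the project, the following is a minimal sketch of the kind of structure a file data cache such as this is built around: cached blocks looked up by a key of file identifier plus virtual block number. It is purely illustrative, not the VDC design; every name in it is invented, and a real cache adds a replacement policy, locking and cluster-wide invalidation.)

    /* Illustrative file data cache keyed on (file id, virtual block number).
     * Not the VDC design: no replacement policy, locking or cluster-wide
     * invalidation is shown. */

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 64
    #define BLOCK_BYTES 512

    typedef struct {
        int  valid;
        int  file_id;
        int  vbn;
        char data[BLOCK_BYTES];
    } cache_entry;

    static cache_entry cache[CACHE_SLOTS];

    static unsigned slot_for(int file_id, int vbn)
    {
        return (unsigned)(file_id * 31 + vbn) % CACHE_SLOTS;   /* trivial hash */
    }

    /* Return the cached data for (file_id, vbn), or NULL on a miss. */
    static const char *cache_lookup(int file_id, int vbn)
    {
        cache_entry *e = &cache[slot_for(file_id, vbn)];
        if (e->valid && e->file_id == file_id && e->vbn == vbn)
            return e->data;
        return NULL;
    }

    /* Remember a block after it has been read from (or written to) disk. */
    static void cache_fill(int file_id, int vbn, const char *data)
    {
        cache_entry *e = &cache[slot_for(file_id, vbn)];
        e->valid = 1;
        e->file_id = file_id;
        e->vbn = vbn;
        memcpy(e->data, data, BLOCK_BYTES);
    }

    int main(void)
    {
        char block[BLOCK_BYTES] = "some file data";
        cache_fill(42, 7, block);
        printf("hit:  %s\n", cache_lookup(42, 7) ? "yes" : "no");
        printf("miss: %s\n", cache_lookup(42, 8) ? "yes" : "no");
        return 0;
    }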
The first draft of a complete functional specification was delivered a week later than originally planned, in part to incorporate extra detail following the Raven decision. A follow-up review was held with Storage Server business and engineering management which was able to close on the features and P0/P1/P2 priorities for the project. Although agreement on the approach was achieved, the last remaining functional issue is to define accurate, measurable performance P0s for the project. The VDC team will look to close on this remaining issue in the next week. At this point, functional requirements on VDC will be closed and the project will continue with detailed functional specification with external groups.

Detailed design work is continuing well, with code flows and internal interfaces now being defined. Closure on the decision to ship in the Raven release has meant some additional design work on integration is required. Some unexpected design issues with the Files-11 XQP were found, but these have now been resolved.

Julian Palmer travelled to ZKO to meet with groups where VDC has a dependency. Key outcomes were agreement with the RMS group on functional behaviour with VDC, and a commitment to deliver those capabilities in Raven. A final functional specification is expected from the RMS group in the next week. Agreement was also reached with the Exec group on initial memory management support for VDC in Raven. Further work is continuing to finalise deliverables with these and other groups.

The user mode XQP work is progressing on schedule. The usermode stack can now successfully perform basic QIO operations, even running some of the Spiralog V1 regression tests. The next step is to integrate this with the XQP test harness to provide the user mode test environment for VDC regression testing.

VDC needs to rework its plan for plan schedule. The plan has proven overly aggressive, underestimating ramp up time, vacation time, rework from the ZKO trip, and extra time spent on external dependencies. A new plan for plan schedule will be generated during the coming week.

The key milestones for the next 2 months are closure on performance requirements, a new plan for plan schedule, delivery of versions of the functional and design specifications, and completion of the user-mode XQP work.

Other events over the last month:

1. Congratulations to Douglas and Janet Hanley, who got married last month!

1.4.2 Life Of A Project Status -

VDC continues to follow Life of a Project. The VDC problem statement and high level feature requirements are now closed. An initial draft of the functional specification has been delivered, and detailed design is progressing.

1.4.3 Defect Containment Activities -

Following a presentation by Ed Maher, VDC has committed to implementing a Defect Containment process as part of its engineering activities. Details will be provided in the Project Plan. A QAR database has been set up to track outstanding project issues and defects.

1.4.4 Dependencies -

The known dependencies for VDC are listed below. Additional dependencies may be identified on completion of the design.

1. VDC release is now tied to the Raven release of OpenVMS AXP.

2. Latent Support changes in Raven.

3. Exec changes (being managed by VDC with the Exec team).

4. RMS changes (being managed by VDC with Elinor Woods).

5. F11BXQP changes (being managed by VDC within EDO).

6. $FS is dependent on VDC for Storage Server V2 (being managed within EDO).
1.4.5 Other Issues -

None

1.5 Log-structured Disk (LSD) Project (Whitaker)

Senior technical staff in EDO, as well as Bill Matthews, have been working on the pros and cons of the design alternatives, as well as making sure that we have covered a broad spectrum of alternatives. This has resulted in a proposal to follow a copy-on-write scheme. The major advantages of the new design are:

1. Predictable performance

2. Major decrease in resource consumption

3. Simpler design, reducing risk on the Storage Server V1 deliverable

Although the technology may have changed, the set of requirements on the LSD project remains the same. The initial V1 deployment solves the following problem:

1. Online Backup for Files-11, by allowing a single clone of a disk to be taken.

The next steps in the process are:

1. Produce a document outlining the alternatives and the proposal

2. Review the document with several external groups who have interests in the same areas and directions (OpenVMS ZK, Storage, UNIX)

3. Produce a schedule for a schedule

In the meantime, there are a number of design options within a copy-on-write scheme that we need to explore by analysing various aspects. To make this analysis real, we are developing a prototype. This will be used to assess the various options.

Events last month:

1. Initial prototype of copy-on-write completed as a proof of concept

2. Latest version of the functional specification completed (independent of technology)

3. Met with the Argus project to discuss the integration of Argus and LSD

4. Met with the QIO Server project to discuss LSD requirements and design issues

5. Met with the Fusion project in the storage group to discuss the copy-on-write scheme

1.5.1 Current Status -

The LSD team will continue to work on the prototype so that we can ensure we have chosen the correct design options. This will feed into the design document. The design we have so far suggests that we may be able to remove the QIO Server dependency. We have begun working on a schedule for a schedule.

1.5.2 Life Of A Project Status -

Given the change in technology, we have restarted detailed design work. Some parts of the original design can be carried over to the copy-on-write scheme, in particular all the work we have done working out how to integrate this component into OpenVMS and the impact on existing utilities such as SHOW DEVICE.

1.5.3 Dependencies -

1. We are working with the Fusion project to share a common understanding of workloads and ideas around the copy-on-write scheme.

2. OpenVMS Latent support (unchanged by the technology change)

3. VDC support for cache flush

4. OpenVMS Backup support for incremental backup via clones

5. QIO Server cluster distribution mechanism

1.5.4 Other Issues -

None.

1.5.5 Accomplishments Not Met -

None.

1.5.6 Accomplishments Not Met -

None this month. The plan for plan must be reworked from now on though.
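(To make the copy-on-write scheme discussed above concrete for readers outside the project, the following minimal sketch shows the idea behind taking a single clone of a disk without copying any data up front: the clone shares the live disk's blocks until one of them is about to be overwritten, at which point the old contents are preserved for the clone. This is purely illustrative - it is not the LSD design - and all of the names are invented.)

    /* Illustrative copy-on-write clone of a small in-memory "disk".
     * Not the LSD design: a real implementation works on disk blocks,
     * persists its block map, and handles many clones and failures. */

    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS     8
    #define BLOCK_BYTES 16

    typedef struct {
        char live[NBLOCKS][BLOCK_BYTES];    /* the active disk                 */
        char saved[NBLOCKS][BLOCK_BYTES];   /* blocks preserved for the clone  */
        int  copied[NBLOCKS];               /* 1 = clone now has its own copy  */
    } cow_disk;

    /* Write to the live disk, preserving the old contents for the clone
     * the first time a still-shared block is modified. */
    static void cow_write(cow_disk *d, int blk, const char *data)
    {
        if (!d->copied[blk]) {
            memcpy(d->saved[blk], d->live[blk], BLOCK_BYTES);
            d->copied[blk] = 1;
        }
        memcpy(d->live[blk], data, BLOCK_BYTES);
    }

    /* Read the clone: shared blocks come from the live disk, modified
     * blocks come from the preserved copies, so the clone stays frozen
     * at the moment it was taken - which is what lets backup run online. */
    static const char *clone_read(const cow_disk *d, int blk)
    {
        return d->copied[blk] ? d->saved[blk] : d->live[blk];
    }

    int main(void)
    {
        cow_disk d;
        char newdata[BLOCK_BYTES] = "updated";
        memset(&d, 0, sizeof d);
        strcpy(d.live[0], "original");
        /* ... clone taken here: nothing is copied yet ... */
        cow_write(&d, 0, newdata);
        printf("live disk: %s\n", d.live[0]);          /* "updated"  */
        printf("clone:     %s\n", clone_read(&d, 0));  /* "original" */
        return 0;
    }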
1.6 Spiralog V1.2 Project (Johnson)

Primary Objective: Volume Full Problem Solution

Secondary Objectives: Spiralog V1.0 Maintenance Update; targeted improvements in Spiralog V1.x performance

The Spiralog V1.2 release will happen on the OpenVMS V7.1 (GRYPHON) kit as an unsupported kit, hopefully on the SSB CD. This release is specifically aimed at solving one problem: recovery from the 'volume full' condition. Additionally, the release will include bug fixes for problems reported against Spiralog V1.0 and Spiralog V1.0-1 from the field.

The Spiralog T1.2 release is intended to achieve the following:

1. provide fixes for CLD and SS problems only

2. provide a pro-active maintenance offering

1.6.1 Changes To P.O.R. -

With the ramp down of the Spiralog T1.2 effort, it is proposed that Spiralog T1.2 will now be delivered on the OpenVMS V7.1 (GRYPHON) kit as an unsupported kit with a reduced level of testing from local sources. There will be no testing from external sources.

Spiralog T1.2 will use OpenVMS V7.1 as the primary delivery platform. EDO will perform testing on both OpenVMS V7.0 and OpenVMS V7.1, to ensure the V7.0 compatible build continues to work on OpenVMS V7.1.

1.6.2 Dependencies -

EDO test team - the EDO test team needs to be advised of any changes in the proposed schedule for Spiralog T1.2.

OpenVMS Gryphon release changes - the Spiralog T1.2 project needs to track the Gryphon changes to ensure that T1.2 will work on Gryphon when both are released.

1.6.3 Activities Last Month -

The team is still in bugfix mode and is aiming towards providing a kit ready for inclusion on the GRYPHON SSB release CDs. The plan to ship the T1.2 kit on the back of the GRYPHON kit is still not finalised. We are also still waiting for a date by which the final kit needs to be ready.

Stephen Tweedie joined the team this month from Edinburgh University. He has already been through the internal training for new hires and is now working on a long-standing LFS QAR.

Alasdair Baird, recently returned from a week's holiday, has spent much of last month participating in code reviews for the NT Block Server project. He is now completing work on correcting some 128-bit arithmetic anomalies.

There have been no further QAR closures since the last report. Outstanding QARs currently lie at S:6 (+3DF), H:73 (+25DF) in the local database, with another 3 highs in the EVMS-GRYPHON database. This represents an increase of 5 highs since the last report, although many of these appear to be duplicates of earlier problems. The current priority is to fix showstoppers where possible or, failing that, to fix highs.

1.6.4 Critical Path Events Next Month -

Finalisation of the delivery plan.

Complete the T1.2 kit. Dates TBS by the Gryphon release team.

1.6.5 Critical Path Events Past Month -

None

1.6.6 Quality Initiatives -

All code check-ins for Spiralog T1.2 require the following before acceptance into the code stream:

1. peer review of code changes for the bug fix

2. build of code changes for the bug fix on Alpha

3. extensive testing of code changes for the bug fix

4. SCT conference (DOLLAR-V1R) entry detailing:

   o description of change
   o who performed the code change review
   o sources being changed
   o what testing was done to confirm the fix
   o module differences

1.7 EDO HFS (McAteer)

Goal(s) - Make changes to the QIO interface and ODS2 to provide long file names and extended attributes in support of the OVMS affinity strategy.

1.7.1 ACCOMPLISHMENTS -

o A number of documents were issued for public consumption:

   QIO Interface - first review 29 Sept; it is expected that this will require several passes before it is finalised.

   ODS-5 - still undergoing changes, due to be reviewed 10 October.

   Common XQP/RMS routines - routines which could be common to these processes; no review scheduled, but informal meetings with RMS take place regularly.

   Project Plan - internal to EDO; will be distributed once it has been finalised.

   Defect Containment Plan - a first draft was reviewed by Ed Maher and his comments have been included in the second draft. This has been passed to him, but he has no time to review it until he returns from EDO.
o Other documents were produced for internal use:

   Header evolution design - this is completed and will be incorporated into the second draft of the collated design specification.

   Parsing the ACL chain - this is expected during the first week of October. Once completed it will be included in the collated design.

   Design doc - first draft completed 4 October; further drafts will be produced as required.

   Unicode investigation - this has been completed and the findings included in the design doc.

o Design Walkthroughs

   These have proven to be very successful in flushing out problems/gaps in the current designs. They will be an ongoing process.

o INIT/MOUNT

   Prototypes have been produced that will operate on ODS-5 disks.

o Trip to ZKO

   Rod and Paul will travel to ZKO for 10 days of meetings with the folks there who have a dependence on the file system for their HFS work.

o Project Plan

   Originally planned for completion on 18 October. This may be delayed slightly due to re-planning of the base levels and the outcome of the trip to ZKO by Rod and Paul. An early draft was given to Richard Critz to allow him to work on the interdependencies.

1.7.2 PLANS FOR NEXT MONTH -

o Complete and review the design document with a view to beginning implementation.

o Produce BL0 (basic file naming - for external evaluation only; depending on how useful this is perceived to be, this may be dropped).

o Subject to design approval:

   - Start directory lookup algorithm enhancements.
   - Begin other implementation.

o Complete the detailed project plan.

o Continue to refine the document set.

o Keep on top of $FS issues.

o Continue to review the list of requirements.

o Keep the WWW page up to date.

1.7.3 LIFE OF A PROJECT (LOP) STATUS -

Problem Statement                   Complete
Investigation Report                Complete
Outline Project Plan                Complete (subject to ongoing review)
Detailed Project Plan               18-Oct-96
Detailed Design/Functional Specs    Began 24 June 1996 with three resources
Implementation begins (P0)          21-Oct-96
Implementation ends (P0)            31-Mar-97
Integration testing begins (P0)     9-Jan-97
Integration testing ends (P0)       1-May-97
System testing begins (P0)          9-May-97

NOTE: These dates are provisional until the detailed project plan has been produced and represent the current best guess.

1.7.4 DEPENDENCIES/ISSUES -

QIO Interface

   The QIO interface has been issued as a first draft, with the review being fairly uncontentious. Further drafts will be required before the document is finally accepted.

Resourcing

   The HFS team is now at full strength with the addition of Campbell Fraser and the allocation of Paul Randal to the project team. However, due to show stopper QARs in Gryphon, Paul Randal and Ian Brockbank were needed to help with the maintenance effort in the XQP team. This affected the project slightly and will again in the future if this circumstance arises.

External Requirements

   So far there is no indication of when any of the other groups have a dependency on us to produce any specific component of the file system. When and if these become known, the project plan may have to be reviewed. An outline of what base levels we are intending to produce has been included in the project plan. Once this has been circulated we may have a better indication of how these external dependencies may be managed.

Project Plan

   The project plan is in a constant state of change currently as it is being refined. Only once the design phase has been completed can a detailed project plan be produced.

Dependencies

   Resource availability
   Other groups' requirements
   Further design/investigation outcome

1.8 NT Blockserver / Spiralog NT Client (Tanner)

Goal(s) - Serve blocks from a Log Structured Disk (LSD) from OpenVMS to NT. Present an LSD to NT users as if it were a local disk.

1.8.1 ACCOMPLISHMENTS -

o The blockserver has been successfully installed on MOVIES, and will be demonstrated at DECUS next month.

o Stress tests are continuing to be run on the blockserver baselevel.

o The whole team has continued to review all the blockserver code in a parallel effort with the stress testing.

o The team is scheduled to run the Dollar V1 and PROD performance benchmark tests at the beginning of October.

1.8.2 PLANS FOR NEXT MONTH -

o Complete gathering initial performance figures using the Dollar perf tests.

o Sew up the blockserver baselevel by tracking down the final gross problem (Stop Press - a bug in NT was found to be responsible for this problem, and a satisfactory workaround is now in place; kudos to Rudi Martin for tracking this down), cleaning up the build environment, and cleaning up any out-of-date documentation.

o Dougie to start investigations for a productised blockserver.

1.8.3 DEPENDENCIES/ISSUES -

To complete the productisation of the blockserver on LSD, we are dependent on the availability of the LSD. Note that all the other $NT blocking issues have been removed by the change to the plan of record by the $NT review.

We need to get a clear statement from product management as to how/if the blockserver fits into Storage Server V1.
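(As an illustration of what "serving blocks ... as if it were a local disk" amounts to, the following sketch shows the server side of a single block read request. The request and reply layouts are invented for the example; this is not the actual blockserver protocol between NT and OpenVMS, and transport, writes and error recovery are not shown.)

    /* Illustrative block server: accept a (volume, LBN, count) read
     * request and return the block data.  All names are invented. */

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_BYTES 512
    #define MAX_BLOCKS  8

    typedef struct {
        unsigned volume;                    /* which served volume      */
        unsigned lbn;                       /* starting logical block   */
        unsigned count;                     /* number of blocks to read */
    } block_request;

    typedef struct {
        int      status;                    /* 0 = success              */
        unsigned count;                     /* blocks actually returned */
        char     data[MAX_BLOCKS * BLOCK_BYTES];
    } block_reply;

    /* Stand-in for the local disk read the OpenVMS side would perform. */
    static int read_local_blocks(unsigned volume, unsigned lbn,
                                 unsigned count, char *buf)
    {
        (void)volume;
        (void)lbn;
        memset(buf, 0, count * BLOCK_BYTES);   /* pretend the disk is zeroed */
        return 0;
    }

    /* Handle one client request: validate it, read the blocks locally,
     * and build the reply that would be sent back to the NT client. */
    static void serve_request(const block_request *req, block_reply *rep)
    {
        if (req->count == 0 || req->count > MAX_BLOCKS) {
            rep->status = -1;                  /* reject malformed requests */
            rep->count = 0;
            return;
        }
        rep->status = read_local_blocks(req->volume, req->lbn,
                                        req->count, rep->data);
        rep->count = (rep->status == 0) ? req->count : 0;
    }

    int main(void)
    {
        block_request req = { 1, 100, 2 };
        block_reply rep;
        serve_request(&req, &rep);
        printf("status %d, %u block(s) returned\n", rep.status, rep.count);
        return 0;
    }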
1.9 XQP/DECdtm/IPC/DECdfs (McLaren)

Goal(s) -

o Maintain the XQP, DECdtm, IPC and DECdfs facilities.

o Support the release teams in integrating these facilities into the operating system releases.

o Transition DECdtm, IPC and DECdfs to the customer satisfaction and quality group.

1.9.1 ACCOMPLISHMENTS -

F11BXQP:

o Paul and Duncan have spent most of this month working on CLDs. On the CLD front, items of note are:

   1. The R8/R9 problem is still being monitored by the file system maintainers.

   2. The INVALID LOCK ID problem is being worked by Ronnie Millar and Greg Jordan; however, the problems are still being counted against the file system at this time.

   3. Images for kits have been tested, and we have problems with the VAX V6.1 and V7.1 builds.

      - Build instructions in the SCT notes were not followed correctly.

      - We have some problem which crashes MV3100 systems during system startup. Larry Griswold is looking at the problem, which manifests itself in SYS$PKNDRIVER, to try to give us a pointer to what is being corrupted.

   4. An MSCP compatible logging driver has been developed to help us work the problem at UBS.

DECdtm / IPC:

o Support for any new DECdtm/IPC problems is now carried out in the U.S. Kevin Playford and Alan Dewar are still cleaning up a few minor Gryphon problems.

DECdfs:

o A date (4-Nov-96) has been set for the start of the transition of DECdfs to the US.

o Jim Brankin is working with DECdfs to see if the Phase V version can be changed back to using the ALTSTART interface. This would greatly reduce the support load for DECdfs.

1.9.2 PLANS FOR NEXT MONTH -

o Continue to maintain the XQP, DECdtm, IPC and DECdfs facilities.

o Plan the transition of DECdfs to the new group in the US (starts 4-Nov-96).

1.9.3 DEPENDENCIES/ISSUES -

o Overall staffing for the maintenance effort is below critical mass; this is due to the amount of work needed in HFS, and back-filling for the late transition of components to ZKO.
It is unlikely that this will improve until well into next year.