1  V73_Features
   The following topics summarize the new features for OpenVMS
   Version 7.3.
 

2  OpenVMS_e-Business
   This section provides information on the e-Business technologies
   that are included in the Compaq OpenVMS e-Business Infrastructure
   Package with OpenVMS Alpha Version 7.3. This package provides
   key Internet and e-Business software technology that enhances
   the base OpenVMS Alpha operating system. These technologies are
   licensed with the OpenVMS Alpha operating system.

   The Compaq OpenVMS e-Business Infrastructure Package Version 1.1
   contains the following software and accompanying documentation:

   o  Compaq Secure Web Server for OpenVMS Alpha Version 1.0-1
      (based on Apache)

   o  Compaq COM for OpenVMS Version 1.1B

   o  Compaq Java 2 SDK, Standard Edition v 1.2.2-3

   o  Compaq Fast Virtual Machine (Fast VM) for the Java 2 Platform
      on OpenVMS Alpha v 1.2.2-1

   o  Compaq XML (Extensible Markup Language) Technology Version 1.0

   o  Attunity Connect "On Platform" Package Version 3.0.0.4

   o  Compaq Enterprise Directory Services for e-Business Version
      5.0

   o  Reliable Transaction Router (RTR) Version 4.0

   o  Compaq BridgeWorks Version 1.0A

   Refer to the Compaq OpenVMS e-Business Infrastructure Package
   Version 1.1 CD-ROM Booklet and the Compaq OpenVMS e-Business
   Infrastructure Package Version 1.1 Software Product Description
   (SPD 80.58.00), both included in the e-Business package, for more
   detailed information.

   For up-to-date information on OpenVMS e-Business technologies,
   refer to the following web site:

   http://www.openvms.compaq.com/business/index.html

   The following sections briefly describe the e-Business software
   and provide pointers to web sites with further information. Refer
   to the Compaq OpenVMS e-Business Infrastructure Package SPD
   for technology descriptions, other software requirements, and
   licensing information. The technology on the e-Business CD-ROM
   has been tested and qualified with OpenVMS Alpha Version 7.2-1
   and later.
 

3  Secure_Web_Server
   Compaq Secure Web Server for OpenVMS Alpha (CSWS) is based
   on the popular Apache Web Server from the Apache Software
   Foundation. Building on the source code from the Apache Software
   Foundation (http://www.apache.org), Compaq OpenVMS engineering
   has incorporated and fully integrated OpenSSL with mod_ssl, the
   most popular open-source implementations of SSL.

   The product is also available to download from the CSWS web site:

   http://www.openvms.compaq.com/openvms/products/ips/apache/csws.html
 

3  COM_for_OpenVMS
   Component Object Model (COM) is a technology from Microsoft that
   allows developers to create distributed network objects. Digital
   Equipment Corporation and Microsoft jointly developed the COM
   specification. The Compaq COM for OpenVMS kit included on the e-
   Business CD-ROM provides all the code and documentation you need
   to install Compaq COM for OpenVMS on your system and to develop
   COM applications.
 

3  Java_2_SDK,_Standard_Edition
   The Java Software Development Kit (SDK) provides an environment
   in which to develop and deploy Java applications on OpenVMS
   Alpha. Java applications can be written once and run on any
   operating system that implements the Java run-time environment,
   which consists primarily of the Java Virtual Machine (JVM).

   The Java 2 SDK, Standard Edition, for OpenVMS Alpha kit is
   included on the e-Business CD-ROM, or you can download this kit
   from the Compaq Java home page at the following web address:

   http://www.compaq.com/java/download/index.html
 

3  Fast_Virtual_Machine_for_Java_2
   The Compaq Fast VM for Java 2 is new Just-In-Time (JIT) compiler
   technology designed to provide optimal Java run-time performance
   on OpenVMS Alpha systems. The Fast VM for Java 2 offers
   significant performance advantages over the Classic JIT provided
   with the Compaq Java 2 SDK, Standard Edition.

   The Fast VM for OpenVMS Alpha kit is included on the e-Business
   CD-ROM, or you can download this kit from the Compaq Java home
   page at the following web address:

   http://www.compaq.com/java/download/index.html
 

3  Compaq_XML_Technology
   The following components are provided on the e-Business CD-ROM
   using open source software from the Apache Software Foundation:

   o  XML parsers in Java and C++

   o  XSLT style sheet processors in Java and C++

   This technology provides applications with the ability to parse,
   generate, manipulate, validate, and transform Extensible Markup
   Language (XML) documents and data.
 

3  Attunity_Connect_On_Platform
   Attunity Connect is object-oriented middleware that facilitates
   the development of applications that access, integrate, and
   update data from multiple, heterogeneous sources across a wide
   range of operating system platforms. With Attunity Connect, you
   can extend the life of your existing data and applications and
   preserve your significant IT investments.

   The e-Business CD-ROM contains the Attunity Connect "On Platform"
   Package for OpenVMS Alpha. You can also download the Attunity
   Connect "On Platform" Package from the following OpenVMS web
   site:

   http://www.openvms.compaq.com/openvms/products/ips/attunity/
 

3  Enterprise_Directory_Services
   Compaq OpenVMS Enterprise Directory for e-Business combines the
   best of both industry standard LDAPv3 and X.500 capabilities, and
   delivers robust and scalable directory services across intranets,
   extranets, and the Internet to customers, suppliers and partners.
   Lightweight Directory Access Protocol (LDAP) support allows
   access by a myriad of LDAP-based clients, user agents, and
   applications. The X.500 support brings high performance,
   resilience, advanced access controls, and easy replication across
   the enterprise.

   For further information, refer to the Compaq OpenVMS Enterprise
   Directory for e-Business Software Product Description (SPD
   40.77.xx) included on the e-Business CD-ROM in the Enterprise
   Directory Services documentation directory.
 

3  Reliable_Transaction_Router_(RTR)
   Reliable Transaction Router (RTR) is fault-tolerant transactional
   messaging middleware used to implement large, distributed
   applications using client/server technology. Reliable Transaction
   Router enables computing enterprises to deploy distributed
   applications on OpenVMS Alpha and VAX systems.

   Refer to the Reliable Transaction Router for OpenVMS Software
   Product Description (SPD 51.04.xx) included on the e-Business
   CD-ROM for additional information; or you can access the RTR web
   site at:

   http://www.compaq.com/rtr/
 

3  Compaq_BridgeWorks
   Compaq BridgeWorks is a distributed application development
   and deployment tool for OpenVMS 3GL applications. BridgeWorks
   consists of a GUI development tool on the Windows NT desktop, a
   server manager component on OpenVMS, and extensive online help.
   BridgeWorks provides developers with an easy means to create
   distributed applications using OpenVMS as the enterprise server
   and Windows NT as the departmental server.

   For more information on Compaq BridgeWorks, refer to the Compaq
   OpenVMS e-Business Infrastructure Package Software Product
   Description.
 

2  User_Features
   This section describes new features of interest to OpenVMS
   users.
 

3  DCL_Commands_and_Lexical_Functions
   This section describes new and changed DCL commands, qualifiers,
   and lexical functions for OpenVMS Version 7.3. The following
   table contains a summary of these changes.

   For more information, refer to the OpenVMS DCL Dictionary.

   DCL Command      Documentation Update

   ANALYZE/IMAGE    A new qualifier, /SELECT, has been added, along
                    with an example.
   ANALYZE/OBJECT   A new qualifier, /SELECT, has been added, along
                    with an example.
   ANALYZE/PROCESS  A new qualifier, /[NO]IMAGE_PATH, has been
                    added, along with an example.
   DELETE           A new qualifier, /BITMAP, has been added to
                    support Write Bitmap.
   DELETE/INTRUSION A new qualifier, /NODE, has been added, along
                    with an example, to support Cluster-wide
                    Intrusion.
   DIRECTORY        A new qualifier, /CACHING_ATTRIBUTE, has been
                    added to support Extended File Cache (XFC).
   DISMOUNT         A new qualifier, /POLICY, has been added to
                    support Write Bitmap.

                    A new qualifier, /FORCE_REMOVAL, has been added
                    to support Volume Shadowing.
   DUMP             A new qualifier, /PROCESS, has been added.
   INITIALIZE       The INITIALIZE description has been updated to
                    include information about Extended File Cache
                    (XFC).

                    A new qualifier, /SHADOW, has been added to
                    support Volume Shadowing.
   MOUNT            The MOUNT command has been moved to the
                    OpenVMS DCL Dictionary from the OpenVMS System
                    Management Utilities Reference Manual.

                    The MOUNT description has been updated to
                    include information about Extended File Cache
                    (XFC).

                    A new qualifier, /POLICY, has been added to
                    support Write Bitmap.
   SET AUDIT        A new keyword, SERVER, has been added under the
                    LOGFAILURE, LOGIN, and LOGOUT keywords.

                    New text has been added to the /NEW_LOG
                    qualifier.
   SET              This new DCL command has been added to support
   CACHE/RESET      Extended File Cache (XFC).
   SET DEVICE       The following new qualifiers have been added
                    to support Volume Shadowing: /FORCE_REMOVAL,
                    /MEMBER_TIMEOUT, /MVTIMEOUT, /READ_COST, /SITE,
                    /COPY_SOURCE, /ABORT_VIRTUAL_UNIT.
   SET DISPLAY      The logical, DECW$SETDISPLAY_DEFAULT_TRANSPORT,
                    has been added to this command.
   SET FILE         Two new qualifiers, /SHARE and /CACHING_
                    ATTRIBUTE, have been added. The /CACHING_
                    ATTRIBUTE qualifier supports Extended File Cache
                    (XFC).
   SET PROCESS      The functionality of the qualifier, /[NO]DUMP,
                    has been extended to include other processes.
                    The /DUMP qualifier also has a new option,
                    NOW, to initiate an immediate dump of another
                    process.
   SET RMS_         Two new qualifiers, /CONTENTION_POLICY and
   DEFAULT          /QUERY_LOCK, have been added, and the examples
                    have been updated.
   SET SERVER       Added support for the Registry, including new
                    qualifiers and examples.
   SET VOLUME       A new qualifier, /[NO]WRITETHROUGH, has been
                    added to support Extended File Cache (XFC).

                    The /HIGHWATER qualifier is valid for Files-11
                    On-Disk Structure Level 5 disks.
   SHOW CPU         The following new qualifiers have been added:
                    /EXACT, /HIGHLIGHT, /OUTPUT, /PAGE, /SEARCH, and
                    /WRAP.
   SHOW DEVICES     A new qualifier, /BITMAP, has been added to
                    support Write Bitmap, along with examples.

                    The /FULL qualifier now displays the worldwide
                    identifier (WWID) for Fibre Channel tape
                    devices.
   SHOW INTRUSION   A new qualifier, /NODE, has been added, along
                    with an example, to support Cluster-wide
                    Intrusion.
   SHOW LICENSE     The qualifier, /CHARGE_TABLE, has been added as
                    a synonym for the /UNIT_REQUIREMENTS qualifier.
   SHOW MEMORY      The /CACHE qualifier and examples have been
                    updated for Extended File Cache (XFC).

                    The /FILES and /FULL qualifiers and examples
                    have been updated for Large Page Files.
   SHOW RMS_        The example has been updated.
   DEFAULT
   SHOW SERVER      This command has been added in support of the
                    Registry.
   UNLOCK           This command is now obsolete. Use the SET
                    FILE/UNLOCK command.

   DCL Lexical      Documentation Update

   F$GETDVI         The item codes, MT3_DENSITY, MT3_SUPPORTED, and
                    WWID have been added, and the MOUNTCNT item code
                    has been updated.

                    The item codes, DEVTYPE, DEVCLASS, and DEVICE_
                    TYPE_NAME have been updated, and an example
                    has been added. Tables 1-7 and 1-8 have been
                    removed.
   F$GETQUI         The JOB_STATUS item code list has been updated.
   F$GETJPI         The MULTITHREAD item code has been added.
   F$GETSYI         The MULTITHREAD and DECNET_VERSION items have
                    been added.
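
   The new item codes can be exercised directly from DCL. The
   following minimal sketch assumes an interactive process; the
   output varies by system:

   ```
   $ ! New in V7.3: DECnet version and kernel-threads item codes
   $ WRITE SYS$OUTPUT F$GETSYI("DECNET_VERSION")
   $ WRITE SYS$OUTPUT F$GETJPI("","MULTITHREAD")
   ```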
 

3  Utility_Routines_Online_Help
   As of Version 7.3, online help includes all the OpenVMS utility
   routines that are described in the OpenVMS Utility Routines
   Manual, including the following:

      ACL_Editor
      BACKUP_API
      CLI_Routines
      CONV$_Routines
      CQUAL_Routines
      DCX_Routines
      DECTPU
      EDT_Routines
      FDL_Routines
      LBR_Routines
      LDAP_Routines
      LGI_Routines
      MAIL_Routines
      NCS_Routines
      PSM_Routines
      SMB_Routines
      SOR_Routines

   For OpenVMS Version 7.3, several online help topics have been
   renamed, as follows:

   Old Topic
   Name           New Topic Name

   BACKUP         BACKUP_Command
   FDL            FDL_Files
   MAIL           MAIL_Command
   NCS            NCS_Command
 

3  MIME_Utility
   The following new commands and qualifiers have been added to the
   Multipurpose Internet Mail Extension (MIME) utility:

   Command        Description

   ADD/BINARY     Sets the Content-Type to application/octet-stream
                  and Content-Transfer-Encoding to Base64. This
                  format can be used to represent an arbitrary
                  binary data stream.
   SHOW option    Displays information about the MIME environment.
                  Possible options are CONTENT_TYPE, FILE_TYPES, and
                  VERSION.

   For more information about the MIME utility commands and
   qualifiers, refer to the OpenVMS User's Manual.
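
   As a hedged sketch, a binary attachment might be encoded as
   follows. The file names are illustrative, and depending on the
   MIME utility version you may first need to open a message with
   OPEN or NEW:

   ```
   $ MIME
   MIME> NEW REPORT.TXT          ! illustrative message file
   MIME> ADD/BINARY DATA.ZIP     ! application/octet-stream, Base64
   MIME> SHOW VERSION            ! or CONTENT_TYPE, FILE_TYPES
   MIME> EXIT
   ```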
 

3  WWPPS_Utility
   The World-Wide PostScript Printing Subsystem (WWPPS) is a utility
   that allows you to print a PostScript file with various language
   characters on any PostScript printer. By embedding font data
   within the PostScript printable file, you can print the language
   characters even if the printer does not have the local language
   character fonts.

   For detailed instructions about using the WWPPS utility, refer to
   the OpenVMS User's Manual.

   For more information about the installation and administration of
   the WWPPS utility, refer to the OpenVMS System Manager's Manual.
 

2  System_Management_Features
   This topic describes new features of interest to OpenVMS system
   managers.
 

3  AlphaServer_GS_Series
   OpenVMS Version 7.3 provides support for Compaq's AlphaServer
   GS80, GS160, and GS320 systems, first introduced in OpenVMS
   Version 7.2-1H1. This support includes:

   o  OpenVMS support for hard and soft partitions (Galaxy) on
      AlphaServer GS160 and GS320 systems

   o  OpenVMS Resource Affinity Domain (RAD) support for
      applications

   o  CPU Online Replace
 

4  Hard_and_Soft_Partitions
   Hard partitioning is a physical separation of computing resources
   by hardware-enforced access barriers. It is impossible to read
   or write across a hard partition boundary. There is no resource
   sharing between hard partitions.

   Soft partitioning is a separation of computing resources by
   software-controlled access barriers. Read and write access across
   a soft partition boundary is controlled by the operating system.
   OpenVMS Galaxy is an implementation of soft partitioning.

   The way customers choose to partition their systems depends on
   their computing environments and application requirements. For
   more information about using hard partitions and OpenVMS Galaxy,
   refer to the OpenVMS Alpha Partitioning and Galaxy Guide.
 

4  Resource_Affinity_Domain_(RAD)_Support
   OpenVMS Alpha Version 7.3 provides non-uniform memory access
   (NUMA) awareness in OpenVMS memory management and process
   scheduling, first introduced in OpenVMS Version 7.2-1H1. This
   capability provides application support for resource affinity
   domains (RADs), to ensure that applications running on a single
   instance of OpenVMS on multiple quad building blocks (QBBs) can
   execute as efficiently as possible in a NUMA environment. A RAD
   is a set of hardware components (CPU, memory, I/O) with common
   access characteristics; it corresponds to a QBB in an AlphaServer
   GS160 or GS320 system.

   For more information about using the OpenVMS RAD support for
   application features, refer to the OpenVMS Alpha Partitioning and
   Galaxy Guide.
 

3  Daylight_Savings_Time
   System parameter AUTO_DLIGHT_SAV controls whether OpenVMS
   will automatically change system time to and from Daylight
   Savings Time when appropriate. A value of 1 tells OpenVMS to
   automatically make the change. The default is 0 (off). This is a
   static parameter.

   However, if you have a time service (such as DTSS), that time
   service continues to control time changes, and OpenVMS does not
   interfere. Do not enable automatic daylight savings time if you
   have another time service.
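
   Because AUTO_DLIGHT_SAV is static, a change takes effect only at
   the next reboot. A minimal sketch of enabling it through SYSGEN:

   ```
   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE CURRENT
   SYSGEN> SET AUTO_DLIGHT_SAV 1
   SYSGEN> WRITE CURRENT   ! static parameter; effective at next reboot
   SYSGEN> EXIT
   ```

   Adding the line AUTO_DLIGHT_SAV = 1 to MODPARAMS.DAT and running
   AUTOGEN preserves the setting across future AUTOGEN runs.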

   For more information, refer to the OpenVMS System Manager's
   Manual.
 

3  CPU_Online_Replace_(Alpha)
   With OpenVMS Alpha Version 7.3, you can replace secondary CPUs on
   a running system without rebooting, which provides increased
   system maintainability and serviceability. This feature is
   supported only on AlphaServer GS160/320 systems. Note that
   replacing the primary CPU requires rebooting.

   To use this feature, you must first download console firmware
   Version 5.9B from the following location:

   http://ftp.digital.com/pub/DEC/Alpha/firmware/

   You can then use the following DCL commands to replace a CPU
   without rebooting:

   1. Direct OpenVMS to stop scheduling processes on the CPU:

      $ STOP/CPU n

      (n is the number of the CPU to be stopped.)

   2. Power off the running CPU:

      $ SET CPU/POWER=OFF n

   3. When the light on the CPU module has turned from green to
      amber, physically remove the CPU module from the system and
      install the new CPU module.

   4. Power on the CPU:

      $ SET CPU/POWER=ON  n

   OpenVMS automatically adds the CPU to the active set of
   processors.

   Note that the Galaxy Configuration Utility (GCU) also supports
   this capability.
 

3  Class_Scheduler
   With OpenVMS Version 7.3, there is a new SYSMAN-based interface
   for class scheduling. This new class scheduler, implemented on
   both VAX and Alpha systems, gives you the ability to designate
   the amount of CPU time that a system's users may receive by
   placing the users into scheduling classes. Each class is assigned
   a percentage of the overall system's CPU time. As the system
   runs, the combined set of users in a class are limited to the
   percentage of CPU execution time allocated to their class. The
   users may get some additional CPU time if /WINDFALL is enabled
   for their scheduling class. Enabling /WINDFALL allows the
   system to give a small amount of CPU time to a scheduling class
   when a CPU is idle and the scheduling class's allotted time has
   been depleted.

   To invoke the class scheduler, you use the SYSMAN interface.
   SYSMAN allows you to create, delete, modify, suspend, resume, and
   display scheduling classes. Table 4-1 shows the SYSMAN command,
   CLASS_SCHEDULE, and its subcommands.

   Table 4-1 SYSMAN command: class_schedule

   Sub-
   command     Meaning

   ADD         Creates a new scheduling class
   DELETE      Deletes a scheduling class
   MODIFY      Modifies the characteristics of a scheduling class
   SHOW        Shows the characteristics of a scheduling class
   SUSPEND     Temporarily suspends a scheduling class
   RESUME      Resumes a scheduling class

   By implementing the class scheduler using the SYSMAN interface,
   you create a permanent database that allows OpenVMS to class
   schedule processes automatically each time the system boots.
   This database resides on the system disk in
   SYS$SYSTEM:VMS$CLASS_SCHEDULE.DATA. SYSMAN creates this file
   as an RMS indexed file when the first scheduling class is created
   by the SYSMAN command, CLASS_SCHEDULE ADD.
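
   The following is a hedged sketch of creating a class. The class
   name is illustrative, and the exact qualifier names and syntax
   (such as /CPULIMIT and /WINDFALL) should be verified against the
   OpenVMS system management documentation:

   ```
   $ RUN SYS$SYSTEM:SYSMAN
   SYSMAN> CLASS_SCHEDULE ADD MYCLASS /CPULIMIT=(PRIMEDAYS=20) /WINDFALL
   SYSMAN> CLASS_SCHEDULE SHOW MYCLASS
   SYSMAN> EXIT
   ```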

   In a cluster environment, SYSMAN creates this database file in
   the SYS$COMMON root of the [SYSEXE] directory. As a result, the
   database file is shared among all cluster members. By using
   SYSMAN's SET ENVIRONMENT command, you can define scheduling
   classes either on a cluster-wide or per-node basis.

   If desired, a system manager (or application manager) uses the
   permanent class scheduler to place a process into a scheduling
   class at process creation time. When a new process is created,
   Loginout uses process information from the SYSUAF file to
   determine whether the process belongs to a scheduling class and,
   if so, class schedules the process.

   By using the SYSMAN utility to perform class scheduling
   operations instead of the $SCHED system service, you gain the
   following benefits:

   o  You need not modify individual program images to control
      class scheduling. You can add, delete, and modify scheduling
      classifications from the SYSMAN utility.

   o  You can use SYSMAN to create a permanent class scheduling
      database file which allows processes to be class scheduled
      at process creation time and allows class definitions to be
      preserved in case of a system reboot.

   For more detailed information, refer to the following manuals:

      OpenVMS Programming Concepts Manual, Volume I
      OpenVMS DCL Dictionary: N-Z
      OpenVMS System Services Reference Manual: A-GETUAI
 

3  Dedicated_CPU_Lock_Manager_(Alpha)
   The Dedicated CPU Lock Manager is a new feature that improves
   performance on large SMP systems that have heavy lock manager
   activity. The feature dedicates a CPU to performing lock manager
   operations.

   A dedicated CPU has the following advantages for overall system
   performance:

   o  Reduces the amount of MP_SYNCH time

   o  Provides good CPU cache utilization
 

4  Implementing
   For the Dedicated CPU Lock Manager to be effective, systems
   must have a high CPU count and a high amount of MP_SYNCH due
   to the lock manager. Use the MONITOR utility and the MONITOR
   MODES command to see the amount of MP_SYNCH. If your system has
   more than five CPUs and if MP_SYNCH is higher than 200%, your
   system may be able to take advantage of the Dedicated CPU Lock
   Manager. You can also use the spinlock trace feature in the
   System Dump Analyzer (SDA) to help determine if the lock manager
   is contributing to the high amount of MP_SYNCH time.
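
   For example, the MP_SYNCH percentage can be observed with:

   ```
   $ MONITOR MODES/ALL   ! watch the MP synchronization (MP_SYNCH) line
   ```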

   The Dedicated CPU Lock Manager is implemented by the
   LCKMGR_SERVER process, which runs at priority 63. When the
   Dedicated CPU Lock Manager is turned on, this process runs in a
   compute-bound loop looking for lock manager work to perform.
   Because this process polls for work, it is always computable;
   and with a priority of 63, it never gives up the CPU, thus
   consuming a whole CPU.

   If the Dedicated CPU Lock Manager is running when a program calls
   either the $ENQ or $DEQ system services, a lock manager request
   is placed on a work queue for the Dedicated CPU Lock Manager.
   While a process waits for a lock request to be processed, the
   process spins in kernel mode at IPL 2. After the dedicated CPU
   processes the request, the status for the system service is
   returned to the process.

   The Dedicated CPU Lock Manager is dynamic and can be turned off
   if there are no perceived benefits. When the Dedicated CPU Lock
   Manager is turned off, the LCKMGR_SERVER process is in a HIB
   (hibernate) state. Once started, the process cannot be deleted.
 

4  Enabling
   To use the Dedicated CPU Lock Manager, set the LCKMGR_MODE
   system parameter. Note the following about the LCKMGR_MODE system
   parameter:

   o  Zero (0) indicates the Dedicated CPU Lock Manager is off (the
      default).

   o  A number greater than zero (0) indicates the number of CPUs
      that should be active before the Dedicated CPU Lock Manager is
      turned on.

   Setting LCKMGR_MODE to a number greater than zero (0) triggers
   the creation of a detached process called LCKMGR_SERVER. The
   process is created, and it starts running if the number of active
   CPUs equals the number set by the LCKMGR_MODE system parameter.
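
   To make the setting permanent, the parameter can be placed in
   MODPARAMS.DAT before running AUTOGEN; the threshold below is
   illustrative:

   ```
   ! Add to SYS$SYSTEM:MODPARAMS.DAT, then run AUTOGEN
   LCKMGR_MODE = 8    ! dedicate a CPU once 8 CPUs are active
   ```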

   In addition, if the number of active CPUs should ever be reduced
   below the required threshold by either a STOP/CPU command or by
   CPU reassignment in a Galaxy configuration, the Dedicated CPU
   Lock Manager automatically turns off within one second, and the
   LCKMGR_SERVER process goes into a hibernate state. If the CPU is
   restarted, the LCKMGR_SERVER process again resumes operations.
 

4  Using_with_Affinity
   The LCKMGR_SERVER process uses the affinity mechanism to set
   the process to the lowest CPU ID other than the primary. You can
   change this by indicating another CPU ID with the LCKMGR_CPUID
   system parameter. The Dedicated CPU Lock Manager then attempts
   to use this CPU. If this CPU is not available, it reverts to the
   lowest CPU other than the primary.

   The following shows how to dynamically change the CPU used by the
   LCKMGR_SERVER process:

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET LCKMGR_CPUID 2
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT

   To verify the CPU dedicated to the lock manager, use the
   following SHOW SYSTEM command:

   $ SHOW SYSTEM/PROCESS=LCKMGR_SERVER

   This change applies to the currently running system; after a
   reboot, the lock manager reverts to the lowest CPU other than
   the primary. To permanently change the CPU used by the
   LCKMGR_SERVER process, set LCKMGR_CPUID in your MODPARAMS.DAT
   file.

   Compaq highly recommends that a process not be given hard
   affinity to the CPU used by the Dedicated CPU Lock Manager.
   With hard affinity, when such a process becomes computable, it
   cannot obtain any CPU time, because the LCKMGR_SERVER process
   is running at the highest possible real-time priority of 63.
   However, the LCKMGR_SERVER detects once per second if there are
   any computable processes that are set by the affinity mechanism
   to the dedicated lock manager CPU. If so, the LCKMGR_SERVER
   switches to a different CPU for one second to allow the waiting
   process to run.
 

4  Using_with_Fast_Path_Devices
   OpenVMS Version 7.3 also introduces Fast Path for SCSI and Fibre
   Channel Controllers along with the existing support of CIPCA
   adapters. The Dedicated CPU Lock Manager supports both the
   LCKMGR_SERVER process and Fast Path devices on the same CPU.
   However, this may not produce optimal performance.

   By default, the LCKMGR_SERVER process runs on the first available
   nonprimary CPU. Compaq recommends that the CPU used by the
   LCKMGR_SERVER process not have any Fast Path devices. This can
   be accomplished in either of the following ways:

   o  You can eliminate the first available nonprimary CPU as an
      available Fast Path CPU. To do so, clear the bit associated
      with the CPU ID from the IO_PREFER_CPUS system parameter.

      For example, suppose your system has eight CPUs with CPU IDs
      from zero to seven and four SCSI adapters that will use Fast
      Path. Clearing bit 1 from IO_PREFER_CPUS would result in the
      four SCSI devices being bound to CPUs 2, 3, 4, and 5. CPU 1,
      which is the default CPU the lock manager will use, will not
      have any Fast Path devices.

   o  You can set the LCKMGR_CPUID system parameter to tell the
      LCKMGR_SERVER process to use a CPU other than the default. For
      the above example, setting this system parameter to 7 would
      result in the LCKMGR_SERVER process running on CPU 7. The Fast
      Path devices would by default be bound to CPUs 1, 2, 3, and 4.
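
   As a sketch of the first approach for the eight-CPU example:
   with all eight bits set the mask is %XFF, and clearing bit 1
   leaves %XFD (binary 11111101). Whether the change can be applied
   to the active system or requires a reboot should be verified for
   your configuration:

   ```
   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET IO_PREFER_CPUS %XFD   ! CPU 1 excluded from Fast Path
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT
   ```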
 

4  Using_on_AlphaServer_GS_Series_Systems
   The new AlphaServer GS Series Systems (GS80, GS160, and the
   GS320) have NUMA memory characteristics. When using the Dedicated
   CPU Lock Manager on one of these systems, the best performance is
   obtained by utilizing a CPU and memory from within a single Quad
   Building Block (QBB).

   For OpenVMS Version 7.3, the Dedicated CPU Lock Manager cannot
   yet decide from which QBB its memory should be allocated.
   However, you can preallocate lock manager memory from the low
   QBB by using the LOCKIDTBL system parameter, which indicates the
   initial size of the Lock ID Table, along with the initial amount
   of memory to preallocate for lock manager data structures.

   To preallocate the proper amount of memory, this system parameter
   should be set to the highest number of locks plus resources
   on the system. The command MONITOR LOCK can provide this
   information. If MONITOR indicates the system has 100,000 locks
   and 50,000 resources, then setting LOCKIDTBL to the sum of these
   two values will ensure that enough memory is initially allocated.
   Adding in some additional overhead may also be beneficial.
   Setting LOCKIDTBL to 200,000 thus might be appropriate.
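
   Continuing the example, the setting can be made permanent in
   MODPARAMS.DAT:

   ```
   ! Add to SYS$SYSTEM:MODPARAMS.DAT, then run AUTOGEN
   ! 100,000 locks + 50,000 resources (from MONITOR LOCK) + headroom
   LOCKIDTBL = 200000
   ```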

   If necessary, use the LCKMGR_CPUID system parameter to ensure
   that the LCKMGR_SERVER runs on a CPU in the low QBB.
 

3  Enterprise_Directory_for_e-Business_(Alpha)
   OpenVMS Enterprise Directory for e-Business is a massively
   scalable directory service, providing both X.500 and LDAPv3
   services on OpenVMS Alpha with no separate license fee. OpenVMS
   Enterprise Directory for e-Business provides the following:

   o  Proven technology: a large percentage of the Fortune 500
      already deploy Compaq X.500 Directory Service (the forerunner
      of OpenVMS Enterprise Directory for e-Business)

   o  World's first 64-bit directory service

   o  Seamlessly combines the scalability and distribution features
      of X.500 with the popularity and interoperability offered by
      LDAPv3

   o  Inherent replication/shadowing features that can be exploited
      to provide very high availability

   o  Systems distributed around the world can be managed from a
      single point

   o  Ability to store all types of authentication and security
      certificates across the enterprise accessible from any
      location

   o  Highly configurable schema

   o  Market-leading performance and low initiation time, delivered
      in combination with AlphaServer technology and an in-memory
      database

   For more detailed information, refer to the Compaq OpenVMS
   e-Business Infrastructure CD-ROM package, which is included in
   the OpenVMS Version 7.3 CD-ROM kit.
 

3  Extended_File_Cache_(Alpha)
   The Extended File Cache (XFC) is a new virtual block data cache
   provided with OpenVMS Alpha Version 7.3 as a replacement for the
   Virtual I/O Cache.

   Similar to the Virtual I/O Cache, the XFC is a clusterwide, file
   system data cache. Both file system data caches are compatible
   and coexist in an OpenVMS Cluster.

   The XFC improves I/O performance with the following features that
   are not available with the Virtual I/O Cache:

   o  Read-ahead caching

   o  Automatic resizing of the cache

   o  Larger maximum cache size

   o  No limit on the number of closed files that can be cached

   o  Control over the maximum size of I/O that can be cached

   o  Control over whether cache memory is static or dynamic

   For more information, refer to the chapter on Managing Data
   Caches in the OpenVMS System Manager's Manual, Volume 2: Tuning,
   Monitoring, and Complex Systems.
 

3  ARB_SUPPORT_Qualifier_Added_to_INSTALL_Utility
   Beginning with OpenVMS Alpha Version 7.3, you can use the /ARB_
   SUPPORT qualifier with the ADD, CREATE, and REPLACE commands in
   the INSTALL utility. The /ARB_SUPPORT qualifier provides Access
   Rights Block (ARB) support to products that have not yet been
   updated to use the per-thread security Persona Security Block
   (PSB) data structure.
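
   For example, you might install an image with ARB support enabled
   as follows. (The image name and the keyword value shown are
   illustrative; refer to the INSTALL documentation for the
   supported /ARB_SUPPORT keywords.)

   $ INSTALL
   INSTALL> ADD SYS$SYSTEM:MYAPP.EXE /ARB_SUPPORT=READ_ONLY
   INSTALL> EXIT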

   This new qualifier is included in the INSTALL utility
   documentation in the OpenVMS System Management Utilities
   Reference Manual.
 

3  MONITOR_Utility
   The MONITOR utility has two new class names, RLOCK and TIMER,
   which you can use as follows:

   o  MONITOR RLOCK displays the dynamic lock remastering statistics
      of a node.

   o  MONITOR TIMER displays Timer Queue Entry (TQE) statistics.

   These enhancements are discussed in more detail in the MONITOR
   section of the OpenVMS System Management Utilities Reference
   Manual and in the appendix that discusses MONITOR record formats
   in that manual.
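
   For example, to display the new classes interactively, enter:

   $ MONITOR RLOCK
   $ MONITOR TIMER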

   Also in the MONITOR utility, the display screens of MONITOR
   CLUSTER, PROCESSES/TOPCPU, and SYSTEM now have new and higher
   scale values. Refer to the OpenVMS System Management Utilities
   Reference Manual: M-Z for more information.
 

3  OpenVMS_Cluster_Systems
   The following OpenVMS Cluster features are discussed in this
   section:

   o  Clusterwide intrusion detection

   o  Fast Path for SCSI and Fibre Channel (Alpha)

   o  Floppy disks served in an OpenVMS Cluster system (Alpha)

   o  New Fibre Channel support (Alpha)

   o  Switched LAN as a cluster interconnect

   o  Warranted and migration support
 

4  Clusterwide_Intrusion_Detection
   OpenVMS Version 7.3 includes clusterwide intrusion detection,
   which extends protection against attacks of all types throughout
   the cluster. Intrusion data and information from each system are
   integrated to protect the cluster as a whole. Member systems
   running versions of OpenVMS prior to Version 7.3 and member
   systems that disable this feature are protected individually
   and do not participate in the clusterwide sharing of intrusion
   information.

   You can modify the SECURITY_POLICY system parameter on the
   member systems in your cluster to maintain either a local or a
   clusterwide intrusion database of unauthorized attempts and the
   state of any intrusion events.

   If bit 7 in SECURITY_POLICY is cleared, all cluster members
   are made aware if a system is under attack or has any intrusion
   events recorded. Events recorded on one system can cause another
   system in the cluster to take restrictive action. (For example,
   the person attempting to log in is monitored more closely and
   limited to a certain number of login retries within a limited
   period of time. Once a person exceeds either the retry or time
   limitation, he or she cannot log in.) The default for bit 7 in
   SECURITY_POLICY is clear.
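
   For example, you might clear bit 7 (decimal value 128) on a
   running system with SYSGEN, as sketched below. (If SECURITY_
   POLICY is not dynamic on your system, use USE CURRENT and WRITE
   CURRENT instead, and then reboot.)

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SHOW SECURITY_POLICY
   SYSGEN> SET SECURITY_POLICY new-value  ! current value with bit 7 (128) cleared
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT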

   For more information on the system services $DELETE_INTRUSION,
   $SCAN_INTRUSION, and $SHOW_INTRUSION, refer to the OpenVMS System
   Services Reference Manual.

   For more information on the DCL commands DELETE/INTRUSION_RECORD
   and SHOW INTRUSION, refer to the OpenVMS DCL Dictionary.

   For more information on clusterwide intrusion detection, refer to
   the OpenVMS Guide to System Security.
 

4  Fast_Path_for_SCSI_and_Fibre_Channel_(Alpha)
   Fast Path for SCSI and Fibre Channel (FC) is a new feature with
   OpenVMS Version 7.3. This feature improves the performance of
   Symmetric Multi-Processing (SMP) machines that use certain SCSI
   ports or FC ports.

   In previous versions of OpenVMS, SCSI and FC I/O completion was
   processed solely by the primary CPU. When Fast Path is enabled,
   the I/O completion processing can occur on all the processors in
   the SMP system. This substantially increases the potential I/O
   throughput on an SMP system, and helps to prevent the primary CPU
   from becoming saturated.

   See FAST_PATH_PORTS for information about the SYSGEN parameter,
   FAST_PATH_PORTS, that has been introduced to control Fast Path
   for SCSI and FC.
 

4  Floppy_Disks_Served
   Until this release, the MSCP server did not support floppy disks.
   Beginning with OpenVMS Version 7.3, MSCP serving of floppy disks
   in an OpenVMS Cluster system is supported.

   For floppy disks to be served in an OpenVMS Cluster system,
   floppy disk names must conform to the naming conventions for
   port allocation class names. For more information about device
   naming with port allocation classes, refer to the OpenVMS Cluster
   Systems manual.

   OpenVMS VAX clients can access floppy disks served from OpenVMS
   Alpha Version 7.3 MSCP servers, but OpenVMS VAX systems cannot
   serve floppy disks. Client systems can be any version that
   supports port allocation classes.
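
   For example, a served floppy disk on a node whose port has port
   allocation class 2 might be named as follows (the class number is
   illustrative):

   $2$DVA0: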
 

4  New_Fibre_Channel_Support_(Alpha)
   Support for new Fibre Channel hardware, larger configurations,
   Fibre Channel Fast Path, and larger I/O operations is included in
   OpenVMS Version 7.3. The benefits include:

   o  Support for a broader range of configurations: the lower-cost
      HSG60 controller supports two SCSI buses, instead of the six
      SCSI buses supported by the HSG80; multiple DSGGB 16-port
      Fibre Channel switches enable very large configurations.

   o  Backup operations to tape, enabled by the new Modular Data
      Router (MDR), using existing SCSI tape subsystems

   o  Distances up to 100 kilometers between systems, enabling
      more configuration choices for multiple-site OpenVMS Cluster
      systems

   o  Better performance for certain types of I/O due to Fibre
      Channel Fast Path and support for larger I/O requests

   The following new Fibre Channel hardware has been qualified on
   OpenVMS Version 7.2-1 and on OpenVMS Version 7.3:

   o  KGPSA-CA host adapter

   o  DSGGB-AA switch (8 ports) and DSGGB-AB switch (16 ports)

   o  HSG60 storage controller (MA6000 storage subsystem)

   o  Compaq Modular Data Router (MDR)

   OpenVMS now supports Fibre Channel fabrics. A Fibre Channel
   fabric is multiple Fibre Channel switches connected together.
   (A Fibre Channel fabric is also known as cascaded switches.)

   Configurations that use Fibre Channel fabrics can be extremely
   large. Distances up to 100 kilometers are supported in a
   multisite OpenVMS Cluster system. OpenVMS supports the Fibre
   Channel SAN configurations described in the Compaq StorageWorks
   Heterogeneous Open SAN Design Reference Guide, available at the
   following Compaq web site:

   http://www.compaq.com/storage

   Enabling Fast Path for Fibre Channel can substantially increase
   the I/O throughput on an SMP system. For more information about
   this new feature, see Fast Path for SCSI and Fibre Channel
   (Alpha).

   Prior to OpenVMS Alpha Version 7.3, I/O requests larger than
   127 blocks were segmented by the Fibre Channel driver into
   multiple I/O requests. Segmented I/O operations generally have
   lower performance than one large I/O. In OpenVMS Version 7.3,
   I/O requests up to and including 256 blocks are done without
   segmenting.

   For more information about Fibre Channel usage in OpenVMS Cluster
   configurations, refer to the Guidelines for OpenVMS Cluster
   Configurations.
 

5  Tape_Support
   Fibre Channel tape functionality refers to the support of SCSI
   tapes and SCSI tape libraries in an OpenVMS Cluster system with
   shared Fibre Channel storage. The SCSI tapes and libraries are
   connected to the Fibre Channel by a Fibre-to-SCSI bridge known as
   the Modular Data Router (MDR).

   For configuration information, refer to the Guidelines for
   OpenVMS Cluster Configurations.
 

4  LANs_as_Cluster_Interconnects
   An OpenVMS Cluster system can use several LAN interconnects for
   node-to-node communication, including Ethernet, Fast Ethernet,
   Gigabit Ethernet, ATM, and FDDI.

   PEDRIVER, the cluster port driver, provides cluster
   communications over LANs using the NISCA protocol. Originally
   designed for broadcast media, PEDRIVER has been redesigned to
   exploit all the advantages offered by switched LANs, including
   full duplex transmission and more complex network topologies.

   Users of LANs for their node-to-node cluster communication will
   derive the following benefits from the redesigned PEDRIVER:

   o  Removal of restrictions for using Fast Ethernet, Gigabit
      Ethernet, and ATM as cluster interconnects

   o  Improved performance due to better path selection, multipath
      load distribution, and support of full duplex communication

   o  Greater scalability

   o  Ability to monitor, manage, and display information needed to
      diagnose problems with cluster use of LAN adapters and paths
 

5  SCA_Control_Program
   The SCA Control Program (SCACP) utility is designed to monitor
   and manage cluster communications. (SCA is the abbreviation
   of Systems Communications Architecture, which defines the
   communications mechanisms that enable nodes in an OpenVMS Cluster
   system to communicate.)

   In OpenVMS Version 7.3, you can use SCACP to manage SCA use
   of LAN paths. In the future, SCACP might be used to monitor
   and manage SCA communications over other OpenVMS Cluster
   interconnects.

   This utility is described in more detail in a new chapter in the
   OpenVMS System Management Utilities Reference Manual: M-Z.
 

5  Packet_Loss_Error
   Prior to OpenVMS Version 7.3, an SCS virtual circuit closure
   was the first indication that a LAN path had become unusable. In
   OpenVMS Version 7.3, whenever the last usable LAN path is losing
   packets at an excessive rate, PEDRIVER displays the following
   console message:

   %PEA0, Excessive packet losses on LAN Path from local-device-name -
    _  to device-name on REMOTE NODE node-name

   This message is displayed after PEDRIVER performs an excessively
   high rate of packet retransmissions on the LAN path consisting of
   the local device, the intervening network, and the device on the
   remote node. The message indicates that the LAN path has degraded
   and is approaching, or has reached, the point where reliable
   communications with the remote node are no longer possible. It is
   likely that the virtual circuit to the remote node will close if
   the losses continue. Furthermore, continued operation with high
   LAN packet losses can result in a significant loss in performance
   because of the communication delays resulting from the packet
   loss detection timeouts and packet retransmission.

   The corrective steps to take are:

   1. Check the local and remote LAN device error counts to see if a
      problem exists on the devices. Issue the following commands on
      each node:

      $ SHOW DEVICE local-device-name
      $ MC SCACP
      SCACP> SHOW LAN device-name
      $ MC LANCP
      LANCP> SHOW DEVICE device-name/COUNT

   2. If device error counts on the local devices are within normal
      bounds, contact your network administrators to request that
      they diagnose the LAN path between the devices.

      If necessary, contact your Compaq support representative for
      assistance in diagnosing your LAN path problems.

   For additional PEDRIVER troubleshooting information, see Appendix
   F of the OpenVMS Cluster Systems manual.
 

4  Warranted_and_Migration_Support
   Compaq provides two levels of support, warranted and migration,
   for mixed-version and mixed-architecture OpenVMS Cluster systems.

   Warranted support means that Compaq has fully qualified the two
   versions coexisting in an OpenVMS Cluster and will answer all
   problems identified by customers using these configurations.

   Migration support is a superset of the Rolling Upgrade support
   provided in earlier releases of OpenVMS and is available for
   mixes that are not warranted. Migration support means that Compaq
   has qualified the versions for use together in configurations
   that are migrating in a staged fashion to a newer version of
   OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted
   against these configurations will be answered by Compaq. However,
   in exceptional cases, Compaq may request that you move to a
   warranted configuration as part of answering the problem.

   Compaq supports only two versions of OpenVMS running in a cluster
   at the same time, regardless of architecture. Migration support
   helps customers move to warranted OpenVMS Cluster version mixes
   with minimal impact on their cluster environments.

   The following table shows the level of support provided for all
   possible version pairings.

   Table 4-2 Warranted and Migration Support

                            Alpha
               Alpha/VAX    V7.2-xxx/
               V7.3         VAX V7.2    Alpha/VAX V7.1

   Alpha/VAX   WARRANTED    Migration   Migration
   V7.3
   Alpha       Migration    WARRANTED   Migration
   V7.2-xxx/
   VAX V7.2
   Alpha/VAX   Migration    Migration   WARRANTED
   V7.1

   In a mixed-version cluster with OpenVMS Version 7.3, you must
   install remedial kits on earlier versions of OpenVMS. For OpenVMS
   Version 7.3, two new features, XFC and Volume Shadowing minicopy,
   cannot be run on any node in a mixed version cluster unless all
   nodes running earlier versions of OpenVMS have installed the
   required remedial kit or upgrade. Remedial kits are available
   now for XFC. An upgrade for systems running OpenVMS Version 7.2-
   xx that supports minicopy will be made available soon after the
   release of OpenVMS Version 7.3.

   For a complete list of required remedial kits, refer to the
   OpenVMS Version 7.3 Release Notes.
 

3  SMP_Performance_Improvements_(Alpha)
   OpenVMS Alpha Version 7.3 contains software changes that improve
   SMP scaling. Although these changes were designed for
   applications running on the new AlphaServer GS-series systems,
   many of the improvements benefit all customer applications. The
   OpenVMS SMP performance improvements in Version 7.3 include the
   following:

   o  Improved MUTEX Acquisition

      Mutexes are used for synchronization of numerous events on
      OpenVMS. The most common use of a mutex is for synchronization
      of the logical name database and the I/O database. In releases
      prior to OpenVMS Alpha Version 7.3, a mutex was manipulated
      with the SCHED spinlock held. Because the SCHED spinlock is
      heavily used and subject to high contention on large SMP
      systems, and because only a single CPU could manipulate a
      mutex at a time, bottlenecks often occurred.

      OpenVMS Alpha Version 7.3 changes the way mutexes are
      manipulated. The mutex itself is now manipulated with atomic
      instructions, so multiple CPUs can manipulate different
      mutexes in parallel. In most cases, the need to acquire the
      SCHED spinlock has been avoided; SCHED must still be acquired
      only when a process must be placed into a mutex wait state or
      when mutex waiters must be awakened.

   o  Improved Process Scheduling

      Changes made to the OpenVMS process scheduler reduce
      contention on the SCHED spinlock. Prior to OpenVMS Version
      7.3, when a process became computable, the scheduler released
      all IDLE CPUs to attempt to execute the process. On NUMA
      systems, all idle CPUs in the RAD were released. These idle
      CPUs competed for the SCHED spinlock, which added to the
      contention on the SCHED spinlock. As of OpenVMS Version 7.3,
      the scheduler only releases a single CPU. In addition, the
      scheduler releases high numbered CPUs first. This has the
      effect of avoiding scheduling processes on the primary CPU
      when possible.

      To use the modified scheduler, users must set the system
      parameter SCH_CTLFLAGS to 1. This parameter is dynamic.

   o  Improved SYS$RESCHED

      A number of applications and libraries use the SYS$RESCHED
      system service, which requests a CPU to reschedule another
      process. In releases prior to OpenVMS Version 7.3, this
      system service would lock the SCHED spinlock and attempt to
      reschedule another computable process on the CPU.

      Prior to OpenVMS Version 7.3, when heavy contention existed
      on the SCHED spinlock, using the SYS$RESCHED system service
      increased resource contention. As of OpenVMS Version 7.3, the
      SYS$RESCHED system service attempts to acquire the SCHED
      spinlock with a NOSPIN routine. Thus, if the SCHED spinlock
      is currently locked, this thread does not spin; it returns
      immediately to the caller.

    o  Lock Manager improvements

      There are several changes to the lock manager. For OpenVMS
      Clusters, the lock manager no longer uses IOLOCK8 for
      synchronization. It now uses the LCKMGR spinlock, which allows
      locking and I/O operations to occur in parallel.

      Remastering operations can now be performed much faster. When
      remastering, the lock manager sends large messages containing
      data from many locks, as opposed to sending a single lock per
      message.

      The lock manager supports a Dedicated CPU mode. In cases
      where there is very heavy contention on the LCKMGR spinlock,
      dedicating a single CPU to performing locking operations
      provides a much more efficient mechanism.

   o  Enhanced Spinlock Tracing capability

      The spinlock trace capability, which first shipped in V7.2-
      1H1, can now trace forklocks. In systems with heavy contention
      on the IOLOCK8 spinlock, much of the contention occurs in fork
      threads. Collecting traditional spinlock data only indicates
      that the fork dispatcher locked IOLOCK8.

      As of OpenVMS Version 7.3, the spinlock trace has a hook in
      the fork dispatcher code. This allows the trace to report
      the routine that is called by the fork dispatcher, which
      indicates the specific devices that contribute to heavy
      IOLOCK8 contention.

   o  Mailbox driver change

      Prior to OpenVMS Version 7.3, the mailbox driver FDT routines
      called a routine that locked the MAILBOX spinlock and
      delivered any required attention ASTs. In most cases, no
      attention ASTs needed to be delivered. Because the OpenVMS
      code that makes these calls already holds the MAILBOX
      spinlock, the call also performed an unneeded second
      acquisition of the spinlock.

      As of OpenVMS Version 7.3, the system first checks whether
      any attention ASTs may need to be delivered before calling
      the routine. This avoids both the call overhead and the
      overhead of reacquiring a MAILBOX spinlock that is already
      owned.
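
   As noted earlier in this section, the modified process scheduler
   must be enabled explicitly. Because SCH_CTLFLAGS is dynamic, you
   might enable it on a running system with SYSGEN, for example:

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET SCH_CTLFLAGS 1
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT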
 

3  SYSMAN_Commands_and_Qualifiers
   The SYSMAN utility has the following new commands:

   o  CLASS_SCHEDULE commands

      The class scheduler provides the ability to limit the amount
      of CPU time that a system's users receive by placing users in
      scheduling classes.

      Command                 Description

      CLASS_SCHEDULE ADD      Creates a new scheduling class
      CLASS_SCHEDULE DELETE   Deletes a scheduling class
      CLASS_SCHEDULE MODIFY   Modifies the characteristics of a
                              scheduling class
      CLASS_SCHEDULE RESUME   Resumes a scheduling class that has
                              been suspended
      CLASS_SCHEDULE SHOW     Displays the characteristics of a
                              scheduling class
      CLASS_SCHEDULE SUSPEND  Temporarily suspends a scheduling
                              class

   o  IO FIND_WWID and IO_REPLACE_WWID (Alpha-only)

      These commands support Fibre Channel tapes, which are
      discussed in Tape Support.

      Command                 Description

      IO FIND_WWID            Detects all previously undiscovered
                              tapes and medium changers
      IO REPLACE_WWID         Replaces one worldwide identifier
                              (WWID) with another

   o  POWER_OFF qualifier for SYSMAN command SHUTDOWN NODE

      The /POWER_OFF qualifier specifies that the system is to power
      off after shutdown is complete.
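
   For example, you might create and inspect a scheduling class as
   sketched below. (The class name, user name, and /CPULIMIT value
   are illustrative assumptions; refer to the SYSMAN section of the
   reference manual for the exact qualifier syntax.)

   $ RUN SYS$SYSTEM:SYSMAN
   SYSMAN> CLASS_SCHEDULE ADD MYCLASS /CPULIMIT=(PRIMEDAYS,20) /USERNAME=JONES
   SYSMAN> CLASS_SCHEDULE SHOW MYCLASS
   SYSMAN> EXIT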

   For more information, refer to the SYSMAN section of the OpenVMS
   System Management Utilities Reference Manual: M-Z.
 

3  New_System_Parameters
   This section contains definitions of system parameters that are
   new in OpenVMS Version 7.3.
 

4  AUTO_DLIGHT_SAV
   AUTO_DLIGHT_SAV is set to either 1 or 0. The default is 0.

   If AUTO_DLIGHT_SAV is set to 1, OpenVMS automatically makes the
   change to and from daylight saving time.
 

4  FAST_PATH_PORTS
   FAST_PATH_PORTS is a static parameter that deactivates Fast Path
   for specific drivers.

   FAST_PATH_PORTS is a 32-bit mask. If the value of a bit in the
   mask is 1, Fast Path is disabled for the driver corresponding to
   that bit. A value of -1 specifies that Fast Path is disabled for
   all drivers that the FAST_PATH_PORTS parameter controls.

   Bit position zero controls Fast Path for PKQDRIVER (for parallel
   SCSI), and bit position one controls Fast Path for FGEDRIVER
   (for Fibre Channel). Currently, the default setting for FAST_
   PATH_PORTS is 0, which means that Fast Path is enabled for both
   PKQDRIVER and FGEDRIVER.

   In addition, note the following:

   o  CI drivers are not controlled by FAST_PATH_PORTS. Fast Path
      for CI is enabled and disabled exclusively by the FAST_PATH
      system parameter.

   o  FAST_PATH_PORTS is relevant only if the FAST_PATH system
      parameter is enabled (equal to 1). Setting FAST_PATH to zero
      has the same effect as setting FAST_PATH_PORTS to -1.
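
   For example, to disable Fast Path for parallel SCSI (bit 0) while
   leaving it enabled for Fibre Channel, you might set the parameter
   as follows. (Because FAST_PATH_PORTS is static, a reboot is
   required for the change to take effect.)

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE CURRENT
   SYSGEN> SET FAST_PATH_PORTS 1  ! bit 0 set: Fast Path off for PKQDRIVER
   SYSGEN> WRITE CURRENT
   SYSGEN> EXIT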

   For additional information, see FAST_PATH and IO_PREFER_CPUS.
 

4  GLX_SHM_REG
   On Galaxy systems, GLX_SHM_REG is the number of shared memory
   region structures configured into the Galaxy Management Database
   (GMDB). If you set GLX_SHM_REG to 0, the default number of shared
   memory regions are configured.
 

4  LCKMGR_CPUID_(Alpha)
   The LCKMGR_CPUID parameter controls the CPU that the Dedicated
   CPU Lock Manager runs on. This is the CPU that the LCKMGR_SERVER
   process will utilize if you turn this feature on with the LCKMGR_
   MODE system parameter.

   If the specified CPU ID is either the primary CPU or a
   nonexistent CPU, the LCKMGR_SERVER process will utilize the
   lowest nonprimary CPU.

   LCKMGR_CPUID is a DYNAMIC parameter.

   For more information, see the LCKMGR_MODE system parameter.
 

4  LCKMGR_MODE_(Alpha)
   The LCKMGR_MODE parameter controls usage of the Dedicated CPU
   Lock Manager. Setting LCKMGR_MODE to a number greater than zero
   (0) indicates the number of CPUs that must be active before the
   Dedicated CPU Lock Manager is turned on.

   The Dedicated CPU Lock Manager performs all locking operations
   on a single dedicated CPU. This can improve system performance
   on large SMP systems with high MP_Synch associated with the lock
   manager.

   For more information about usage of the Dedicated CPU Lock
   Manager, see the OpenVMS Performance Management manual.

   Specify one of the following:

   Value    Description

   0        Indicates the Dedicated CPU Lock Manager is off. (The
            default.)
   >0       Indicates the number of CPUs that must be active before
            the Dedicated CPU Lock Manager is turned on.

   LCKMGR_MODE is a DYNAMIC parameter.
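
   For example, to turn on the Dedicated CPU Lock Manager when at
   least eight CPUs are active, and to direct the LCKMGR_SERVER
   process to CPU 5, you might enter the following. (The values
   shown are illustrative; both parameters are dynamic.)

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET LCKMGR_CPUID 5
   SYSGEN> SET LCKMGR_MODE 8
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT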
 

4  NPAGECALC
   NPAGECALC controls whether the system automatically calculates
   the initial size for nonpaged dynamic memory.

   Compaq sets the default value of NPAGECALC to 1 only during the
   initial boot after an installation or upgrade. When the value of
   NPAGECALC is 1, the system calculates an initial value for the
   NPAGEVIR and NPAGEDYN system parameters. This calculated value is
   based on the amount of physical memory in the system.

   NPAGECALC's calculations do not reduce the values of NPAGEVIR and
   NPAGEDYN from the values you see or set at the SYSBOOT prompt.
   However, NPAGECALC's calculation might increase these values.

   AUTOGEN sets NPAGECALC to 0. NPAGECALC should always remain 0
   after AUTOGEN has determined more refined values for the NPAGEDYN
   and NPAGEVIR system parameters.
 

4  NPAGERAD_(Alpha)
   NPAGERAD specifies the total number of bytes of nonpaged pool
   that will be allocated for Resource Affinity Domains (RADs) other
   than the base RAD. For platforms that have no RADs, NPAGERAD
   is ignored. Notice that NPAGEDYN specifies the total amount of
   nonpaged pool for all RADs.

   Also notice that the OpenVMS system might round the specified
   values higher to an even number of pages for each RAD, which
   prevents the base RAD from having too little nonpaged pool. For
   example, if the hardware is an AlphaServer GS160 with 4 RADs:

   NPAGEDYN = 6291456 bytes
   NPAGERAD = 2097152 bytes

   In this case, the OpenVMS system allocates a total of
   approximately 6,291,456 bytes of nonpaged pool. Of this amount,
   the system divides 2,097,152 bytes among the RADs that are not
   the base RAD. The system then assigns the remaining 4,194,304
   bytes to the base RAD.
 

4  RAD_SUPPORT_(Alpha)
   RAD_SUPPORT enables RAD-aware code to be executed on systems
   that support Resource Affinity Domains (RADs); for example,
   AlphaServer GS160 systems.

   A RAD is a set of hardware components (CPUs, memory, and I/O)
   with common access characteristics. For more information
   about using OpenVMS RAD features, refer to the OpenVMS Alpha
   Partitioning and Galaxy Guide.
 

4  SHADOW_MAX_UNIT
   SHADOW_MAX_UNIT specifies the maximum number of shadow sets that
   can exist on a node. The setting must be equal to or greater
   than the number of shadow sets you plan to have on a system.
   Dismounted shadow sets, unused shadow sets, and shadow sets with
   no write bitmaps allocated to them are included in the total.

   This system parameter is not dynamic; that is, a reboot is
   required when you change the setting.

   The default setting on OpenVMS Alpha systems is 500; on OpenVMS
   VAX systems, the default is 100. The minimum value is 10, and the
   maximum value is 10,000.

   Note that this parameter does not affect the naming of shadow
   sets. For example, with the default value of 100, a device name
   such as DSA999 is still valid.
 

4  VCC_MAX_IO_SIZE_(Alpha)
   The dynamic system parameter VCC_MAX_IO_SIZE controls the maximum
   size of I/O that can be cached by the Extended File Cache. It
   specifies the size in blocks. By default, the size is 127 blocks.

   Changing the value of VCC_MAX_IO_SIZE affects reads and writes to
   volumes currently mounted on the local node, as well as reads and
   writes to volumes mounted in the future.

   If VCC_MAX_IO_SIZE is 0, the Extended File Cache on the local
   node cannot cache any reads or writes. However, the system is
   not prevented from reserving memory for the Extended File Cache
   during startup if a VCC$MIN_CACHE_SIZE entry is in the reserved
   memory registry.

   VCC_MAX_IO_SIZE is a DYNAMIC parameter.
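
   For example, to allow the Extended File Cache on the local node
   to cache I/Os of up to 256 blocks, you might enter:

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET VCC_MAX_IO_SIZE 256
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT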
 

4  VCC_READAHEAD_(Alpha)
   The dynamic system parameter VCC_READAHEAD controls whether
   the Extended File Cache can use read-ahead caching. Read-
   ahead caching is a technique that improves the performance of
   applications that read data sequentially.

   By default VCC_READAHEAD is 1, which means that the Extended File
   Cache can use read-ahead caching. The Extended File Cache detects
   when a file is being read sequentially in equal-sized I/Os, and
   fetches data ahead of the current read, so that the next read
   instruction can be satisfied from cache.

   To stop the Extended File Cache from using read-ahead caching,
   set VCC_READAHEAD to 0.

   Changing the value of VCC_READAHEAD affects volumes currently
   mounted on the local node, as well as volumes mounted in the
   future.

   Read-ahead I/Os are completely asynchronous with respect to user
   I/Os and take place only if sufficient system resources are
   available.

   VCC_READAHEAD is a DYNAMIC parameter.
 

4  WBM_MSG_INT
   WBM_MSG_INT is one of three system parameters that are available
   for managing the update traffic between a master write bitmap
   and its corresponding local write bitmaps in an OpenVMS Cluster
   system. (Write bitmaps are used by the volume shadowing software
   for minicopy operations.) The others are WBM_MSG_UPPER and
   WBM_MSG_LOWER. These parameters set the interval at which the
   frequency of sending messages is tested and also set an upper and
   lower threshold that determine whether the messages are grouped
   into one SCS message or are sent one by one.

   In single-message mode, WBM_MSG_INT is the time interval in
   milliseconds between assessments of the most suitable write
   bitmap message mode. In single-message mode, the writes issued by
   each remote node are, by default, sent one by one in individual
   SCS messages to the node with the master write bitmap. If the
   writes sent by a remote node reach an upper threshold of messages
   during a specified interval, single-message mode switches to
   buffered-message mode.

   In buffered-message mode, WBM_MSG_INT is the maximum time a
   message waits before it is sent. In buffered-message mode, the
   messages are collected for a specified interval and then sent
   in one SCS message. During periods of increased message traffic,
   grouping multiple messages to send in one SCS message to the
   master write bitmap is generally more efficient than sending each
   message separately.

   The minimum value of WBM_MSG_INT is 10 milliseconds. The maximum
   value is -1, which corresponds to the maximum positive value that
   a longword can represent. The default is 10 milliseconds.

   WBM_MSG_INT is a DYNAMIC parameter.
 

4  WBM_MSG_LOWER
   WBM_MSG_LOWER is one of three system parameters that are
   available for managing the update traffic between a master write
   bitmap and its corresponding local write bitmaps in an OpenVMS
   Cluster system. (Write bitmaps are used by the volume shadowing
   software for minicopy operations.) The others are WBM_MSG_INT
   and WBM_MSG_UPPER. These parameters set the interval at which the
   frequency of sending messages is tested and also set an upper and
   lower threshold that determine whether the messages are grouped
   into one SCS message or are sent one by one.

   WBM_MSG_LOWER is the lower threshold for the number of messages
   sent during the test interval that initiates single-message mode.
   In single-message mode, the writes issued by each remote node
   are, by default, sent one by one in individual SCS messages
   to the node with the master write bitmap. If the writes sent
   by a remote node reach an upper threshold of messages during a
   specified interval, single-message mode switches to buffered-
   message mode.

   The minimum value of WBM_MSG_LOWER is 0 messages per interval.
   The maximum value is -1, which corresponds to the maximum
   positive value that a longword can represent. The default is
   10.

   WBM_MSG_LOWER is a DYNAMIC parameter.
 

4  WBM_MSG_UPPER
   WBM_MSG_UPPER is one of three system parameters that are
   available for managing the update traffic between a master write
   bitmap and its corresponding local write bitmaps in an OpenVMS
   Cluster system. (Write bitmaps are used by the volume shadowing
   software for minicopy operations.) The others are WBM_MSG_INT
   and WBM_MSG_LOWER. These parameters set the interval at which the
   frequency of sending messages is tested and also set an upper and
   lower threshold that determine whether the messages are grouped
   into one SCS message or are sent one by one.

   WBM_MSG_UPPER is the upper threshold for the number of messages
   sent during the test interval that initiates buffered-message
   mode. In buffered-message mode, the messages are collected for a
   specified interval and then sent in one SCS message.

   The minimum value of WBM_MSG_UPPER is 0 messages per interval.
   The maximum value is -1, which corresponds to the maximum
   positive value that a longword can represent. The default is
   100.

   WBM_MSG_UPPER is a DYNAMIC parameter.
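
   Taken together, the three WBM_MSG parameters define a simple
   hysteresis rule for the SCS message mode. The following C sketch
   captures that rule; the function and type names are illustrative
   only, not OpenVMS code:

   ```c
   #include <assert.h>

   enum wbm_mode { SINGLE_MESSAGE, BUFFERED_MESSAGE };

   /* Every WBM_MSG_INT milliseconds the per-interval message count is
      tested: reaching the upper threshold (WBM_MSG_UPPER) selects
      buffered-message mode, and dropping to the lower threshold
      (WBM_MSG_LOWER) selects single-message mode. */
   enum wbm_mode next_mode(enum wbm_mode current, unsigned msgs,
                           unsigned lower, unsigned upper)
   {
       if (current == SINGLE_MESSAGE && msgs >= upper)
           return BUFFERED_MESSAGE;
       if (current == BUFFERED_MESSAGE && msgs <= lower)
           return SINGLE_MESSAGE;
       return current;   /* between thresholds: no mode change */
   }

   int main(void)
   {
       /* Defaults: lower = 10, upper = 100 messages per interval. */
       assert(next_mode(SINGLE_MESSAGE, 150, 10, 100) == BUFFERED_MESSAGE);
       assert(next_mode(BUFFERED_MESSAGE, 50, 10, 100) == BUFFERED_MESSAGE);
       assert(next_mode(BUFFERED_MESSAGE, 5, 10, 100) == SINGLE_MESSAGE);
       return 0;
   }
   ```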
 

4  WBM_OPCOM_LVL
   WBM_OPCOM_LVL controls whether write bitmap system messages are
   sent to the operator console. (Write bitmaps are used by the
   volume shadowing software for minicopy operations.) Possible
   values are shown in the following table:

   Value Description

   0     Messages are turned off.
   1     The default; messages are provided when write bitmaps are
         started, deleted, and renamed, and when the SCS message
         mode (buffered or single) changes.
   2     All messages for a setting of 1 are provided plus many
         more.

   WBM_OPCOM_LVL is a DYNAMIC parameter.
 

3  Volume_Shadowing_for_OpenVMS
   Volume Shadowing for OpenVMS introduces three new features: the
   minicopy operation enabled by write bitmaps; new qualifiers that
   provide disaster-tolerant support for OpenVMS Cluster systems; and
   a new /SHADOW qualifier to the INITIALIZE command. These features
   are described in this section.
 

4  Minicopy_in_Compaq_Volume_Shadowing_(Alpha)
   This new minicopy feature of Compaq Volume Shadowing for OpenVMS
   and its enabling technology, write bitmaps, are fully implemented
   on OpenVMS Alpha systems. OpenVMS VAX nodes can write to shadow
   sets that use this feature, but they can neither create master
   write bitmaps nor manage them with DCL commands. The minicopy
   operation is a streamlined copy operation. Minicopy is designed
   to be used in place of a copy operation when you return a shadow
   set member to the shadow set. When a member has been removed from
   a shadow set, a write bitmap tracks the changes that are made to
   the shadow set in its absence, as shown in Application Writes to
   a Write Bitmap.

   When the member is returned to the shadow set, the write bitmap
   is used to direct the minicopy operation, as shown in Member
   Returned to the Shadow Set (Virtual Unit). While the minicopy
   operation is taking place, the application continues to read and
   write to the shadow set.


   Thus, minicopy can significantly decrease the time it takes
   to return the member to membership in the shadow set and can
   significantly increase the availability of the shadow sets that
   use this feature.

   Typically, a shadow set member is removed from a shadow set to
   back up the data on the disk. Before the introduction of the
   minicopy feature, Compaq required that the virtual unit (the
   shadow set) be dismounted to back up the data from one of the
   members. This requirement has been removed, provided that the
   guidelines for removing a shadow set member for backup purposes,
   as documented in Volume Shadowing for OpenVMS, are followed.

   For more information about this new feature, including additional
   memory requirements for this version of Compaq Volume Shadowing
   for OpenVMS, refer to Volume Shadowing for OpenVMS.
 

4  Multiple-Site_OpenVMS_Cluster_Systems
   OpenVMS Version 7.3 introduces new command qualifiers for the
   DCL commands DISMOUNT and SET for use with Volume Shadowing for
   OpenVMS. These new command qualifiers provide disaster tolerant
   support for multiple-site OpenVMS Cluster systems. Designed
   primarily for multiple-site clusters that use Fibre Channel for
   a site-to-site storage interconnect, they can be used in other
   configurations as well. For more information about using these
   new qualifiers in a multiple-site OpenVMS Cluster system, see the
   white paper Using Fibre Channel in a Disaster-Tolerant OpenVMS
   Cluster System, which is posted on the OpenVMS Fibre Channel web
   site at:

   http://www.openvms.compaq.com/openvms/fibre/

   The new command qualifiers are described in this section. Using
   DISMOUNT and SET Qualifiers describes how to use these new
   qualifiers.

   DISMOUNT/FORCE_REMOVAL ddcu:

   One new qualifier to the DISMOUNT command, DISMOUNT/FORCE_REMOVAL
   ddcu:, is provided. If connectivity to a device has been lost and
   the shadow set is in mount verification, /FORCE_REMOVAL ddcu: can
   be used to immediately expel a named shadow set member (ddcu:)
   from the shadow set. If you omit this qualifier, the device is
   not dismounted until mount verification completes. Note that this
   qualifier cannot be used in conjunction with the /POLICY=MINICOPY
   (=OPTIONAL) qualifier.

   The device specified must be a member of a shadow set that is
   mounted on the node where the command is issued.

   SET DEVICE

   The following new qualifiers to the SET DEVICE command have
   been created for managing shadow set members located at multiple
   sites:

   o  /FORCE_REMOVAL ddcu:

      If connectivity to a device has been lost and the shadow set
      is in mount verification, this qualifier causes the member to
      be expelled from the shadow set immediately.

      If the shadow set is not currently in mount verification, no
      immediate action is taken. If connectivity to a device has
      been lost but the shadow set is not in mount verification,
      this qualifier lets you flag the member to be expelled from
      the shadow set, as soon as it does enter mount verification.

      The device specified must be a member of a shadow set that is
      mounted on the node where the command is issued.

   o  /MEMBER_TIMEOUT=xxxxxx ddcu:

      Specifies the timeout value to be used for a member of a
      shadow set.

      The value supplied by this qualifier overrides the SYSGEN
      parameter SHADOW_MBR_TMO for this specific device. Each member
      of a shadow set can be assigned a different MEMBER_TIMEOUT
      value.

      The valid range for xxxxxx is 1 to 16,777,215 seconds.

      The device specified must be a member of a shadow set that is
      mounted on the node where the command is issued.

   o  /MVTIMEOUT=yyyyyy DSAnnnn:

      Specifies the mount verification timeout value to be used for
      this shadow set, specified by its virtual unit name, DSAnnnn.

      The value supplied by this qualifier overrides the SYSGEN
      parameter MVTIMEOUT for this specific shadow set.

      The valid range for yyyyyy is 1 to 16,777,215 seconds.

      The device specified must be a shadow set that is mounted on
      the node where the command is issued.

   o  /READ_COST=zzz ddcu:

      The valid range for zzz is 1 to 4,294,967,295 units.

      The device specified must be a member of a shadow set that is
      mounted on the node where the command is issued.

      This qualifier allows you to modify the default "cost"
      assigned to each member of a shadow set, so that reads are
      biased or prioritized toward one member versus another.

      The shadowing driver assigns default READ_COST values to
      shadow set members when each member is initially mounted.
      The default value depends on the device type, and its
       configuration relative to the system mounting it. There are
       default values for a DECram device; a directly connected
       device in the same physical location; a directly connected
       device in a remote location; a DECram served device; and a
       default value for other served devices.

      The value supplied by this qualifier overrides the default
      assignment. The shadowing driver adds the value of the current
      queue depth of the shadow set member to the READ_COST value
      and then reads from the member with the lowest value.

      Different systems in the cluster can assign different costs to
      each shadow set member.

      If the /SITE command qualifier has been specified, the
      shadowing driver will take site values into account when it
      assigns default READ_COST values. Note that in order for the
      shadowing software to determine if a device is in the category
      of "directly connected device in a remote location," the /SITE
      command qualifier must have been applied to both the shadow
      set and to the individual device.

      Reads requested for a shadow set from a system at Site 1 are
      performed from a shadow set member that is also at Site 1.
      Reads requested for the same shadow set from Site 2 can read
      from the member located at Site 2.

   o  /READ_COST=y DSAnnnn

      The valid range for y is any non-zero number. The value
      supplied has no meaning in itself. The purpose of this
      qualifier is to switch the read cost setting for all shadow
      set members back to the default read cost settings established
      automatically by the shadowing software. DSAnnnn must be a
      shadow set that is mounted on the node from which this command
      is issued.

   o  /SITE=(nnn, logical_name) (ddcu: DSAnnnn:)

      This qualifier indicates to the shadowing driver the site
      location of the shadow set member or of the shadow set
      (represented by its virtual unit name). Prior to using
      this qualifier, you can define the site location in the
      SYLOGICALS.COM command procedure to simplify its use.

      The valid range for nnn is 1 through 255.

      The following example shows the site locations defined,
      followed by the use of the /SITE qualifier:

      $ DEFINE/SYSTEM/EXEC ZKO 1
      $ DEFINE/SYSTEM/EXEC LKG 2
      $!
      $! At the ZKO site ...
      $ MOUNT/SYSTEM DSA0/SHAD=($1$DGA0:,$1$DGA1:) TEST
      $ SET DEVICE/SITE=ZKO  DSA0:
      $!
      $! At the LKG site ...
       $ MOUNT/SYSTEM DSA0/SHAD=($1$DGA0:,$1$DGA1:) TEST
      $ SET DEVICE/SITE=LKG  DSA0:
      $!
      $! At both sites, the following would be used:
      $ SET DEVICE/SITE=ZKO  $1$DGA0:
      $ SET DEVICE/SITE=LKG  $1$DGA1:

   o  /COPY_SOURCE (ddcu:,DSAnnnn:)

      Controls whether one or both source members of a shadow
      set are used as the source for read data during full copy
      operations, when a third member is added to the shadow
      set. This only affects copy operations that do not use DCD
      operations.

      HSG80 controllers have a read-ahead cache, which significantly
      improves single-disk read performance. Copy operations
      normally alternate reads between the two source members, which
      effectively nullifies the benefits of the read-ahead cache.
      This qualifier lets you force all reads from a single source
      member for a copy operation.

      If the shadow set is specified, then all reads for full copy
      operations will be performed from whichever disk is the
      current "master" member, regardless of physical location of
      the disk.

      If a member of the shadow set is specified, then that member
      will be used as the source of all copy operations. This allows
      you to choose a local source member, rather than a remote
      master member.

   o  /ABORT_VIRTUAL_UNIT DSAnnnn:

      To use this qualifier, the shadow set must be in mount
      verification. When you specify this qualifier, the shadow
      set aborts mount verification immediately on the node from
      which the qualifier is issued. This qualifier is intended to
      be used when it is known that the unit cannot be recovered.
      Note that after this command completes, the shadow set must
      still be dismounted. Use the following command to dismount the
      shadow set:

      DISMOUNT/ABORT   DSAnnnn
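
   The read-source selection rule described for the /READ_COST
   qualifier can be sketched in C as follows; the structure and
   function names are illustrative only, not the shadowing driver's
   actual code:

   ```c
   #include <assert.h>

   struct member {
       int read_cost;    /* cost assigned by default or SET DEVICE/READ_COST */
       int queue_depth;  /* current I/O queue depth on this member */
   };

   /* The shadowing driver is described as adding the current queue
      depth of each member to its READ_COST value and reading from the
      member with the lowest sum. */
   int pick_read_member(const struct member *m, int count)
   {
       int best = 0;
       for (int i = 1; i < count; i++)
           if (m[i].read_cost + m[i].queue_depth <
               m[best].read_cost + m[best].queue_depth)
               best = i;
       return best;
   }

   int main(void)
   {
       /* The local member is cheap (cost 2) but busy; the remote
          member is more expensive (cost 10) but idle. */
       struct member set[2] = { { 2, 20 }, { 10, 1 } };
       assert(pick_read_member(set, 2) == 1);   /* 22 vs 11: remote wins */
       return 0;
   }
   ```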
 

5  Using_DISMOUNT_and_SET_Qualifiers
   The diagram in this section depicts a typical multiple-site
   cluster using Fibre Channel. It is used to illustrate the steps
   which must be taken to manually recover one site when the site-
   to-site storage interconnect fails. Note that with current Fibre
   Channel support, neither site can use the MSCP server to regain a
   path to the DGA devices.

   To prevent the shadowing driver from automatically recovering
   shadow sets from connection-related failures, three steps must be
   taken prior to any failure:

   1. Every device that is a member of a multiple-site shadow set
      must have its member_timeout setting raised to a high value,
      using the following command:

      $ SET DEVICE /MEMBER_TIMEOUT= x  ddcu:

      This command will override the SHADOW_MBR_TMO value, which
      would normally be used for a shadow set member. A value for x
      of 259200 would be a seventy-two hour wait time.

   2. Every shadow set that spans multiple sites must have its mount
      verification timeout setting raised to a very high value,
      higher than the MEMBER_TIMEOUT settings for each member of the
      shadow set.

      Use the following command to increase the mount verification
      timeout setting for the shadow set:

      $ SET DEVICE /MVTIMEOUT = y  DSAnnnn

      The y value of this command should always be greater than the
      x value of the SET DEVICE/MEMBER_TIMEOUT=x ddcu: command.

      The $ SET DEVICE /MVTIMEOUT = y command will override the
      MVTIMEOUT value, which would normally be used for the shadow
      set. A value for y of 262800 would be a seventy-three hour
      wait.

   3. Every shadow set and every shadow set member must have a site
      qualifier. As already noted, a site qualifier will ensure that
      the read cost is correctly set. The other critical factor is
      three-member shadow sets. When they are being used, the site
      qualifier will ensure that the master member of the shadow set
      will be properly maintained.

   In the following diagram, shadow set DSA42 is made up of
   $1$DGA1000 and $1$DGA2000.

            <><><><><><><><><><><>  LAN   <><><><><><><><><><><>
            Site A                                Site B
               |                                     |
            F.C. SWITCH  <><><><> XYZZY <><><><>  F.C. SWITCH
               |                                     |
            HSG80 <><> HSG80                      HSG80 <><> HSG80
               |                                     |
            $1$DGA1000  --------- DSA42 --------- $1$DGA2000

   This diagram illustrates that systems at Site A or Site B have
   direct access to all devices at both sites via Fibre Channel
   connections. XYZZY is a theoretical point between the two sites.
   If the Fibre Channel connection were to break at this point,
   each site could access different "local" members of DSA42 without
   error. For the purpose of this example, Site A will be the sole
   site chosen to retain access to the shadow set.

   The following actions must be taken to recover the shadow set at
   Site A.

   On Site A:

   $ DISMOUNT/FORCE_REMOVAL $1$DGA2000:

   Once the command has completed, the shadow set will be available
   for use only at site A.

   On Site B:

   $ SET DEVICE /ABORT_VIRTUAL_UNIT DSA42:

   Once the command completes, the shadow set status will be
   MntVerifyTimeout.

   Next, issue the following command to free up the shadow set:

   $ DISMOUNT/ABORT DSA42:

   These steps must be taken for all affected multiple-site shadow
   sets.
 

4  Using_INITIALIZE_With_SHADOW_and_ERASE_Qualifiers
   The new /SHADOW qualifier to the DCL INITIALIZE command is
   available. The use of the INITIALIZE /SHADOW command to
   initialize multiple members of a future shadow set eliminates
   the requirement for a full copy operation when you later create a
   shadow set.

   Compaq strongly recommends that you also specify the /ERASE
   qualifier with the INITIALIZE/SHADOW command when initializing
   multiple members of a future shadow set. Whereas the /SHADOW
   qualifier eliminates the need for a full copy operation when
   you later create a shadow set, the /ERASE qualifier reduces the
   amount of time a full merge will take.

   If you omit the /ERASE qualifier, and a merge operation of the
   shadow set is subsequently required (because a system on which
   the shadow set is mounted fails), the resulting merge operation
   will take much longer to complete.

   The INITIALIZE command with the /SHADOW and /ERASE qualifiers
   performs the following operations:

   o  Formats up to six devices with one command, so that any three
      can be subsequently mounted together as members of a new host-
      based shadow set.

   o  Writes a label on each volume.

   o  Deletes all information from the devices except for the system
      files containing identical file structure information. All
      former contents of the disks are lost.

   You can then mount up to three of the devices that you have
   initialized in this way as members of a new host-based shadow
   set.

   For more information, refer to Volume Shadowing for OpenVMS.
 

2  Programming_Features
   This topic describes new features of interest to application and
   system programmers.
 

3  3D_Graphics_Support
   The PowerStorm 300 (PBXGD-AD) and PowerStorm 350 (PBXGD-AE)
   graphics cards are now supported on Alpha-based systems. The
   OpenGL 3D graphics API is now provided as part of the OpenVMS
   base operating system. The version of OpenGL supported on the
   PowerStorm 300 and PowerStorm 350 graphics cards is Version 1.1.

   The implementation of OpenGL Version 1.1 for the PowerStorm 300
   or PowerStorm 350 is designed to coexist with installations
   of the Open3D layered product for older graphics cards. The
   images shipped with OpenVMS are named DECW$OPENGLSHR_V11 and
   DECW$OPENGLUSHR_V11. The _V11 suffix is used to distinguish the
   OpenGL Version 1.1 images from the OpenGL Version 1.0 images
   shipped with Open3D (DECW$OPENGLSHR and DECW$OPENGLUSHR).

   Applications using only OpenGL V1.0 features may be linked
   against either the Open3D images or the new Version 1.1 images.
   Applications using OpenGL Version 1.1 features should be linked
   explicitly against the Version 1.1 images.

   For further information on OpenGL support for the PowerStorm 300
   and PowerStorm 350, refer to the PowerStorm 300/350 Installation
   Guide and Release Notes documentation shipped with the graphics
   card.

                                WARNING

      If 3D graphics will be used extensively, particularly in
      an environment using multiple PowerStorm 300 and PowerStorm
      350 cards in a single system, read and strictly observe the
      guidelines for setting SYSGEN parameters and account quotas
      contained in the PowerStorm 300/350 OpenVMS Graphics Support
      Release Notes Version 1.1 and the Compaq PowerStorm 300/350
      Graphics Controllers Installation Guide shipped with the
      graphics card. The release notes can also be accessed on the
      OpenVMS Documentation CD-ROM in the following directory:

       Directory                  File Name

       [73.DOCUMENTATION.PS_TXT]  P300_350_REL_NOTES.PS,TXT
 

3  3X-DAPBA-FA_and_3X-DAPCA-FA_ATM_LAN_Adapters_(Alpha)
   The 3X-DAPBA-FA (HE155) and 3X-DAPCA-FA (HE622) are PCI-based
   ATM LAN adapters for Alpha-based systems that provide high-
   performance PCI-to-ATM capability. The 3X-DAPBA-FA adapter offers
   a 155 Mbps fiber connection; the 3X-DAPCA-FA adapter offers a 622
   Mbps fiber connection.

   The datalink drivers for these adapters function in a new
   OpenVMS ATM environment. The new OpenVMS ATM environment is fully
   compatible with the existing legacy ATM support and allows both
   ATM environments to be configured on a single system. Also, the
   LANCP management interface is the same for both ATM environments.

   For additional information about the 3X-DAPBA-FA PCI HE155
   ATM and 3X-DAPCA-FA PCI HE622 ATM LAN adapters, refer to the
   following URL:

   http://www.compaq.com/alphaserver/products/options
 

3  COBOL_RTL_Enhancements
   The COBOL RTL for both Alpha and VAX supports five new intrinsic
   functions with four-digit year formats:

      YEAR-TO-YYYY
      DATE-TO-YYYYMMDD
      DAY-TO-YYYYDDD
      TEST-DATE-YYYYMMDD
      TEST-DAY-YYYYDDD

   The COBOL RTL for Alpha has improved performance for the DISPLAY
   statement redirected to a file and for programs compiled with the
   /MATH=CIT3 and /MATH=CIT4 qualifiers.

   This RTL's handling of ON SIZE ERROR is now more compatible with
   that of Compaq COBOL for OpenVMS VAX.
 

3  Compaq_C_Run-Time_Library_Enhancements
   The following sections describe the Compaq C RTL enhancements
   included in OpenVMS Version 7.3. For more details, refer to the
   revision of the Compaq C RTL Reference Manual that ships with
   Compaq C Version 6.3 or later.
 

4  Strptime_Function_Is_XPG5-Compliant
   The strptime function has been modified to be compliant with
   X/Open CAE Specification System Interfaces and Headers Issue
   5 (commonly known as XPG5). The change for XPG5 is in how the
   strptime function processes the "%y" directive for a two-digit
   year within the century if no century is specified.

   When a century is not otherwise specified, XPG5 requires that
   values for the "%y" directive in the range 69-99 refer to years
   in the twentieth century (1969 to 1999 inclusive), while values
   in the range 00-68 refer to years in the twenty-first century
   (2000 to 2068 inclusive). Essentially, for the "%y" directive,
   strptime became a "pivoting" function, with 69 being a pivoting
   year.

   Before this change, the strptime function interpreted a two-digit
   year with no century as a year within the twentieth century.

   With OpenVMS Version 7.3, the XPG5-compliant strptime becomes the
   default strptime function in the Compaq C RTL. However, the
   previous nonpivoting XPG4-compliant strptime function is retained
   for compatibility.

   The pivoting is controlled by the DECC$XPG4_STRPTIME logical
   name. To use the nonpivoting version of strptime, either:

   o  Define DECC$XPG4_STRPTIME to any value before invoking the
      application.

      OR

   o  Call the nonpivoting strptime directly as the function
      decc$strptime_xpg4.
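
   The pivoting behavior can be demonstrated with standard C on any
   XPG5-compliant strptime implementation; the pivot_year helper
   below is illustrative, not a Compaq C RTL routine:

   ```c
   #define _XOPEN_SOURCE     /* declare strptime */
   #include <assert.h>
   #include <stdio.h>
   #include <string.h>
   #include <time.h>

   /* Parse a two-digit year with the XPG5 "%y" pivot: 69-99 map to
      1969-1999, and 00-68 map to 2000-2068. */
   long pivot_year(const char *yy)
   {
       struct tm tm;
       memset(&tm, 0, sizeof tm);
       strptime(yy, "%y", &tm);
       return tm.tm_year + 1900;
   }

   int main(void)
   {
       printf("69 -> %ld\n", pivot_year("69"));   /* 1969: 20th century */
       printf("68 -> %ld\n", pivot_year("68"));   /* 2068: 21st century */
       return 0;
   }
   ```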
 

4  Nested_Directory_Levels_Limitation_Lifted_(Alpha)
   The Compaq C RTL I/O subsystem was enhanced to remove the
   restriction of eight nested directory levels for an ODS-5 device.
   This affects Compaq C RTL functions such as access, mkdir,
   opendir, rmdir, and stat.
 

4  Improved_Support_for_Extended_File_Specifications_(Alpha)
   The following sections describe improved Compaq C RTL support for
   extended file specifications.
 

5  Case_Preservation
   Programs linked against the Compaq C Run-Time Library DECC$SHR
   can now preserve the case of file names on ODS-5 disks.
   This applies when creating or reporting file names. By default,
   this feature is disabled. To enable this feature, enter the
   following command:

   $ DEFINE DECC$EFS_CASE_PRESERVE ENABLE

   If file names are all in uppercase, use the following command to
   convert the names to lowercase when reporting the name in UNIX
   style:

   $ DEFINE DECC$EFS_CASE_SPECIAL ENABLE

   If file names are not all in uppercase, then DEFINE DECC$EFS_
   CASE_SPECIAL ENABLE preserves case.

   The commands to disable the preceding logical-name settings are:

   $ DEFINE DECC$EFS_CASE_PRESERVE DISABLE
   $ DEFINE DECC$EFS_CASE_SPECIAL DISABLE

   The setting for the DECC$EFS_CASE_SPECIAL logical name, if not
   set to DISABLE, supersedes any setting for the DECC$EFS_CASE_
   PRESERVE logical name.

   The DECC$EFS_CASE_PRESERVE and DECC$EFS_CASE_SPECIAL logicals
   are checked only once per image activation, not on a file-by-file
   basis.
 

5  Long_File_Names
   For OpenVMS Alpha Version 7.2, some basic Compaq C RTL I/O
   functions (creat, stat, and the functions from the open family
   of functions) were enhanced to accept long OpenVMS-style file
   names for an ODS-5 device.

   For OpenVMS Alpha Version 7.3, all other Compaq C RTL functions,
   except chdir and the functions from the exec family of functions,
   were also enhanced to accept long OpenVMS-style file names for an
   ODS-5 device.

   All C RTL functions that accept or report full file
   specifications will process file specifications up to 4095 bytes
   long, subject to the rules defined for the media format. For
   file specifications in OpenVMS format, there are no special
   restrictions. In situations where a full file specification
   cannot be reported because the buffer is too short, the function
   attempts to report the abbreviated name.

   UNIX file names have the following restrictions:

   o  Names containing special characters, such as multiple periods,
      caret, or multinational characters, may be rejected.

   o  A function call may report failure if the output buffer is
      not large enough to receive the full name. For OpenVMS style
      names, the reported name would contain a file ID-abbreviated
      name. There is no representation of file ID-abbreviated names
      defined for UNIX.
 

4  Exact_Case_Argv_Arguments_(Alpha)
   Nonquoted command-line arguments passed to C and C++ programs
   (argv arguments) can now optionally have their case preserved,
   rather than being lowercased as in previous versions.

   By default, this feature is disabled.

   To enable this case preservation feature, define the logical
   name DECC$ARGV_PARSE_STYLE to "ENABLE" and set the process-level
   DCL parse style flag to "EXTENDED" in the process running the
   program:

   $ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
   $ SET PROCESS/PARSE_STYLE=EXTENDED

   Enabling this feature also ensures that the image name returned
   in argv[0] is also case-preserved.

   To disable this feature, use any one of the following commands:

   $ SET PROCESS/PARSE_STYLE=TRADITIONAL

   or

   $ DEFINE/SYSTEM DECC$ARGV_PARSE_STYLE DISABLE

   or

   $ DEASSIGN DECC$ARGV_PARSE_STYLE

   The value of the DECC$ARGV_PARSE_STYLE logical is case-
   insensitive.
 

4  Implicitly_Opening_Files_for_Shared_Access
   The Compaq C RTL was enhanced to open all files for shared access
   as if the "shr=del,get,put,upd" option were specified in the open*
   or creat call.

   To enable this feature, define the logical name DECC$FILE_SHARING
   to the value "ENABLE". The value is case-insensitive.

   DECC$FILE_SHARING is checked only once per image activation, not
   on a file-by-file basis.
 

4  Translating_UNIX_File_Specifications
   The Compaq C RTL was enhanced to allow interpreting the leading
   part of a UNIX-style file specification as either a subdirectory
   name or a device name.

   Translating a "foo/bar" UNIX-style name to a "foo:bar" VMS-style
   name remains the default behavior.

   To translate a "foo/bar" UNIX-style name to a "[.foo]bar" VMS-
   style name, define the logical name DECC$DISABLE_TO_VMS_LOGNAME_
   TRANSLATION to ENABLE.

   DECC$DISABLE_TO_VMS_LOGNAME_TRANSLATION is checked only once per
   image activation, not on a file-by-file basis.
 

4  New_Functions
   The Compaq C RTL has added the following functions in OpenVMS
   Version 7.3:

   fchown
   link
   utime
   utimes
   writev
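
   For example, writev performs a gather write, sending several
   discontiguous buffers to a file in a single call. The write_two
   helper and file name below are illustrative:

   ```c
   #include <assert.h>
   #include <fcntl.h>
   #include <string.h>
   #include <sys/uio.h>
   #include <unistd.h>

   /* Write two buffers with one writev call; returns the number of
      bytes written, or -1 on error. */
   ssize_t write_two(int fd, const char *a, const char *b)
   {
       struct iovec iov[2];
       iov[0].iov_base = (void *)a;
       iov[0].iov_len  = strlen(a);
       iov[1].iov_base = (void *)b;
       iov[1].iov_len  = strlen(b);
       return writev(fd, iov, 2);
   }

   int main(void)
   {
       int fd = open("gather.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0600);
       assert(fd != -1);
       assert(write_two(fd, "Open", "VMS\n") == 8);   /* 4 + 4 bytes */
       close(fd);
       unlink("gather.tmp");
       return 0;
   }
   ```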
 

3  Fortran_Support_for_64-Bit_Address_(Alpha)
   Support has been added to OpenVMS Alpha to allow Fortran
   developers to use static data in 64-bit address space.

   For more information about how to use this feature, refer to the
   Fortran documentation.
 

3  Large_Page-File_Sections_(Alpha)
   Page-file sections are used to store temporary data in private
   or global (shared) sections of memory. In previous releases of
   OpenVMS Alpha, the maximum amount of data that could be backed up
   to page files was 32 GB per process (4 process page files, each 8
   GB) and 504 GB per system (63 page files, each 8 GB).

   With OpenVMS Alpha Version 7.3, the previous limits for page-file
   sections were extended significantly to take advantage of larger
   physical memory. Now images that use 64-bit addressing can map
   and access an amount of dynamic virtual memory that is larger
   than the amount of physical memory available on the system.

   With the new design, if a process requires additional page-
   file space, page files can be allocated dynamically. Space is
   no longer reserved in a distinct page file, and pages are no
   longer bound to an initially assigned page file. Instead, if
   modified pages must be written back, they are written to the best
   available page file.

   Each page or swap file can hold approximately 16 million pages
   (128 GB), and up to 254 page or swap files can be installed.
   Files larger than 128 GB are installed as multiple files.
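
   The 128 GB per-file figure follows from simple page arithmetic,
   assuming the usual 8 KB OpenVMS Alpha page size; the function name
   below is illustrative:

   ```c
   #include <assert.h>
   #include <stdio.h>

   /* Capacity of one page or swap file: ~16 million pages of 8 KB each. */
   unsigned long long pagefile_capacity_bytes(void)
   {
       unsigned long long page_bytes = 8ULL * 1024;           /* 8 KB page   */
       unsigned long long pages      = 16ULL * 1024 * 1024;   /* ~16 M pages */
       return pages * page_bytes;                             /* 128 GB      */
   }

   int main(void)
   {
       unsigned long long gb = 1024ULL * 1024 * 1024;
       assert(pagefile_capacity_bytes() == 128 * gb);
       /* With up to 254 files installed, the total backing store is: */
       printf("%llu GB per file, %llu GB across 254 files\n",
              pagefile_capacity_bytes() / gb,
              254 * pagefile_capacity_bytes() / gb);
       return 0;
   }
   ```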
 

3  Multipath_System_Services
   The new Multipath system services provide the capability to
   return path information and allow you to enable, disable, and
   switch specific I/O paths to any device.

   The concept of multiple I/O paths to storage devices was
   introduced in OpenVMS Version 7.2-1. Multiple paths allow the
   system to switch to another I/O path to a device if the path
   currently in use fails.

   To assist in decision making when configuring a system's I/O
   structure, the following DCL commands were made available to
   allow you to display I/O path information and change the current
   settings affecting these paths:

   o  SET DEVICE device-name/PATH=path-description-string/SWITCH

   o  SET DEVICE device-name/PATH=path-description-string/[NO]ENABLE

   o  SHOW DEVICE/MULTIPATH device-name

   In OpenVMS Version 7.3, the capability to return path information
   and allow you to enable, disable, and switch specific I/O paths
   to any device is now implemented in the following new system
   services:

   o  SYS$DEVICE_PATH_SCAN

      This service returns path information for a given Multipath
      I/O device. Each call to the service returns the name of one
      of the paths to the device. A context argument is used to
      maintain continuity between calls. This mechanism is similar
      to the one currently used for SYS$GETDVI.

   o  SYS$SET_DEVICE[W]

      Use this service to switch the selected path that handles I/O
      to a device, or to enable or disable a path for future use in
      the event of failover. When switching a path, the path change
      is initiated at the time the request is made by the system
      service.

      The current functions of this service include forcing an
      immediate path switch and enabling or disabling paths.

      A synchronous version of this service, SYS$SET_DEVICEW, is
      also provided. This service returns to the caller only after
      the path switch attempt has been made. Should the path switch
      fail, an error condition is returned to the caller.

      Currently, $SET_DEVICE allows only one valid item list entry.

   For additional information, refer to the OpenVMS System Services
   Reference Manual.
 

3  Multiprocess_Debugging_(Alpha)
   For Version 7.3, debugger support for multiprocess programs has
   been extensively overhauled. Problems have been corrected and the
   user interface has been improved.

   The multiprocess debugging enhancements include the following
   features:

   o  Greater control over individual processes and groups of
      processes, including:

         Execution of processes (or groups of processes)
         Suspension of processes (or groups of processes)
         Exiting processes (or groups of processes), with or without
         exit handler execution

   o  Ability to create user-defined groups of processes

   o  Easier startup of a multiprocess debugging session; the
      default configuration of the kept debugger is now a
      multiprocess session

   o  Applications that use $HIBER WAIT (LIB$WAIT, $SCHDWK, and so
      on) can now be debugged in a multiprocess debugging session

   These enhancements make it much easier to debug multiprocess
   programs.
 

3  Performance_API
   The Performance Application Programming Interface (API) provides
   a documented functional interface (the $GETRMI system service)
   that allows performance software engineers to access a predefined
   list of performance data items.

   For more information about $GETRMI, refer to the OpenVMS System
   Services Reference Manual.
 

3  POLYCENTER_Software_Installation_Utility_Enhancements
   Table 5-1, PDL Changes, shows the changes made to the product
   description language (PDL) for the POLYCENTER Software
   Installation utility.

   Table 5-1 PDL Changes

   Statement          Description

   execute upgrade    New statement.
   execute            Modified to execute on a reconfigure
   postinstall        operation.
   file               Refinements made to their conflict detection
   module             and resolution algorithms. For example, when a
                      file from the kit contains the same non-zero
                      generation number as the same file already
                      installed, the file from the kit is selected
                      to replace the file on disk. Previously,
                      in this tie situation, the file on disk was
                      retained to resolve the conflict.
   bootstrap block    Obsolete. However, the utility will continue
   execute release    to process these statements in a backward
   patch image        compatible manner to support existing kits
   patch text         that might have used them.



   Function           Description

   upgrade            Enhanced to fully support version range
                      checking.
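
   The file conflict tie-breaking change described in Table 5-1 can
   be sketched as a small decision function. The names are
   illustrative and the handling of a zero-generation tie is an
   assumption here; the utility's actual conflict resolution
   algorithm considers more than generation numbers.

```python
def select_file(kit_generation, disk_generation):
    """Return which copy wins a file conflict, per the V7.3 rule:
    on a tie with the same non-zero generation number, the file from
    the kit now replaces the file on disk (previously the file on
    disk was retained). Zero-generation tie handling is an
    assumption in this sketch."""
    if kit_generation > disk_generation:
        return "kit"
    if kit_generation < disk_generation:
        return "disk"
    # Tie: same non-zero generation number -> kit file is selected
    return "kit" if kit_generation != 0 else "disk"
```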

   The POLYCENTER Software Installation Utility Developer's Guide
   has been extensively revised for this release. Major improvements
   include:

   o  Updated descriptions for most PDL statements.

   o  A comprehensive presentation on using custom command
      procedures with execute statements (added to the Advanced
      Topics chapter).

   o  New tables, diagrams, and examples.
 

3  New_Process_Dump_Tools_(Alpha)
   OpenVMS Version 7.3 contains new tools for processing dump files.
   Note that these new-style process dump and process dump analysis
   tools are not compatible with the old-style process dumps. That
   is, if you have a problem you want to analyze with the new tools,
   you must generate a new process dump using the new process dump
   image.

   The following sections describe the new tools.
 

4  DCL_ANALYZE_PROCESS_DUMP_(Alpha)
   The DCL ANALYZE/PROCESS_DUMP command invokes the OpenVMS debugger
   to analyze a process dump, giving you access to debugger commands
   for your analysis. In OpenVMS Version 7.3, most of the old DCL
   ANALYZE/PROCESS_DUMP qualifiers have no effect. Only the /FULL
   and /IMAGE qualifiers are still valid. Both these qualifiers are
   still optional.

   /FULL now causes the debugger to execute the debugger SHOW IMAGE,
   SHOW CALL, and SHOW THREAD/ALL commands after a process dump file
   has been opened.

   /IMAGE has been renamed to /IMAGE_PATH, and is now a directory
   specification, rather than a file specification. /IMAGE_PATH
   specifies a directory in which to look for the debug symbol
   information files (.DSF or .EXE files, in that order) that belong
   to the process dump file. The name of the symbol file must be
   the same as the image name in the process dump file. For example,
   for MYIMAGE.DMP, the debugger searches for file MYIMAGE.DSF or
   MYIMAGE.EXE.
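
   The symbol file search described above can be sketched as
   follows. find_symbol_file is a hypothetical helper (the
   debugger's lookup is internal), shown here only to make the
   .DSF-before-.EXE order concrete.

```python
import os

def find_symbol_file(image_path_dir, dump_name):
    """Given the /IMAGE_PATH directory and a dump file name such as
    MYIMAGE.DMP, return the symbol file the debugger would try:
    MYIMAGE.DSF if present, otherwise MYIMAGE.EXE, otherwise None.
    The symbol file name must match the image name in the dump."""
    base, _ = os.path.splitext(os.path.basename(dump_name))
    for ext in (".DSF", ".EXE"):          # .DSF is searched first
        candidate = os.path.join(image_path_dir, base + ext)
        if os.path.exists(candidate):
            return candidate
    return None
```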

   Version 7.3 and later debuggers check for link date-time
   mismatches between the image in the dump file and its DST file,
   and issue a warning if a mismatch is discovered.

   For more information about the DCL ANALYZE/PROCESS_DUMP command,
   refer to the OpenVMS DCL Dictionary: A-M.
 

4  Debugger_ANALYZE_PROCESS_DUMP_Command
   The debugger has a new command:

   ANALYZE/PROCESS_DUMP/IMAGE_PATH[=directory-spec] dumpfile

   This command is available only in the kept debugger. The kept
   debugger is the image you invoke with the command DEBUG/KEEP,
   which allows you to run and rerun programs from the same
   debugging session.

   The qualifier /PROCESS_DUMP is required.

   For more information, refer to the OpenVMS Debugger Manual.
 

4  Debugger_SDA_Command
   The new debugger SDA command invokes the System Dump Analyzer
   (SDA) to allow you to look at a process dump from within the
   OpenVMS debugger. For example:

    DBG> SDA

    OpenVMS (TM) Alpha process dump analyzer

     SDA>
        .
        .
        .
    SDA> EXIT
    DBG>

   This allows you to use SDA to analyze a process dump without
   terminating a debugger session.

   For more information, refer to the OpenVMS Debugger Manual.
 

4  Analyzing_Process_Dumps_on_Different_Systems
   You can analyze a process dump file on a system different from
   the one on which it was generated. However, if there is a base
   image link date/time mismatch between the generating system
   and analyzing system, you must copy SYS$BASE_IMAGE.EXE from the
   generating system and point to it with the SDA$READ_DIR logical
   name.

   For threaded process dump analysis on a system different from the
   one on which it was generated, it may also be necessary to copy
   and logically point to the generating system's PTHREAD$RTL and
   PTHREAD$DBGSHR (POSIX Threads Library debug assistant).
 

4  Forcing_a_Process_Dump
   You can force a process dump with the DCL command
   SET PROCESS/DUMP=NOW process-spec. This command causes the
   contents of the address space occupied by process-spec to be
   written immediately to the file image-name.DMP in the current
   directory, where image-name is the name of the main image running
   in the process.

   For more information about the DCL SET PROCESS/DUMP command,
   refer to the OpenVMS DCL Dictionary: N-Z.
 

4  Security_and_Diskquotas
   A process dump is either complete or partial. A complete process
   dump contains all of process space and all process-pertinent data
   from system space. A partial process dump contains only user-
   readable data from process space and only those data structures
   from system space that are not deemed sensitive. Privileged
   or protected data, such as an encryption key in third-party
   software, might be considered sensitive.

   In general, nonprivileged users should not be able to read
   complete process dumps, and by default they cannot do so.
   However, certain situations require nonprivileged users to be
   able to read complete process dumps. Other situations require
   enabling a user to create a complete process dump while at
   the same time preventing that user from being able to read the
   complete process dump.

   By default, process dumps are written to the current default
   directory of the user. The user can override this by defining
   the logical name SYS$PROCDMP to identify an alternate directory
   path. Note that the name of the process dump file is always the
   same as the name of the main image at the time the process dump
   is written, with the file extension .DMP.
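
   The naming rule can be sketched as a small helper. This is
   illustrative only: logical name translation is simulated with a
   dictionary, and the directory syntax is simplified.

```python
def dump_file_path(image_name, default_dir, logicals):
    """Return where the process dump is written: image-name.DMP in
    the user's default directory, unless the logical name
    SYS$PROCDMP identifies an alternate directory. 'logicals'
    stands in for the process logical name tables."""
    directory = logicals.get("SYS$PROCDMP", default_dir)
    return f"{directory}{image_name}.DMP"
```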
 

5  Special_Rights_Identifiers
   You can use the new rights identifier IMGDMP$READALL to allow
   a nonprivileged user to read a complete process dump. You
   can use the new rights identifier IMGDMP$PROTECT to protect
   a complete process dump from being read by the user that
   created the process dump. These rights identifiers are created
   during the installation of OpenVMS Version 7.3 by the image
   SYS$SYSTEM:IMGDMP_RIGHTS.EXE, which is also run automatically
   during system startup to ensure that these rights identifiers
   exist with the correct values and attributes.

   If these rights identifiers have been deleted, you can run
   SYS$SYSTEM:IMGDMP_RIGHTS.EXE to recreate them.

   Note that IMGDMP$READALL has no attributes, but IMGDMP$PROTECT is
   created with the RESOURCE attribute.
 

5  Privileged_Users
   For this discussion, a privileged user is one who satisfies one
   of the following conditions:

   o  Has one or more of the privileges CMKRNL, CMEXEC, SYSPRV,
      READALL, or BYPASS

   o  Is a member of a system UIC group (by default [10,n] or
      lower). Such users are treated as though they hold SYSPRV
      privilege.

   Holders of CMKRNL or CMEXEC can write complete process dumps.
   Holders of any of the other privileges can read a process dump
   wherever it has been written.
 

5  Nonprivileged_Users
   To allow a nonprivileged user to write and read complete process
   dumps, grant the rights identifier IMGDMP$READALL to the user.
   If the IMGDMP$READALL rights identifier does not exist, run the
   image SYS$SYSTEM:IMGDMP_RIGHTS.EXE to create it (see Special
   Rights Identifiers). Then use AUTHORIZE to grant the rights
   identifier to the user. For example:

       $ DEFINE /USER SYSUAF SYS$SYSTEM:SYSUAF.DAT  !if necessary
       $ RUN SYS$SYSTEM:AUTHORIZE
       UAF> GRANT /IDENTIFIER IMGDMP$READALL <user>
       UAF> EXIT

   Note that the user must log out and log in again to be able to
   receive the rights identifier. A nonprivileged user with rights
   identifier IMGDMP$READALL can read and write complete process
   dumps without restriction.
 

5  Protecting_Process_Dumps
   You can allow a nonprivileged user to write a complete process
   dump and at the same time prevent the user from reading the
   process dump just written. To do so, perform the following
   procedure:

   1. If the IMGDMP$PROTECT rights identifier does not exist, run
      the image SYS$SYSTEM:IMGDMP_RIGHTS.EXE to create it (see
      Special Rights Identifiers).

   2. Create a protected directory with rights identifier
      IMGDMP$PROTECT.

   3. Define protected logical name SYS$PROTECTED_PROCDMP to point
      to the protected directory.

      If DISKQUOTA is to be used on the disk containing the
      protected directory, specify the maximum disk space to be
      used for process dumps.

                                CAUTION

      Do not grant IMGDMP$PROTECT to any user. It is granted and
      revoked as needed by SYS$SHARE:IMGDMP.EXE from executive
      mode while writing a process dump. If you grant it
      permanently to a user, that user has access to all process
      dumps written to the protected directory.

   You can choose to set up additional ACLs on the protected
   directory to further control which users are allowed to read
   and write process dumps there.

   Note that to take a process dump when the image is installed with
   elevated privileges or belongs to a protected subsystem, the user
   must hold CMKRNL privilege, and is by definition a privileged
   user (see Privileged Users).
 

3  RMS_Locking_Enhancements
   This section introduces the new Record Management Services (RMS)
   enhancements provided in this release.
 

4  RMS_Locking_Performance_(Alpha)
   The following sections describe RMS locking performance
   enhancements that are in OpenVMS Alpha Version 7.2-1H1 and in
   OpenVMS Version 7.3.
 

5  RMS_Global_Buffer_Read-Mode_Locking
   In the RMS run-time processing environment, the use of global
   buffers can minimize I/O operations for shared files. This release
   introduces read-mode bucket locking that minimizes locking for
   shared access to global buffers. This new functionality:

   o  Allows concurrent read access to the global buffers. Accesses
      are no longer serialized, waiting to acquire an exclusive lock
      for a read access.

   o  Caches the read-mode lock as a system lock, which is retained
      over accesses and only lowered to null when the lock is
      blocking an exclusive write request. This functionality
      significantly reduces both local and remote lock request
      traffic (the number of $ENQ and $DEQ system service calls)
      as well as associated IPL-8 spinlock activity and System
      Communications Services (SCS) messages for a cluster.

   o  Does not increase the number of lock resource names or the
      number of active system or process locks on the system.

   o  Is functionally compatible in mixed version clusters that
      include both Alpha and VAX computers.

   This new functionality applies to read operations (using the $GET
   and $FIND services) for all three file organizations: sequential,
   relative, and indexed. It also applies to a write operation
   (using the $PUT service) for the read accesses used for index
   buckets the first time through an index tree for the write.

   You do not need to make changes to existing applications to
   implement the read-only global bucket locks. However, global
   buffers must be set on a data file to take advantage of the
   enhancement. Use the following DCL command, where n is the number
   of buffers:

   $ SET FILE/GLOBAL_BUFFER=n <filename>

   For information about specifying the number of buffers, refer
   to the OpenVMS DCL Dictionary. For general information about
   using global buffers, refer to the section entitled Using
   Global Buffers for Shared Files in the Guide to OpenVMS File
   Applications.

   In a mixed cluster environment where there may be high contention
   for specific buckets, the Alpha nodes that are using read-mode
   global bucket locking may dominate accesses to write-shared
   files, thereby preventing timely access by other nodes.

   With the new /CONTENTION_POLICY=keyword qualifier to the SET RMS_
   DEFAULT command, you can specify the level of locking fairness at
   the process or system level for environments that experience high
   contention conditions.

   For more information about using the /CONTENTION_POLICY=keyword
   qualifier, refer to the SET RMS_DEFAULT section of the OpenVMS
   DCL Dictionary.
 

5  No_Query_Record_Locking_Option
   This release introduces new functionality that can minimize
   record locking for read accesses to shared files, thereby
   avoiding the processing associated with record locking calls
   to the Lock Manager.

   In previous releases, if a file is opened allowing write sharing,
   an exclusive record lock is taken out for all record operations
   (both read and write). Applications may obtain record locking
   modes other than the exclusive lock (default) by specifying
   certain options to the RAB$L_ROP field. However, all the options
   involve some level of record locking. That is, the options
   require $ENQ or $DEQ system service calls to the Lock Manager.

   The user record locking options include the RAB$V_NLK (no lock)
   query locking option, which requests that RMS take out a lock
   to probe for status and not hold the lock for synchronization.
   If the lock is not granted (exclusive lock held) and the read-
   regardless (RAB$V_RRL) option is not set, the record access fails
   with an RMS$_RLK status. Otherwise, the record is returned with
   one of the following statuses:

   o  RMS$_SUC - No other writers

   o  RMS$_OK_RLK - Record can be read but not written

   o  RMS$_OK_RRL - Exclusive lock is held (lock request denied) but
      the read-regardless (RAB$V_RRL) option is set

   When only the RAB$V_NLK option is specified, record access can
   be denied. When both the RAB$V_NLK and RAB$V_RRL options are
   specified, an application can guarantee the return of any record
   with a success or alternate success status.
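
   The possible outcomes can be modeled as a small decision table.
   This is a sketch of the documented statuses, not RMS code; the
   write_locked parameter stands in for the "record can be read but
   not written" case.

```python
def read_status(exclusive_held, rrl_set, write_locked=False):
    """Model of the status returned for a RAB$V_NLK (no-lock query)
    read, per the rules above:
      RMS$_SUC    - no other writers
      RMS$_OK_RLK - record can be read but not written
      RMS$_OK_RRL - exclusive lock held, but RAB$V_RRL is set
      RMS$_RLK    - exclusive lock held and RAB$V_RRL not set (fails)
    """
    if exclusive_held:
        # Lock request denied: the access fails unless
        # read-regardless (RAB$V_RRL) is set
        return "RMS$_OK_RRL" if rrl_set else "RMS$_RLK"
    return "RMS$_OK_RLK" if write_locked else "RMS$_SUC"
```

   With both options set, every call returns the record with a
   success or alternate success status, matching the guarantee
   described above.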

   This release introduces the no query record locking option,
   which allows applications to read records (using $GET or $FIND
   services) without any consideration of record locking. This
   option:

   o  Does not make a call to the Lock Manager

   o  Is equivalent to both RAB$V_NLK and RAB$V_RRL being set
      except that the RMS$_OK_RLK or RMS$_OK_RRL status will not
      be returned

   This functionality is independent of bucket locks. It applies to
   both local and global buffers and to all three file organizations
   (sequential, relative, and indexed).

   Three alternate methods for specifying the no query record
   locking option are outlined in Methods Available for Specifying
   No Query Record Locking.

   Note the following:

   o  The first method allows the option to be enabled externally,
      potentially without any application change.

   o  You should use any of the methods only as appropriate for
      the application. In particular, you should check for any
      dependency in an existing application on the alternate success
      status RMS$_OK_RLK or RMS$_OK_RRL.

   Table 5-2 Methods Available for Specifying No Query Record
             Locking

   To...                  Use This Method...

   Disable query record   Enter the following DCL command to request
   locking at the         that RMS use no query record locking for
   process or system      any read operation with both RAB$V_NLK
   level.                 and RAB$V_RRL options set in the RAB$L_ROP
                          field:

                          $ SET RMS_DEFAULT/QUERY_LOCK=
                                            DISABLE[/SYSTEM]

                          Keys on RAB$V_NLK and RAB$V_RRL options in
                          existing applications.
   Enable no query        Set the RAB$V_NQL option in the RAB$W_ROP_
   record locking on      2 field.
   a per-record read
   operation.             The RAB$V_NQL option takes precedence
                          over all other record locking options. Use
                          only if the current read ($GET or $FIND)
                          operation is not followed by an $UPDATE or
                          $DELETE call.
   Enable no query        Set the FAB$V_NQL option in the FAB$B_SHR
   record locking at      field to request that RMS use no query
   the file level.        locking for the entire period the file is
                          open for any read record operation with
                          both RAB$V_NLK and RAB$V_RRL options set
                          in the RAB$L_ROP field.

                          This option can be used with any
                          combination of the other available FAB$B_
                          SHR sharing options. Keys on RAB$V_NLK and
                          RAB$V_RRL options in applications.

   RMS precedence for the no query record locking option is as
   follows:

   o  The RAB$V_NQL option set in the RAB$W_ROP_2 field

   o  At file open (and applied, if RAB$V_NLK and RAB$V_RRL are set
      for the read operation):

      -  The FAB$V_NQL option set in the FAB$B_SHR field

      -  The SET RMS_DEFAULT/QUERY_LOCK=DISABLE setting at the
         process level

      -  The SET RMS_DEFAULT/QUERY_LOCK=DISABLE setting at the
         system level. If the process /QUERY_LOCK setting equals
         SYSTEM_DEFAULT (the default when the process is created),
         RMS uses the system specified value.
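
   The precedence order can be sketched as a function that computes
   whether a given read is performed with no query record locking.
   The parameter names are illustrative, not actual RMS structures.

```python
SYSTEM_DEFAULT = "SYSTEM_DEFAULT"   # process value when never set

def no_query_locking(rab_nql, fab_nql, nlk_and_rrl,
                     process_query_lock, system_query_lock):
    """Return True if RMS performs the read with no query record
    locking, following the precedence described above:
      1. RAB$V_NQL in RAB$W_ROP_2 wins outright.
      2. Otherwise the file, process, and system settings apply only
         when both RAB$V_NLK and RAB$V_RRL are set for the read.
    """
    if rab_nql:
        return True
    if not nlk_and_rrl:
        return False
    if fab_nql:                       # FAB$V_NQL set at file open
        return True
    setting = process_query_lock
    if setting == SYSTEM_DEFAULT:     # fall back to the system value
        setting = system_query_lock
    return setting == "DISABLE"
```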

   For more information, see OpenVMS Record Management Services
   Reference Manual.
 

4  Record_Locking_Options
   RMS uses the distributed Lock Manager ($ENQ system service) for
   record locking.

   To help prevent false deadlocks, the distributed Lock Manager
   uses the following flags for lock requests:

   Flag           Purpose

   LCK$M_         When set, the lock management services do not
   NODLCKWT       consider this lock when trying to detect deadlock
                  conditions.
   LCK$M_         When set, the lock management services do not
   NODLCKBLK      consider this lock as blocking other locks when
                  trying to detect deadlock conditions.

   In previous releases, RMS did not set these flags in its record
   lock requests.

   With this release, you can optionally request that RMS set
   these flags in record lock requests by setting the corresponding
   options RAB$V_NODLCKWT and RAB$V_NODLCKBLK in the new RAB$W_ROP_2
   field. For more information about using these options, refer to
   the flag information in the $ENQ section of the OpenVMS System
   Services Reference Manual: A-GETUAI.
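
   The effect of the two flags can be illustrated with a toy
   wait-for-graph cycle search. This is a conceptual sketch, not the
   Lock Manager's algorithm: both flags simply remove an edge from
   the deadlock search, from the waiter's side (NODLCKWT) or the
   blocker's side (NODLCKBLK) respectively.

```python
def has_deadlock(waits):
    """waits: list of (waiter, blocker, nodlckwt, nodlckblk) edges
    between processes. Detect a cycle in the wait-for graph,
    ignoring edges excluded by either flag."""
    graph = {}
    for waiter, blocker, nodlckwt, nodlckblk in waits:
        if nodlckwt or nodlckblk:
            continue                 # edge invisible to deadlock search
        graph.setdefault(waiter, set()).add(blocker)

    def reachable(start, target, seen):
        for nxt in graph.get(start, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    return any(reachable(p, p, set()) for p in graph)
```

   A lock that would otherwise complete a wait cycle no longer
   triggers a (false) deadlock once either flag is set on it.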
 

3  OpenVMS_Registry
   Beginning in OpenVMS Version 7.3, the $REGISTRY system service
   and the OpenVMS Registry server have been enhanced to use the
   Intra-Cluster Communications (ICC) protocol. ICC provides a
   high-performance communication mechanism that is ideal for large
   transfers. Using ICC eases restrictions on the amount of data
   that can be transferred between the $REGISTRY system service
   and the Registry server. These restrictions previously prevented
   large key values from being stored and retrieved, and prevented
   full searches of large databases. The changes made in OpenVMS
   Version 7.3 result in an incompatibility between the OpenVMS
   Version 7.2 $REGISTRY service and Registry server and the OpenVMS
   Version 7.3 $REGISTRY service and Registry server. However, these
   changes substantially benefit OpenVMS customers in this release
   and in future releases, when we plan to further reduce these
   restrictions.

   Also in OpenVMS Version 7.3, registry operations are
   client/server based and therefore require some time for the
   server to respond to a request. If the server is too busy, or
   the timeout value is too small, or both, the server will not
   respond in time and the $REGISTRY service will return a
   REG$_NORESPONSE error. This does not necessarily mean that the
   operation failed; it means only that the server was not able
   to respond before the time expired. Most operations complete
   immediately. However, Compaq recommends that you specify a
   timeout value of at least 5 seconds.

   The new format of the $REGISTRY system service is:

   $REGISTRY [efn], func, [ntcredentials], itmlst, [iosb] [,astadr]
   [,astprm] [,timeout]

   Note that astadr, astprm, and timeout are optional arguments.
   These optional arguments cannot be defaulted, which means that to
   specify the timeout argument, you must specify astadr and astprm
   (or specify them as 0). Some languages, such as Bliss and Macro,
   provide macros to do this for you.
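
   Because the optional arguments are positional and cannot be
   defaulted individually, passing timeout requires placeholders for
   astadr and astprm. The padding rule can be sketched with a
   hypothetical wrapper (not the real language binding):

```python
def registry_args(efn, func, ntcredentials, itmlst, iosb,
                  astadr=None, astprm=None, timeout=None):
    """Build the positional argument list for a hypothetical
    $REGISTRY wrapper. Trailing optional arguments may simply be
    omitted, but specifying a later argument (timeout) forces
    placeholders (0) for any earlier ones that were omitted."""
    args = [efn, func, ntcredentials, itmlst, iosb]
    tail = [astadr, astprm, timeout]
    while tail and tail[-1] is None:   # drop unused trailing optionals
        tail.pop()
    args += [0 if a is None else a for a in tail]
    return args
```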
 

4  REG$CP_Registry_Utility
   The REG$CP Registry Utility has been enhanced to use the timeout
   argument. REG$CP commands now support a /WAIT=numberofseconds
   qualifier, allowing you to specify the number of seconds to
   wait for the Registry Server to respond to the command. /WAIT is
   negatable (by using /NOWAIT). However, as with the timeout
   argument, Compaq recommends that you specify a minimum of 5
   seconds.

   The REG$CP Registry Utility has also been enhanced to display
   security descriptors. The LIST command can now be used to display
   the security descriptor associated with a particular key. This
   includes the security descriptor structure itself, and may also
   include Security Identifiers (SIDs), System Access-Control Lists
   (SACLs), and Discretionary Access-Control Lists (DACLs). You
   must have access to the key to display the security descriptor;
   in other words, you must have proper credentials to read the
   security information, or you must be suitably privileged.

   For more information, refer to the OpenVMS Connectivity Developer
   Guide, which is available on the OpenVMS Alpha CD-ROM in
   directory [COM_ALPHA_011A].
 

3  Alpha_SDA_Commands,_Parameters,_and_Qualifiers
   The OpenVMS Version 7.3 software release offers a number of new
   Alpha SDA commands, parameters, and qualifiers. OpenVMS Version
   7.3 also offers many new parameters and qualifiers for existing
   commands.

   For more detailed information, refer to the OpenVMS Alpha System
   Analysis Tools Manual.
 

4  New_Alpha_SDA_Commands
   The following section lists and defines the new System Dump
   Analyzer commands with their parameters and qualifiers.
 

5  DUMP
   The DUMP command displays the contents of a range of memory
   formatted as a comma-separated variable (CSV) list, suitable
   for inclusion in a spreadsheet.

   The following table shows the parameter for the DUMP command:

   Parameter          Meaning

   range              The range of locations to be displayed. The
                      range is specified in one of the following
                      formats:

                      Format    Meaning

                      m:n       Range from address m to address n
                                inclusive
                      m;n       Range from address m for n bytes

   The following table shows the qualifiers for the DUMP command:

   Qualifier                   Meaning

   /COUNT=[{ALL|records}]      Gives the number of records to be
                               displayed. The default is to display
                               all records.
   /DECIMAL                    Outputs data as decimal values.
   /FORWARD                    Causes SDA to display the records
                               in the history buffer in ascending
                               address order. This is the default.
   /HEXADECIMAL                Outputs data as hexadecimal values.
                               This is the default.
   /INDEX_ARRAY [={LONGWORD    Indicates to SDA that the range
   (default)|QUADWORD}]        of addresses given is a vector
                               of pointers to the records to be
                               displayed. The vector can be a
                               list of longwords (default) or
                               quadwords. The size of the range
                               must be an exact number of longwords
                               or quadwords as appropriate.
   /INITIAL_POSITION           Indicates to SDA which record is to
   ={ADDRESS=address           be displayed first. The default is
   |RECORD=number}             the lowest addressed record if
                               /FORWARD is used, and the highest
                               addressed record if /REVERSE is used.
                               The initial position may be given as
                               a record number within the range, or
                               the address at which the record is
                               located.
   /LONGWORD                   Outputs each data item as a longword.
                               This is the default.
   /PHYSICAL                   Indicates to SDA that all addresses
                               (range and/or start position) are
                               physical addresses. By default,
                               virtual addresses are assumed.
   /QUADWORD                   Outputs each data item as a quadword.
   /RECORD_SIZE=size           Indicates the size of each record
                               within the history buffer, the
                               default being 512 bytes. Note that
                               this size must exactly divide into
                               the total size of the address range
                               to be displayed, unless /INDEX_ARRAY
                               is specified.
   /REVERSE                    Causes SDA to display the records
                               in the history buffer in descending
                               address order.
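
   The two range formats accepted by the range parameter can be
   parsed as follows. This is an illustrative helper that assumes
   hexadecimal addresses, as SDA uses.

```python
def parse_range(spec):
    """Parse an SDA DUMP range: 'm:n' runs from address m to
    address n inclusive; 'm;n' runs from address m for n bytes.
    Addresses are hexadecimal. Returns (start, length_in_bytes)."""
    if ":" in spec:
        m, n = (int(x, 16) for x in spec.split(":"))
        return m, n - m + 1
    if ";" in spec:
        m, n = (int(x, 16) for x in spec.split(";"))
        return m, n
    raise ValueError("range must be m:n or m;n")
```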
 

5  SET_SYMBOLIZE
   The SET SYMBOLIZE command enables or disables symbolization of
   addresses in the display from an EXAMINE command.

   The following shows the parameters for the SET SYMBOLIZE command:

   Parameter  Meaning

   ON         Enables symbolization of addresses
   OFF        Disables symbolization of addresses

   There are no qualifiers for this command.
 

5  SHOW_MEMORY
   The SHOW MEMORY command displays the availability and usage of
   memory resources.

   There are no parameters for this command. The following shows the
   qualifiers for the SHOW MEMORY command, which are the same as for
   the existing DCL command:

   Qualifier            Meaning

   /ALL                 Displays all available information;
                        that is, information displayed by the
                        /FILES, /PHYSICAL_PAGES, /POOL, and /SLOTS
                        qualifiers. This is the default display.
   /BUFFER_OBJECTS      Displays information about system resources
                        used by buffer objects.
   /CACHE               Displays information about the Virtual
                        I/O Cache facility. The cache facility
                        information is displayed as part of the SHOW
                        MEMORY and SHOW MEMORY/CACHE/FULL commands.
   /FILES               Displays information about the use of
                        each paging and swapping file currently
                        installed.
   /FULL                Displays additional information about
                        each pool area or paging or swapping
                        file currently installed, when used with
                        the /POOL or the /FILES qualifier. This
                        qualifier is ignored unless the /FILES or
                        the /POOL qualifier is specified explicitly.
                        When used with the /CACHE qualifier, /FULL
                        displays additional information about the
                        use of the Virtual I/O Cache facility.
   /GH_REGIONS          Displays information about the granularity
                        hint regions (GHR) that have been
                        established. For each of these regions,
                        information is displayed about the size of
                        the region, the amount of free memory, the
                        amount of memory in use, and the amount of
                        memory released to OpenVMS from the region.
                        The granularity hint regions information is
                        also displayed as part of SHOW MEMORY, SHOW
                        MEMORY/ALL, and SHOW MEMORY/FULL commands.
   /PHYSICAL_PAGES      Displays information about the amount of
                        physical memory and the number of free and
                        modified pages.
   /POOL                Displays information about the usage of each
                        dynamic memory (pool) area, including the
                        amount of free space and the size of the
                        largest contiguous block in each area.
   /RESERVED            Displays information about memory
                        reservations.
   /SLOTS               Displays information about the availability
                        of partition control block (PCB) vector
                        slots and balance slots.
 

5  SHOW_RAD
   The SHOW RAD command displays the settings and explanations of
   the RAD_SUPPORT system parameter fields, and the assignment
   of CPUs and memory to the Resource Affinity Domains (RADs).
   This command is only useful on platforms that support RADs. By
   default, the SHOW RAD command displays the settings of the RAD_
   SUPPORT system parameter fields.

   The following shows the parameter for the SHOW RAD command:

   Parameter   Meaning

   number      Displays information on CPUs and memory for the
               specified RAD

   The following shows the qualifier for the SHOW RAD command:

   Qualifier  Meaning

   /ALL       Displays settings of the RAD_SUPPORT parameter fields
              and the CPU and memory assignments for all RADs
 

5  SHOW_TQE
   The SHOW TQE command displays the entries in the Timer Queue. The
   default output is a summary display of all timer queue entries
   (TQEs) in chronological order.

   There are no parameters for this command. The following shows the
   qualifiers for the SHOW TQE command:

   Qualifier       Meaning

   /ADDRESS=n      Outputs a detailed display of the TQE at the
                   specified address
   /ALL            Outputs a detailed display of all TQEs
   /BACKLINK       Outputs the display of TQEs, either detailed
                   (/ALL) or brief (default), in reverse order,
                   starting at the entry furthest into the future
   /PID=n          Limits the display of the TQEs that affect the
                   process with the specified internal PID
   /ROUTINE=n      Limits the display of the TQEs for which the
                   specified address is the fork PC
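
   For example, to display a detailed list of all timer queue
   entries, you might enter:

   SDA> SHOW TQE/ALL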
 

5  UNDEFINE
   The UNDEFINE command causes SDA to remove the specified symbol
   from its symbol table.

   The following shows the parameter for the UNDEFINE command:

   Parameter       Meaning

   symbol          The name of the symbol to be deleted from SDA's
                   symbol table. A symbol name is required.

   There are no qualifiers for this command.
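
   For example, to delete a previously defined symbol (the symbol
   name MY_SYMBOL is only an illustration), you might enter:

   SDA> UNDEFINE MY_SYMBOL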
 

4  Parameters_and_Qualifiers_for_Existing_Commands
   The following section lists and defines new parameters and
   qualifiers for existing commands.
 

5  REPEAT
   The REPEAT command has the following new parameter:

   Parameter        Meaning

   count            The number of times the previous command is to
                    be repeated. The default is a single repeat.

   The REPEAT command has the following new qualifier:

   Qualifier        Meaning

   /UNTIL=condition Defines a condition that terminates the REPEAT
                    command. By default, there is no terminating
                    condition.
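
   For example, to repeat the previous command ten more times, you
   might enter:

   SDA> REPEAT 10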
 

5  SEARCH
   The /STEPS qualifier of the SEARCH command now allows any step
   size. In addition to the keywords QUADWORD, LONGWORD (default),
   WORD, or BYTE, any value can be specified.

   Qualifier                       Meaning

   /STEPS={QUADWORD|LONGWORD|WORD  Specifies the step factor of
   |BYTE|value}                    the search through the specified
                                   memory range. After the SEARCH
                                   command has performed the
                                   comparison between the value of
                                   expression and memory location,
                                   it adds the specified step factor
                                   to the address of the memory
                                   location. The resulting location
                                   is the next location to undergo
                                   the comparison. If you do not
                                   specify the /STEPS qualifier, the
                                   SEARCH command uses a step factor
                                   of a longword.
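
   For example, to search a memory range using a step factor of 2
   bytes rather than one of the keyword step sizes (the address
   range and the value searched for are hypothetical), you might
   enter:

   SDA> SEARCH/STEPS=2 80000000;1000 1234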
 

5  SET_OUTPUT
   The SET OUTPUT command has the following new qualifiers:

   Qualifier       Meaning

   /[NO]HEADER     The /HEADER qualifier causes SDA to include a
                   heading at the top of each page of the output
                   file. This is the default. The /NOHEADER
                   qualifier causes SDA to omit the page headings.
                   Use of /NOHEADER implies /NOINDEX.
   /SINGLE_        Indicates to SDA that the output for a single
   COMMAND         command is to be written to the specified file
                   and that subsequent output should be written to
                   the terminal.
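
   For example, to write the output of a single command to a file
   and have subsequent output return to the terminal (the file name
   POOL.LIS is only an illustration), you might enter:

   SDA> SET OUTPUT/SINGLE_COMMAND POOL.LIS
   SDA> SHOW POOL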
 

5  SET_PROCESS
   The SET PROCESS command has the following new qualifier:

   Qualifier   Meaning

   /NEXT       Causes SDA to locate the next valid process in the
               process list and select that process. If there are
               no further valid processes in the process list, SDA
               returns an error.
 

5  SHOW_DEVICE
   The SHOW DEVICE command has the following new qualifiers:

   Qualifier      Meaning

   /CDT=address   Identifies the device by the address of its
                  Connector Descriptor Table (CDT). This applies
                  to cluster port devices only.
   /PDT           Displays the Memory Channel Port Descriptor Table.
                  This qualifier is ignored for devices other than
                  memory channel.
   /UCB=ucb-      This is a synonym for /ADDRESS=ucb-address.
   address
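
   For example, to display the Memory Channel Port Descriptor Table
   for a Memory Channel device (the device name MCA0 is
   hypothetical), you might enter:

   SDA> SHOW DEVICE/PDT MCA0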
 

5  SHOW_GCT
   The SHOW GCT command has the following new qualifier:

   Qualifier     Meaning

   /CHILDREN     When used with /ADDRESS=n or /HANDLE=n, the
                 /CHILDREN qualifier causes SDA to display all nodes
                 in the configuration tree that are children of the
                 specified node.
 

5  SHOW_LOCK
   The SHOW LOCK command's qualifier /STATUS has the following new
   keyword:

   Keyword    Meaning

   DPC        Indicates a delete pending cache lock
 

5  SHOW_PFN_DATA
   The SHOW PFN_DATA command has the following new qualifier:

   Qualifier        Meaning

   /RAD             Displays data on the disposition of pages among
   [={n|ALL}]       the Resource Affinity Domain on applicable
                    systems
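
   For example, to display the disposition of pages among all
   Resource Affinity Domains, you might enter:

   SDA> SHOW PFN_DATA/RAD=ALL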
 

5  SHOW_POOL
   The SHOW POOL command has the following new qualifiers:

   Qualifier          Meaning

   /BRIEF             Displays only general information about pool
                      and its addresses.
   /CHECK             Checks all free packets for POOLCHECK-style
                      corruption, in exactly the same way that
                      the system does when generating a POOLCHECK
                      crashdump.
   /MAXIMUM_BYTES     Displays only the first n bytes of a pool
   [=n]               packet; default is 64 bytes.
   /STATISTICS [=     Displays usage statistics about each lookaside
   ALL]               list and the variable free list. For each
                      lookaside list, its queue header address,
                      packet size, the number of packets, attempts,
                      fails, and deallocations are displayed. (If
                      pool checking is disabled, the attempts,
                      fails, and deallocations are not displayed.)
                      For the variable free list, its queue header
                      address, the number of packets and the
                      size of the smallest and largest packets
                      are displayed. /STATISTICS can be further
                      qualified by using either /NONPAGED, /BAP, or
                      /PAGED to display statistics for a specified
                      pool area. (Note that for paged pool, only
                      variable free list statistics are displayed.)

                      If /STATISTICS is specified without the ALL
                      keyword, only active lookaside lists are
                      displayed. Use /STATISTICS = ALL to display
                      all lookaside lists.
   /UNUSED            Displays only variable free packets and
                      lookaside list packets, not used packets.
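
   For example, to check all free packets in pool for POOLCHECK-
   style corruption, and then to display usage statistics for all
   nonpaged lookaside lists, you might enter:

   SDA> SHOW POOL/CHECK
   SDA> SHOW POOL/STATISTICS=ALL/NONPAGED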
 

5  SHOW_PROCESS
   The SHOW PROCESS command has the following new qualifiers:

   Qualifier          Meaning

   /FID_ONLY          When used with /CHANNEL or /PROCESS_SECTION_
                      TABLE (/PST), the /FID_ONLY qualifier causes
                      SDA to not attempt to translate the FID
                      (File ID) to a file name when invoked with
                      ANALYZE/SYSTEM.
   /GSTX=index        When used with the /PAGE_TABLES qualifier, it
                      causes SDA to only display page table entries
                      for the specific global section.
   /IMAGES [= ALL]    By default, /IMAGES now only displays the
                      address of the image control block, the start
                      and end addresses of the image, the activation
                      code, the protected and shareable flags, the
                      image name, and the major and minor IDs of the
                       image. If the /IMAGES=ALL qualifier is used, it
                      also displays the base, end, image offset, and
                      section type for installed resident images in
                      use by this process.
   /NEXT              Causes SDA to locate the next valid process
                      in the process list and select that process.
                       If there are no further valid processes in the
                      process list, SDA returns an error.
   /PST               This is a synonym for /PROCESS_SECTION_TABLE.
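
   For example, to display only the page table entries of a
   specific global section for the current process (the index value
   2 is only an illustration), you might enter:

   SDA> SHOW PROCESS/PAGE_TABLES/GSTX=2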
 

5  SHOW_RESOURCE
   The SHOW RESOURCE command has the following new qualifier:

   Qualifier   Meaning

   /OWNED      Causes SDA to only display owned resources
 

5  SHOW_SPINLOCKS
   The SHOW SPINLOCKS command has the following new qualifier:

   Qualifier     Meaning

   /COUNTS       Produces a display of Acquire, Spin, and Wait
                 counts for each spinlock
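
   For example, to display the Acquire, Spin, and Wait counts for
   each spinlock, you might enter:

   SDA> SHOW SPINLOCKS/COUNTS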
 

5  SHOW_SUMMARY
   The SHOW SUMMARY command has the following new qualifier:

   Qualifier              Meaning

   /PROCESS_              Displays only processes with the specified
   NAME=process_name      process name. Wildcards can be used in
                          process_name, in which case SDA displays
                          all matching processes. The default
                          action is for SDA to display data for all
                          processes, regardless of process name.
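
   For example, to display data only for processes whose names
   begin with BATCH (the process name is only an illustration), you
   might enter:

   SDA> SHOW SUMMARY/PROCESS_NAME=BATCH*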
 

3  SDA_Commands_for_Spinlock_Tracing
   The OpenVMS Version 7.3 software release includes the new
   Spinlock Tracing utility. With the implementation of this
   utility, you can now determine which spinlocks are heavily used
   and who is acquiring and releasing them. The Spinlock
   Tracing utility allows a characterization of spinlock usage, as
   well as collection of performance data for a given spinlock on
   a per-CPU basis. The tracing ability can be enabled or disabled
   while the system is running, allowing the collection of spinlock
   data for a given period of time without system interruption.

   To support the Spinlock Tracing utility, SDA implements new
   commands and qualifiers, which are described in the following
   sections:
 

4  SPL_LOAD
   This command loads the SPL$DEBUG execlet. This must be done prior
   to starting spinlock tracing. It has no qualifiers.
 

4  SPL_SHOW_COLLECT
   This command displays the data collected for a specific spinlock.
   It has no qualifiers.
 

4  SPL_SHOW_TRACE
   This command displays spinlock tracing information. Table 5-3
   shows the qualifiers for this command.

   Table 5-3 Qualifiers for the SPL SHOW TRACE Command

   Qualifier          Meaning

   /SPINLOCK=spinlock Specifies the display of a specific
                      spinlock, for example, /SPINLOCK=LCKMGR or
                      /SPINLOCK=SCHED.
   /NOSPINLOCK        Specifies that no spinlock trace information
                      be displayed. If omitted, all spinlock trace
                      entries are decoded and displayed.
   /FORKLOCK=forklock Specifies the display of a specific
                      forklock, for example, /FORKLOCK=IOLOCK8 or
                      /FORKLOCK=IPL8.
   /NOFORKLOCK        Specifies that no forklock trace information
                      be displayed. If omitted, all fork trace
                      entries are decoded and displayed.
   /ACQUIRE           Displays any spinlock acquisitions.
   /NOACQUIRE         Ignores any spinlock acquisitions.
   /RELEASE           Displays any spinlock releases.
   /NORELEASE         Ignores any spinlock releases.
   /WAIT              Displays any spinwait operations.
   /NOWAIT            Ignores any spinwait operations.
   /FRKDSPTH          Displays all invocations of fork routines
                      within the fork dispatcher. This is the
                      default.
   /NOFRKDSPTH        Ignores all of the operations of the /FRKDSPTH
                      qualifier.
   /FRKEND            Displays all returns from fork routines within
                      the fork dispatcher. This is the default.
   /NOFRKEND          Ignores all operations of the /FRKEND
                      qualifier.
   /SUMMARY           Reads the entire trace buffer and displays a
                      summary of all spinlock and forklock activity.
                      It also displays the top ten callers.
   /CPU=n             Specifies the display of information for a
                      specific CPU only, for example, /CPU=5 or
                      /CPU=PRIMARY. By default, all trace entries
                      for all CPUs are displayed.
   /TOP=n             Displays the top n callers or fork PCs,
                      rather than the default of the top ten. This
                      qualifier is only useful when you also
                      specify the /SUMMARY qualifier.
 

4  SPL_START_COLLECT
   This command accumulates information for a specific spinlock.
   Table 5-4 shows the qualifiers for this command:

   Table 5-4 Qualifiers for the SPL START COLLECT Command

   Qualifier          Meaning

   /SPINLOCK=spinlock Specifies the tracing of a specific
                      spinlock, for example, /SPINLOCK=LCKMGR or
                      /SPINLOCK=SCHED
   /ADDRESS=n         Specifies the tracing of a specific spinlock
                      by address
 

4  SPL_START_TRACE
   This command enables spinlock tracing. Table 5-5 shows the
   qualifiers for this command.

   Table 5-5 Qualifiers for the SPL START TRACE Command

   Qualifier          Meaning

   /SPINLOCK=spinlock Specifies the tracing of a specific spinlock.
   /NOSPINLOCK        Disables spinlock tracing and does not collect
                      any spinlock data. If omitted, all spinlocks
                      are traced.
   /FORKLOCK=forklock Specifies the tracing of a specific
                      forklock, for example, /FORKLOCK=IOLOCK8 or
                      /FORKLOCK=IPL8.
   /NOFORKLOCK        Disables forklock tracing and does not collect
                      any forklock data. If omitted, all forks are
                      traced.
   /BUFFER=pages      Specifies the size of the trace buffer (in
                      Alpha page units). If omitted, it defaults
                      to 128 pages, which is equivalent to 1 MB.
   /ACQUIRE           Traces any spinlock acquisitions. This is the
                      default.
   /NOACQUIRE         Ignores any spinlock acquisitions.
   /RELEASE           Traces any spinlock releases. This is the
                      default.
   /NORELEASE         Ignores any spinlock releases.
   /WAIT              Traces any spinwait operations. This is the
                      default.
   /NOWAIT            Ignores any spinwait operations.
   /FRKDSPTH          Traces all invocations of fork routines within
                      the fork dispatcher. This is the default.
   /NOFRKDSPTH        Ignores all of the /FRKDSPTH operations.
   /FRKEND            Traces all returns from fork routines within
                      the fork dispatcher. This is the default.
   /NOFRKEND          Ignores all of the operations of the /FRKEND
                      qualifier.
   /CPU=n             Specifies the tracing of a specific CPU
                      only, for example, /CPU=5 or /CPU=PRIMARY.
                      By default, all CPUs are traced.
 

4  SPL_STOP_COLLECT
   This command stops the spinlock collection, but does not stop
   spinlock tracing. It has no qualifiers.
 

4  SPL_STOP_TRACE
   This command disables spinlock tracing, but it does not
   deallocate the trace buffer. It has no qualifiers.
 

4  SPL_UNLOAD
   This command unloads and cleans up the SPL$DEBUG execlet. Tracing
   is automatically disabled and the trace buffer deallocated. It
   has no qualifiers.
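
   A typical spinlock tracing session might look like the
   following, which traces the SCHED spinlock for a period of time
   and then displays a summary (the choice of spinlock is only an
   example):

   SDA> SPL LOAD
   SDA> SPL START TRACE/SPINLOCK=SCHED
   SDA> SPL STOP TRACE
   SDA> SPL SHOW TRACE/SUMMARY
   SDA> SPL UNLOAD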

   For more information, refer to the OpenVMS Alpha System Analysis
   Tools Manual.
 

3  System_Services
   The following table describes new and updated system services for
   OpenVMS Version 7.3.

   For additional information, refer to the OpenVMS System Services
   Reference Manual.

   System Service      Documentation Update

   $CHECK_PRIVILEGES   The description of the 'prvadr' argument has
                       been updated.
   $CLRAST             This service has been documented for Version
                       7.3.
   $DCLEXH             The description has been updated, and a BASIC
                       example has been added.
   $DELETE_INTRUSION   This service has been updated in support of
                       Clusterwide Intrusion.
   $DEVICE_PATH_SCAN   This is a new service in support of
                       Multipath.
   $DISMOU             The following item codes have been added:
                       DMT$M_MINICOPY_REQUIRED, DMT$M_MINICOPY_
                       OPTIONAL, and DMT$M_FORCE.
   $EXPREG             The text for condition value, SS$_ILLPAGCNT,
                       has been updated.
   $GETDVI             The item codes, MT3_DENSITY and MT3_
                       SUPPORTED, have been added.

                       The item codes, DVI$_FC_NODE_NAME, DVI$_FC_
                       PORT_NAME, and DVI$_WWID, have been added.

                       The description for the DVI$_MOUNTCNT item
                       code has been updated.
   $GETJPI             The following item codes have been added:
                       JPI$_RMS_DFMBC, JPI$_RMS_DFMBFIDX, JPI$_
                       RMS_DFMBFREL, JPI$_RMS_DFMBFSDK, JPI$_RMS_
                       DFMBFSMT, JPI$_RMS_DFMBFSUR, JPI$_RMS_DFNBC,
                       JPI$_RMS_EXTEND_SIZE, JPI$_RMS_FILEPROT, and
                       JPI$_RMS_PROLOGUE.

                       The following item codes have been added for
                       Multithreads support: JPI$_INITIAL_THREAD_
                       PID, JPI$_KT_COUNT, JPI$_MULTITHREAD, and
                       JPI$_THREAD_INDEX.

                       The code example has been updated for VAX and
                       Alpha usage.
   $GETRMI             This is a new service in support of
                       Performance API.
   $GETQUI             The item code, QUI$V_JOB_REQUEUE, has been
                       added.
   $GETSYI             The item code, SYI$_SERIAL_NUMBER, has been
                       added.
   $IO_PERFORM         The 'porint' argument in the format section
                        has been changed to 'devdata', to match the
                        C prototype.
   $MGBLSC             The text for the 'inadr' argument has been
                       updated, and the SS$_INVARG condition value
                       has been added.
   $MOUNT              The following item codes have been added:
                       MNT$M_MINICOPY_OPTIONAL, MNT$M_MINICOPY_
                       REQUIRED, MNT$M_REQUIRE_MEMBERS, and MNT$M_
                       VERIFY_LABELS.
   $PERSONA_QUERY      Tables for Common, General, and NT item codes
                       have been added.
   $PROCESS_SCAN       The following item codes have been added for
                       Multithreads support: PSCAN$_KT_COUNT and
                       PSCAN$_MULTITHREAD.
   $REGISTRY           This service is now documented in the OpenVMS
                        System Services Reference Manual: GETUTC-Z
                        and in online help.
   $SCAN_INTRUSION     This service has been updated in support of
                       Clusterwide Intrusion.
   $SCHED              The condition value, SS$_INCLASS, has been
                       added, and SS$_ILLSER has been deleted.
   $SET_DEVICE         This is a new service in support of
                       Multipath.
   $SET_SECURITY       The condition value, SS$_INVFILFOROP, has
                       been added.
   $SET_SYSTEM_EVENT   A new item code, SYSEVT$C_TDF_CHANGE, has
                       been added.
   $SHOW_INTRUSION     This service has been updated in support of
                       Clusterwide Intrusion.
   $WAKE               This service now accepts 64-bit addresses.
 

3  TCPIP_Files_for_SDA_READ
   Table 5-6 shows the TCP/IP files that contain global symbols for
   the VAX and Alpha SDA READ commands.

   Table 5-6 Modules Containing Global Symbols and Data Structures
             Used by SDA

   File                    Contents

   TCPIP$NET_GLOBALS.STB   Contains data structure definitions for
                           TCP/IP Internet driver, execlet, and ACP
                           data structures
   TCPIP$NFS_GLOBALS.STB   Contains data structure definitions for
                           TCP/IP NFS server
   TCPIP$PROXY_            Contains data structure definitions for
   GLOBALS.STB             TCP/IP proxy execlet
   TCPIP$PWIP_GLOBALS.STB  Contains data structure definitions
                           for TCP/IP PWIP driver, and ACP data
                           structures
   TCPIP$TN_GLOBALS.STB    Contains data structure definitions for
                           TCP/IP TELNET/RLOGIN server driver data
                           structures

   These files are available only if TCP/IP Services has been
   installed. They are found in SYS$SYSTEM, and are not
   automatically read in when you issue a READ/EXEC command.

   Table 5-7 shows the TCP/IP files that define global locations
   within the executive image for the VAX SDA command.

   Table 5-7 Modules Defining Global Locations Within the Executive
             Image

   File                    Contents

   TCPIP$BGDRIVER.STB      TCP/IP Internet driver
   TCPIP$INETACP.STB       TCP/IP Internet ACP
   TCPIP$INTERNET_         TCP/IP Internet execlet
   SERVICES.STB
   TCPIP$NFS_SERVICES.STB  Symbols for the TCP/IP NFS server
   TCPIP$PROXY_            Symbols for the TCP/IP proxy execlet
   SERVICES.STB
   TCPIP$PWIPACP.STB       TCP/IP PWIP ACP
   TCPIP$PWIPDRIVER.STB    TCP/IP PWIP driver
   TCPIP$TNDRIVER.STB      TCP/IP TELNET/RLOGIN server driver

   These files are available only if TCP/IP Services has been
   installed. They are found in SYS$SYSTEM, and are not
   automatically read in when you issue a READ/EXEC command.

   For more detailed information, refer to the OpenVMS VAX System
   Dump Analyzer Utility Manual and the OpenVMS Alpha System
   Analysis Tools Manual.
 

3  Visual_Threads_Version_2.1_(Alpha)
   Visual Threads is a unique tool that lets you debug and
   analyze multithreaded applications. You can use Visual Threads
   to automatically diagnose common problems associated with
   multithreading including deadlock, mutex, and thread usage
   errors. Also, you can use Visual Threads to monitor the thread-
   related performance of an application, helping you to identify
   bottlenecks or locking granularity problems. Visual Threads
   helps you identify problem areas in an application even if the
   application does not show specific symptoms.

   Visual Threads includes the following features:

   o  Collects detailed information about significant thread-related
      state changes ("events").

   o  Analyzes common threading problems automatically based on
      predefined rules applied to the event stream.

   o  Rule customization for application-specific parameters and
      actions.

   o  Automatic statistics gathering, by sampling the event stream.

   o  Categories of analysis: data protection errors (race
      conditions), deadlocks, programming errors, lock activity,
      performance.

   o  Graphical visualization of the frequency of thread-related
      events and thread state, snapshots of historical program
      state, and object-specific graphs for each collected
      statistic.

   o  Lock activity profiling to reveal where various types of lock
      activity are occurring in your application, including: Number
      of Locks, Contended Locks, Locked Time, and Wait Time. Lock
      activity is collected and displayed for individual locks.

   o  Summarizes the program run and provides reports.

   o  Threads Snapshot view displays the historical state of threads
      represented at specific times in the main thread overview
      graph.

   o  Find and Filter support in the Event Window to allow you to
      quickly locate particular events.

   o  CPU Utilization Window shows the CPU percentage used by each
      thread.

   o  Thread Transitions Window depicts each state change for a
      detailed view.

   For more information about these features, refer to the Visual
   Threads product documentation, which is available on the OpenVMS
   Alpha CD-ROM in directory [VISUAL_THREADS_021], or by using the
   online Help system.
 

2  Associated_Products_Features
   This topic describes new features of Compaq OpenVMS operating
   system associated products. For a listing and directory
   information on the OpenVMS associated products, refer to the
   Guide to OpenVMS Version 7.3 CD-ROMs.
 

3  Availability_Manager
   OpenVMS Version 7.3 contains Availability Manager Version 1.4.
   Soon after the release of OpenVMS Version 7.3, Availability
   Manager Version 2.0 will be announced on the following
   Availability Manager web site:

   http://www.openvms.compaq.com/openvms/products/availman/

   Version 2.0 will include the following new features:

   o  A new internal infrastructure allows new operating system
      features to be supported more easily and quickly.

   o  To support NUMA or OpenVMS "RADs" and to provide preliminary
      support for Wildfire/Galaxy, the following features have been
      implemented:

      -  A new Memory view of OpenVMS Alpha V7.3 nodes displays
         RAD-related data.

      -  When monitoring OpenVMS Alpha V7.3 nodes, Availability
         Manager displays a new single-process memory tab called
         "RAD Counters."

      -  The CPU modes display includes the RAD for a CPU.

      -  The CPU process list shows the home RAD for each process.

      -  The Node summary display now includes the number of
         configured RADs, the system serial number, and the Galaxy
         ID of a node, if any.

   o  Displays now include additional switched LAN and NISCA data,
      when available.

   o  New user-defined event notifications have been implemented.

   o  A built-in browser now displays online help.

   o  A built-in Java runtime environment is now included. (In other
      words, you no longer need to install Java on the system.)

   o  ODS-5 file system support has been added.

   o  A new PGFLQUOTA process-level "fix" has been implemented.

   o  A simpler mechanism for site-specific configuration setup now
      exists.
 

3  Compaq_Advanced_Server_(Alpha)
   The Compaq Advanced Server Version 7.3 for OpenVMS is supported
   on Alpha systems only, and is the only version of the Advanced
   Server for OpenVMS supported on OpenVMS Alpha Version 7.3. New
   features include the following:

   o  Member server role (allowing the server to participate in
      Windows 2000 native-mode domains)

   o  Greater compatibility with a wide variety of clients and
      legacy applications, with support of:

      -  Extended character sets, in addition to Extended File
         Specifications

      -  Alias file names, created for shared files whose names do
         not comply with the more restricted file naming conventions
         of legacy applications such as MS-DOS

   o  Remote Windows NT printer management (SpoolSS) for printers
      shared on the Advanced Server for OpenVMS

   o  DNS for resolving NetBIOS names

   o  Cluster load balancing using DNS to resolve the server cluster
      alias name

   o  PCSI for installing the server

   o  Windows 2000 client and domain support

   Earlier versions of the Advanced Server for OpenVMS (Versions
   7.2 and 7.2A) must be upgraded to Version 7.3 to run on OpenVMS
   Alpha Version 7.3. Both the current and earlier versions of the
   Advanced Server for OpenVMS also run on OpenVMS Alpha Version
   7.2-1.

   For information about installing Advanced Server for OpenVMS,
   refer to the Compaq Advanced Server for OpenVMS Server
   Installation and Configuration Guide provided with the kit
   documentation.

   To access Advanced Server V7.3 for OpenVMS on OpenVMS Alpha
   Version 7.3, clients must be licensed using the new Advanced
   Server V7.3 license PAK: PWLMXXXCA07.03. For more information,
   refer to the Compaq Advanced Server for OpenVMS Guide to Managing
   Advanced Server Licenses.

   For information about the latest release of the PATHWORKS for
   OpenVMS (Advanced Server) product, supported on both OpenVMS
   Alpha and VAX Version 7.3 systems, see Compaq PATHWORKS V6.0D for
   OpenVMS (Advanced Server).
 

3  Compaq_DECwindows_Motif
   The Compaq DECwindows Motif for OpenVMS (DECwindows Motif)
   Version 1.2-6 kit for OpenVMS VAX and OpenVMS Alpha is now
   available. DECwindows Motif Version 1.2-6 is a maintenance
   release that delivers a full range of changes and enhancements
   to your desktop. From faster batch scrolling to support for
   the Common Desktop Environment (CDE) screen saver and lock
   extensions, these changes are intended to provide you with a
   more efficient, flexible DECwindows Motif environment that is
   more in line with the OSF/Motif, MIT X11 Release 5 (X11 R5), and
   Common Desktop Environment (CDE) standards. For a full list of
   specific changes, enhancements, and corrections implemented in
   this release, refer to the Compaq DECwindows Motif for OpenVMS
   Release Notes.
 

3  Compaq_DCE_for_OpenVMS
   This section describes the enhancements in Compaq Distributed
   Computing Environment (DCE) for OpenVMS Version 7.3.
 

4  Compaq_DCE_Remote_Procedure_Call_(RPC)
   Beginning with OpenVMS Version 7.2-1, the NT LAN Manager security
   in DCE RPC is fully functional.
 

4  New_Ethernet_Device_Support
   If DCE RPC does not recognize the Ethernet device in your system,
   you can add one new device to the table of known devices by
   defining the systemwide logical name DCE$IEEE_802_DEVICE to be
   the device name of your Ethernet device.

   For example, to define a single DE500 Ethernet device, set the
   logical as follows:

   $ DEFINE/SYSTEM DCE$IEEE_802_DEVICE EWA0
 

4  For_More_DCE_Information
   Refer to the OpenVMS Version 7.3 Release Notes for important
   information about Compaq DCE for OpenVMS.

   If you have the full DCE kit installed, you can use online help
   for additional information:

   $ HELP DCE
   $ HELP DCE$SETUP
   $ HELP DCE_CDS
   $ HELP DCE_DTS
   $ HELP DCE_IDL
   $ HELP DCE_RPC
   $ HELP DCE_SECURITY
   $ HELP DCE_THREADS

   You can also refer to the following documentation:

   o  Compaq DCE for OpenVMS VAX and OpenVMS Alpha Installation and
      Configuration Guide (order number AA-PV4CE-TE)

   o  Compaq DCE for OpenVMS VAX and OpenVMS Alpha Product Guide
      (order number AA-PV4FE-TE)

   o  Compaq DCE for OpenVMS VAX and OpenVMS Alpha Reference Guide
      (order number AA-QHLZB-TE)
 

3  DECram_(Alpha)
   DECram Version 3.0 supports the OpenVMS Alpha platform only.
   The following new features can be found in this release:

   o  DECram for OpenVMS Alpha Version 3.0 supports the use of
      shared memory for the creation of RAM disks in an Adaptive
      Partitioned MultiProcessing (APMP) environment. This
      environment is also known as the Compaq Galaxy Software
      Architecture.

   o  On OpenVMS Version 7.2-1H1 or higher, the limit on the DECram
      file size has been extended to 4,294,967,296 blocks.

   o  DECram for OpenVMS Version 3.0 is fully compatible with
      DECram Version 2.3. There can be any combination of these
      two versions of DECram in a VMScluster.

   o  Multiple DECram devices can be members of a Volume Shadowing
      for OpenVMS shadow set and can be served via the Mass Storage
      Control Protocol (MSCP) or accessed via QIO.

   o  Volume Shadowing for OpenVMS will support shadow sets composed
      of DECram devices and other disk class devices.

   o  A new DECram command interface (DECRAM>) can be used for
      creating, initializing, and mounting DECram disks.

   DECram Version 3.0 and supporting documentation are included on
   the OpenVMS Version 7.3 CD-ROM in the [.DECRAM_030] directory.
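   The DECRAM> interface consolidates steps that previously required
   separate DCL commands. As a sketch only, a RAM disk has
   traditionally been created on Alpha with a sequence similar to
   the following; the device name, volume label, and exact
   qualifiers are illustrative, so refer to the DECram documentation
   for the actual syntax:

   ```
   $ MCR SYSMAN
   SYSMAN> IO CONNECT MDA0: /DRIVER=SYS$MDDRIVER /NOADAPTER
   SYSMAN> EXIT
   $ INITIALIZE MDA0: RAMDISK
   $ MOUNT/SYSTEM MDA0: RAMDISK
   ```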
   

3  Enterprise_Capacity_and_Performance_(ECP)
   Beginning with OpenVMS Version 7.3, the following Enterprise
   Capacity and Performance (ECP) management tools are provided
   at no additional cost. The ECP Data Collector for OpenVMS
   and ECP Performance Analyzer for OpenVMS will be available to
   customers who have a valid license to operate OpenVMS Version 6.2
   or later. These products are available from the following World
   Wide Web site:

    http://www.openvms.compaq.com/openvms/system_management.html

   Software Support Services for these products are sold separately
   and are available on an incremental basis. Please contact your
   Compaq Services representative for further details.
 

4  ECP_Collector
   ECP Collector for OpenVMS Version 5.4 gathers performance and
   capacity planning data on OpenVMS operating systems. OpenVMS data
   collection has three main criteria: the amount of performance
   data collected, the time interval, and the efficiency (the amount
   of overhead imposed on the system). ECP Collector for OpenVMS
   provides the following:

   o  Robust data collection set. It collects system metrics on over
      250 OpenVMS performance parameters.

   o  Flexible data collection. The sampling rate of data can be
      tuned down to sub-second intervals.

   o  Low overhead. Audited production systems have routinely shown
      that ECP Collector for OpenVMS has less than a 1.5% impact on
      the CPU.

   To satisfy the needs of enterprise management, ECP Collector for
   OpenVMS also contains an API that provides access to performance
   data. This interface converts the contents of the .CPC data file
   generated by the data collector into a formatted, comma-separated
   ASCII file that performance analysis and reporting programs can
   then use.
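   As an illustration, a comma-separated export of this kind can be
   consumed by ordinary scripting tools. The column names and sample
   values in this Python sketch are assumptions for the example; the
   actual fields depend on the metrics selected at collection time:

   ```python
   # Sketch: post-processing a comma-separated ASCII file of the kind
   # produced by the ECP Collector API. Column names and sample data
   # are illustrative assumptions, not the product's actual layout.
   import csv
   import io

   SAMPLE = """\
   Timestamp,CPU_Busy_Pct,Free_Memory_Pages,Direct_IO_Rate
   2001-01-15 10:00:00,42.5,120000,310
   2001-01-15 10:00:05,47.1,118500,295
   2001-01-15 10:00:10,39.8,121200,330
   """

   def average(csv_file, column):
       """Return the mean of a numeric column in a CSV export."""
       reader = csv.DictReader(csv_file)
       values = [float(row[column]) for row in reader]
       return sum(values) / len(values)

   if __name__ == "__main__":
       avg_cpu = average(io.StringIO(SAMPLE), "CPU_Busy_Pct")
       print(f"Average CPU busy: {avg_cpu:.1f}%")
   ```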
 

4  ECP_Performance_Analyzer
   Compaq's ECP Analyzer for OpenVMS Version 5.4, which runs under
   Motif, analyzes the data gathered by the ECP Collector for
   OpenVMS data collector. ECP Analyzer for OpenVMS provides the
   entry point into the data collector, allowing the user to
   select the sampling rate and to view the performance data in
   graphical format. The product provides historical information in
   standard graphs based on the requested time interval. Graphs
   are provided for all common performance areas that need to be
   analyzed, including CPU, memory, and I/O. ECP Analyzer for
   OpenVMS provides both graphic (Motif-based) and tabular reports
   for the data.
 

3  Kerberos_for_OpenVMS
   Kerberos Version 1.0 for OpenVMS Alpha and OpenVMS VAX, based on
   MIT Kerberos Version 5 Release 1.0.5, is included on the OpenVMS
   Version 7.3 distribution media. (Kerberos documentation provided
   by MIT is included on the OpenVMS documentation CD-ROM in HTML
   format.)

   Kerberos is a network authentication protocol designed to provide
   strong authentication for client/server applications by using
   secret-key cryptography.

   Kerberos was created by the Massachusetts Institute of Technology
   as a solution to network security problems. The Kerberos
   protocol uses strong cryptography so that a client can prove its
   identity to a server (and vice versa) across an insecure network
   connection. After a client and server have used Kerberos to prove
   their identity, they can also encrypt all of their communications
   to assure privacy and data integrity.
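   The secret-key model can be illustrated in greatly simplified
   form with a shared-key challenge/response. This Python sketch
   shows only the underlying principle; it is not the Kerberos
   protocol itself, which adds a trusted Key Distribution Center,
   tickets, and timestamps:

   ```python
   # Conceptual illustration only: proving identity with a shared
   # secret key, the principle underlying Kerberos. Key and names
   # here are invented for the example.
   import hashlib
   import hmac
   import os

   SHARED_KEY = b"client-and-server-secret"  # distributed out of band

   def respond(key: bytes, challenge: bytes) -> bytes:
       """Prove knowledge of the key without revealing it."""
       return hmac.new(key, challenge, hashlib.sha256).digest()

   # The server challenges the client with a fresh random value.
   challenge = os.urandom(16)
   client_proof = respond(SHARED_KEY, challenge)

   # The server verifies using its own copy of the key.
   assert hmac.compare_digest(client_proof, respond(SHARED_KEY, challenge))

   # Mutual authentication: the client challenges the server the same way.
   reverse = os.urandom(16)
   server_proof = respond(SHARED_KEY, reverse)
   assert hmac.compare_digest(server_proof, respond(SHARED_KEY, reverse))
   ```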

   General information about Kerberos is available from the
   following World Wide Web address:

   http://web.mit.edu/kerberos/www/
 

4  New_DCL_Command_KERBEROS
   OpenVMS Kerberos is an authentication security product. It
   provides user authentication for a wide range of communication
   programs, such as RLOGIN, TELNET, and FTP.

   Format:

   KERBEROS [/ADMIN | /USER]
            [/INTERFACE=[DECWINDOWS | CHARACTER_CELL]]

   Qualifiers:

   /ADMIN

   Activates the Kerberos administration utility for the selected
   interface.

   /USER (default)

   Activates the Kerberos user utility for the selected interface.

   /INTERFACE=CHARACTER_CELL (default)
   /INTERFACE=DECWINDOWS

   Activates the display device requested, if available.
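   For example, to start the Kerberos administration utility with
   the DECwindows interface, combine the qualifiers shown above:

   ```
   $ KERBEROS /ADMIN /INTERFACE=DECWINDOWS
   ```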

   For more information, refer to the Kerberos for OpenVMS
   Installation Guide and Release Notes.
 

3  Universal_LDAPv3_API_(Alpha)
   OpenVMS Version 7.3 includes the Lightweight Directory Access
   Protocol (LDAPv3) Application Programming Interface (API), which
   allows OpenVMS application developers, third-party applications,
   and users to access LDAP directories anywhere in the enterprise,
   intranet, extranet, or Internet, including directories hosted by
   non-OpenVMS systems. The multithreaded API automatically supports
   both 64-bit and 32-bit applications and is Component Object Model
   (COM) aware.

   The universal LDAPv3 API is certified with Microsoft's Active
   Directory, Novell's NDS and Compaq's X.500 Version 4.0, and
   supports various security mechanisms including Kerberos V5 and
   Public Key Infrastructure (PKI).

   The LDAPv3 kits are available from the following World Wide Web
   address:

   http://www.openvms.compaq.com/openvms/products/mgmt_agents/index.html

   For additional information on the LDAPv3 API, refer to the
   OpenVMS Utility Routines Manual.
 

3  Compaq_PATHWORKS_V6.0D_for_OpenVMS_(Advanced_Server)
   Compaq PATHWORKS V6.0D for OpenVMS (Advanced Server) is the only
   PATHWORKS for OpenVMS server supported on OpenVMS Version 7.3
   (in addition to Compaq Advanced Server V7.3 for OpenVMS). Earlier
   versions of PATHWORKS for OpenVMS servers must be upgraded. For
   more information, refer to the OpenVMS Version 7.3 Release Notes.

   You can run PATHWORKS V6.0D for OpenVMS (Advanced Server) on
   OpenVMS Alpha Version 7.3, 7.2-1, or 6.2, or on OpenVMS VAX
   Version 7.3, 7.2, or 6.2.

   To access PATHWORKS V6.0D for OpenVMS (Advanced Server) on
   OpenVMS Version 7.3, clients must be licensed using the license
   PAK PWLMXXXCA06.00, PWLMXXXCA07.02, or PWLMXXXCA07.03. For more
   information, refer to the Compaq Advanced Server for OpenVMS
   Guide to Managing Advanced Server Licenses.

   For information about the latest release, Compaq Advanced Server
   Version 7.3 for OpenVMS, see Compaq Advanced Server (Alpha).
 

3  Compaq_Service_Tools_and_DECevent
   Compaq Services' new web-based service tool functionality is
   known as Web-Based Enterprise Services (WEBES). The Compaq System
   Tools CD-ROM included in the OpenVMS Version 7.3 CD-ROM package
   includes WEBES. (WEBES includes the Compaq Crash Analysis Tool
   (CCAT) and Compaq Analyze components.) These are the supported
   service tools for all AlphaServer DS, ES, and GS systems running
   OpenVMS, except for the AlphaServer GS60 and GS140 platforms,
   which must continue to use the DECevent diagnostic tool.

   In addition to WEBES, the Compaq System Tools CD-ROM includes
   DECevent, DSNLINK, and the Revision and Configuration Management
   (RCM) tools.

   DECevent and WEBES can be used together in a cluster.

   Installation and documentation on the service tools are included
   on the Compaq System Tools CD-ROM. Use the following web site to
   access the most up-to-date service tool information:

   http://www.support.compaq.com/svctools/
 

3  TCPIP_Services_V5.1
   The Compaq TCP/IP Services for OpenVMS product is the Compaq
   implementation of the TCP/IP protocol suite and internet services
   for OpenVMS Alpha and OpenVMS VAX systems.

   TCP/IP Services provides a comprehensive suite of functions
   and applications that support industry-standard protocols for
   heterogeneous network communications and resource sharing.
 

4  New_Features_and_Changes
   The new features of Compaq TCP/IP Services for OpenVMS Version
   5.1 include:

   o  A new kernel, based on Compaq Tru64 UNIX Version 5.1.

   o  Support for Internet Protocol Version 6 (IPv6).

   o  DHCP client support.

   o  Xterminal support using XDM.

   o  Services that can be restarted individually.

   o  GATED enhancements.

   o  BIND dynamic updates management enhancements.

   o  Cluster failover for the BIND server.

   o  Cluster failover for the load broker.

   o  Updated SNMP that supports AgentX.

   o  SMTP enhancements, including:

      -  AntiSPAM (configuration to control mail relay)

      -  SMTP SFF (Send From File)

      -  SMTP outbound alias

   o  Metric server logicals that can be changed without restarting
      the Metric server.

   o  The DHCP server can be configured to dynamically update the
      BIND database.

   o  TELNET client enhancements to support SNDLOC and NAWS.

   o  Support for the NFS V3 protocol in addition to the NFS V2
      protocol in the NFS server.

   o  TCP options for improving certain performance characteristics.

   For more information about configuring and managing these
   services, refer to the Compaq TCP/IP Services for OpenVMS
   Management guide provided with the TCP/IP Services for OpenVMS
   Version 5.1 software.
 

4  Documentation
   For installation instructions, refer to the Compaq TCP/IP
   Services for OpenVMS Installation and Configuration manual.

   The TCP/IP Services for OpenVMS Release Notes provide version-
   specific information that supersedes the information in the
   documentation set. The features, restrictions, and corrections in
   this version of the software are described in the release notes.
   Always read the release notes before installing the software.

   The TCP/IP Services for OpenVMS documentation set includes the
   following new items:

   o  Compaq TCP/IP Services for OpenVMS Guide to IPv6

      This manual describes the IPv6 environment, the roles of
      systems in this environment, the types and function of the
      different IPv6 addresses, and how to configure TCP/IP Services
      to access the 6bone network.

   o  Compaq TCP/IP Services for OpenVMS Tuning and Troubleshooting

      This manual provides information about how to isolate the
      causes of network problems and how to tune the TCP/IP Services
      software for the best performance.

   o  Compaq TCP/IP Services for OpenVMS Management Command Quick
      Reference Card

      This reference card summarizes the TCP/IP management commands,
      organizing them by function and component.

   o  Compaq TCP/IP Services for OpenVMS UNIX Command Reference Card

      This reference card describes how to use UNIX utilities on
      OpenVMS to manage TCP/IP services.

   The following existing TCP/IP Services for OpenVMS manuals have
   been updated for V5.1:

   o  Compaq TCP/IP Services for OpenVMS Installation and
      Configuration

   o  Compaq TCP/IP Services for OpenVMS Management

   o  Compaq TCP/IP Services for OpenVMS Management Command
      Reference

   o  Compaq TCP/IP Services for OpenVMS Sockets API and System
      Services Programming

   o  Compaq TCP/IP Services for OpenVMS SNMP Programming and
      Reference