VMS Help
V73 Features, System Management Features

    This topic describes new features of interest to OpenVMS system
    managers.

  1 - AlphaServer GS Series

    OpenVMS Version 7.3 provides support for Compaq's AlphaServer
    GS80, GS160, and GS320 systems. This support, first introduced
    in OpenVMS Version 7.2-1H1, includes:

    o  OpenVMS support for hard and soft partitions (Galaxy) on
       AlphaServer GS160 and GS320 systems

    o  OpenVMS Resource Affinity Domain (RAD) support for
       applications

    o  CPU Online Replace

 1.1 - Hard and Soft Partitions

    Hard partitioning is a physical separation of computing resources
    by hardware-enforced access barriers. It is impossible to read
    or write across a hard partition boundary. There is no resource
    sharing between hard partitions.

    Soft partitioning is a separation of computing resources by
    software-controlled access barriers. Read and write access across
    a soft partition boundary is controlled by the operating system.
    OpenVMS Galaxy is an implementation of soft partitioning.

    The way customers choose to partition their systems depends on
    their computing environments and application requirements. For
    more information about using hard partitions and OpenVMS Galaxy,
    refer to the OpenVMS Alpha Partitioning and Galaxy Guide.

 1.2 - Resource Affinity Domain (RAD) Support

    OpenVMS Alpha Version 7.3 provides awareness of non-uniform
    memory access (NUMA) in OpenVMS memory management and process
    scheduling; this capability was first introduced in OpenVMS
    Version 7.2-1H1. It provides application support for resource
    affinity domains (RADs), to ensure that applications running on
    a single instance of OpenVMS across multiple quad building
    blocks (QBBs) can execute as efficiently as possible in a NUMA
    environment. A RAD is a set of hardware components (CPUs,
    memory, and I/O) with common access characteristics; it
    corresponds to a QBB in an AlphaServer GS160 or GS320 system.

    For more information about using the OpenVMS RAD support for
    application features, refer to the OpenVMS Alpha Partitioning and
    Galaxy Guide.

  2 - Daylight Savings Time

    System parameter AUTO_DLIGHT_SAV controls whether OpenVMS
    will automatically change system time to and from Daylight
    Savings Time when appropriate. A value of 1 tells OpenVMS to
    automatically make the change. The default is 0 (off). This is a
    static parameter.

    However, if you have a time service (such as DTSS), that time
    service continues to control time changes, and OpenVMS does not
    interfere. Do not enable automatic daylight savings time if you
    have another time service.
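
    Because AUTO_DLIGHT_SAV is a static parameter, one way to enable
    it (a sketch; the change takes effect at the next reboot) is to
    add the following line to SYS$SYSTEM:MODPARAMS.DAT and then run
    AUTOGEN:

     AUTO_DLIGHT_SAV = 1

     $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS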

    For more information, refer to the OpenVMS System Manager's
    Manual.

  3 - CPU Online Replace (Alpha)

    With OpenVMS Alpha Version 7.3, you can replace secondary CPUs on
    a running system without rebooting, which provides increased
    system maintainability and serviceability. This feature is
    supported only on AlphaServer GS160/320 systems. Note that
    replacing the primary CPU requires rebooting.

    To use this feature, you must first download console firmware
    Version 5.9B from the following location:

    http://ftp.digital.com/pub/DEC/Alpha/firmware/

    You can then use the following DCL commands to replace a CPU
    without rebooting:

    1. Direct OpenVMS to stop scheduling processes on the CPU:

       $ STOP/CPU n

       (n is the number of the CPU to be stopped.)

    2. Power off the running CPU:

       $ SET CPU/POWER=OFF n

    3. When the light on the CPU module has turned from green to
       amber, physically remove the CPU module from the system. Then
       install the replacement CPU module.

    4. Power on the CPU:

       $ SET CPU/POWER=ON  n

    OpenVMS automatically adds the CPU to the active set of
    processors.

    Note that the Galaxy Configuration Utility (GCU) also supports
    this capability.

  4 - Class Scheduler

    With OpenVMS Version 7.3, there is a new SYSMAN-based interface
    for class scheduling. This new class scheduler, implemented on
    both VAX and Alpha systems, gives you the ability to designate
    the amount of CPU time that a system's users may receive by
    placing the users into scheduling classes. Each class is assigned
    a percentage of the overall system's CPU time. As the system
    runs, the combined set of users in a class is limited to the
    percentage of CPU execution time allocated to their class. Users
    may receive additional CPU time if the /WINDFALL qualifier is
    enabled for their scheduling class. Enabling /WINDFALL allows
    the system to give a small amount of CPU time to a scheduling
    class when a CPU is idle, even if the scheduling class's
    allotted time has been depleted.

    To invoke the class scheduler, you use the SYSMAN interface.
    SYSMAN allows you to create, delete, modify, suspend, resume,
    and display scheduling classes. Table 4-1 shows the SYSMAN
    command CLASS_SCHEDULE and its subcommands.

    Table 4-1 SYSMAN command: class_schedule

    Sub-
    command     Meaning

    ADD         Creates a new scheduling class
    DELETE      Deletes a scheduling class
    MODIFY      Modifies the characteristics of a scheduling class
    SHOW        Shows the characteristics of a scheduling class
    SUSPEND     Temporarily suspends a scheduling class
    RESUME      Resumes a scheduling class

    By implementing the class scheduler using the SYSMAN interface,
    you create a permanent database that allows OpenVMS to class
    schedule processes automatically each time the system boots or
    reboots. This database resides on the system disk in
    SYS$SYSTEM:VMS$CLASS_SCHEDULE.DATA. SYSMAN creates this file
    as an RMS indexed file when the first scheduling class is created
    by the SYSMAN command, CLASS_SCHEDULE ADD.

    In a cluster environment, SYSMAN creates this database file in
    the SYS$COMMON root of the [SYSEXE] directory. As a result, the
    database file is shared among all cluster members. By using
    SYSMAN's SET ENVIRONMENT command, you can define scheduling
    classes either on a cluster-wide or per-node basis.
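
    For example, the following sketch creates a class cluster-wide;
    the class name and CPU percentage are illustrative, and users
    are assigned to the class with qualifiers such as /USERNAME or
    /ACCOUNT (see the OpenVMS System Management Utilities Reference
    Manual: M-Z for the exact syntax):

     $ RUN SYS$SYSTEM:SYSMAN
     SYSMAN> SET ENVIRONMENT/CLUSTER
     SYSMAN> CLASS_SCHEDULE ADD MAINCLASS /CPULIMIT=25 /WINDFALL
     SYSMAN> CLASS_SCHEDULE SHOW MAINCLASS
     SYSMAN> EXIT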

    If desired, a system manager (or application manager) can use
    the permanent class scheduler to place a process into a
    scheduling class at process creation time. When a new process
    is created, Loginout uses information from the SYSUAF file to
    determine whether the process belongs to a scheduling class
    and, if so, class schedules the process.

    By using the SYSMAN utility to perform class scheduling
    operations instead of the $SCHED system service, you gain the
    following benefits:

    o  You need not modify individual program images to control
       class scheduling. You can add, delete, and modify scheduling
       classifications from the SYSMAN utility.

    o  You can use SYSMAN to create a permanent class scheduling
       database file which allows processes to be class scheduled
       at process creation time and allows class definitions to be
       preserved in case of a system reboot.

    For more detailed information, refer to the following manuals:

       OpenVMS Programming Concepts Manual, Volume I
       OpenVMS DCL Dictionary: N-Z
       OpenVMS System Services Reference Manual: A-GETUAI

  5 - Dedicated CPU Lock Manager (Alpha)

    The Dedicated CPU Lock Manager is a new feature that improves
    performance on large SMP systems that have heavy lock manager
    activity. The feature dedicates a CPU to performing lock manager
    operations.

    Dedicating a CPU to lock manager operations provides the
    following advantages for overall system performance:

    o  Reduces the amount of MP_SYNCH time

    o  Provides good CPU cache utilization

 5.1 - Implementing

    For the Dedicated CPU Lock Manager to be effective, systems
    must have a high CPU count and a high amount of MP_SYNCH due
    to the lock manager. Use the MONITOR utility and the MONITOR
    MODE command to see the amount of MP_SYNCH. If your system has
    more than five CPUs and if MP_SYNCH is higher than 200%, your
    system may be able to take advantage of the Dedicated CPU Lock
    Manager. You can also use the spinlock trace feature in the
    System Dump Analyzer (SDA) to help determine if the lock manager
    is contributing to the high amount of MP_SYNCH time.
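
    For example, a quick check might look like the following (the
    interval shown is illustrative):

     $ MONITOR MODES /ALL /INTERVAL=5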

    The Dedicated CPU Lock Manager is implemented by the
    LCKMGR_SERVER process, which runs at priority 63. When the
    Dedicated CPU Lock Manager is turned on, this process runs in a
    compute-bound loop looking for lock manager work to perform.
    Because the process polls for work, it is always computable;
    and with a priority of 63, it never gives up the CPU, thus
    consuming a whole CPU.

    If the Dedicated CPU Lock Manager is running when a program calls
    either the $ENQ or $DEQ system services, a lock manager request
    is placed on a work queue for the Dedicated CPU Lock Manager.
    While a process waits for a lock request to be processed, the
    process spins in kernel mode at IPL 2. After the dedicated CPU
    processes the request, the status for the system service is
    returned to the process.

    The Dedicated CPU Lock Manager is dynamic and can be turned off
    if it provides no perceived benefit. When the Dedicated CPU Lock
    Manager is turned off, the LCKMGR_SERVER process is placed in a
    HIB (hibernate) state. Once started, the process cannot be
    deleted.

 5.2 - Enabling

    To use the Dedicated CPU Lock Manager, set the LCKMGR_MODE
    system parameter. Note the following about the LCKMGR_MODE system
    parameter:

    o  Zero (0) indicates the Dedicated CPU Lock Manager is off (the
       default).

    o  A number greater than zero (0) indicates the number of CPUs
       that should be active before the Dedicated CPU Lock Manager is
       turned on.

    Setting LCKMGR_MODE to a number greater than zero (0) triggers
    the creation of a detached process called LCKMGR_SERVER. The
    process is created, and it starts running if the number of active
    CPUs equals the number set by the LCKMGR_MODE system parameter.

    In addition, if the number of active CPUs is ever reduced below
    the required threshold, either by a STOP/CPU command or by CPU
    reassignment in a Galaxy configuration, the Dedicated CPU Lock
    Manager automatically turns off within one second, and the
    LCKMGR_SERVER process goes into a hibernate state. If enough
    CPUs are restarted to reach the threshold again, the
    LCKMGR_SERVER process resumes operations.
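
    For example, on a 16-CPU system, the following sketch (the value
    8 is illustrative) enables the feature whenever at least eight
    CPUs are active; because LCKMGR_MODE is dynamic, the change can
    be made on the running system:

     $ RUN SYS$SYSTEM:SYSGEN
     SYSGEN> USE ACTIVE
     SYSGEN> SET LCKMGR_MODE 8
     SYSGEN> WRITE ACTIVE
     SYSGEN> EXIT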

 5.3 - Using with Affinity

    The LCKMGR_SERVER process uses the affinity mechanism to bind
    itself to the lowest-numbered CPU other than the primary. You
    can change this by specifying another CPU ID with the
    LCKMGR_CPUID system parameter. The Dedicated CPU Lock Manager
    then attempts to use this CPU. If this CPU is not available, it
    reverts to the lowest-numbered CPU other than the primary.

    The following shows how to dynamically change the CPU used by the
    LCKMGR_SERVER process:

     $ RUN SYS$SYSTEM:SYSGEN
     SYSGEN> USE ACTIVE
     SYSGEN> SET LCKMGR_CPUID 2
     SYSGEN> WRITE ACTIVE
     SYSGEN> EXIT

    To verify the CPU dedicated to the lock manager, use the
    following SHOW SYSTEM command:

    $ SHOW SYSTEM/PROCESS=LCKMGR_SERVER

    This change applies to the currently running system. After a
    reboot, the lock manager reverts to the lowest-numbered CPU
    other than the primary. To permanently change the CPU used by
    the LCKMGR_SERVER process, set LCKMGR_CPUID in your
    MODPARAMS.DAT file.

    Compaq highly recommends that a process not be given hard
    affinity to the CPU used by the Dedicated CPU Lock Manager.
    With hard affinity, when such a process becomes computable, it
    cannot obtain any CPU time, because the LCKMGR_SERVER process
    is running at the highest possible real-time priority of 63.
    However, the LCKMGR_SERVER detects once per second if there are
    any computable processes that are set by the affinity mechanism
    to the dedicated lock manager CPU. If so, the LCKMGR_SERVER
    switches to a different CPU for one second to allow the waiting
    process to run.

 5.4 - Using with Fast Path Devices

    OpenVMS Version 7.3 also introduces Fast Path for SCSI and
    Fibre Channel controllers, in addition to the existing support
    for CIPCA adapters. The Dedicated CPU Lock Manager supports
    running the LCKMGR_SERVER process and Fast Path devices on the
    same CPU; however, this may not produce optimal performance.

    By default, the LCKMGR_SERVER process runs on the first available
    nonprimary CPU. Compaq recommends that the CPU used by the
    LCKMGR_SERVER process not have any Fast Path devices. This can
    be accomplished in either of the following ways:

    o  You can eliminate the first available nonprimary CPU as an
       available Fast Path CPU. To do so, clear the bit associated
       with the CPU ID from the IO_PREFER_CPUS system parameter.

       For example, suppose your system has eight CPUs, with CPU
       IDs 0 through 7, and four SCSI adapters that will use Fast
       Path. Clearing bit 1 in IO_PREFER_CPUS would result in the
       four SCSI devices being bound to CPUs 2, 3, 4, and 5. CPU 1,
       which is the default CPU the lock manager will use, will not
       have any Fast Path devices. (A sketch of this setting follows
       this list.)

    o  You can set the LCKMGR_CPUID system parameter to tell the
       LCKMGR_SERVER process to use a CPU other than the default. For
       the above example, setting this system parameter to 7 would
       result in the LCKMGR_SERVER process running on CPU 7. The Fast
       Path devices would by default be bound to CPUs 1, 2, 3, and 4.
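
    As a sketch of these two alternatives for the eight-CPU example
    above, the corresponding MODPARAMS.DAT entries (values are
    illustrative) might look like the following; run AUTOGEN to
    apply them:

     ! Alternative 1: remove CPU 1 from the Fast Path CPU set
     IO_PREFER_CPUS = 253     ! binary 11111101; bit 1 (CPU 1) clear
     !
     ! Alternative 2: move the lock manager to CPU 7 instead
     ! LCKMGR_CPUID = 7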

 5.5 - Using on AlphaServer GS Series Systems

    The new AlphaServer GS Series Systems (GS80, GS160, and the
    GS320) have NUMA memory characteristics. When using the Dedicated
    CPU Lock Manager on one of these systems, the best performance is
    obtained by utilizing a CPU and memory from within a single Quad
    Building Block (QBB).

    For OpenVMS Version 7.3, the Dedicated CPU Lock Manager does not
    yet have the ability to decide from where QBB memory should be
    allocated. However, there is a method to preallocate lock manager
    memory from the low QBB. This can be done with the LOCKIDTBL
    system parameter. This system parameter indicates the initial
    size of the Lock ID Table, along with the initial amount of
    memory to preallocate for lock manager data structures.

    To preallocate the proper amount of memory, this system
    parameter should be set to the highest number of locks plus
    resources on the system. The MONITOR LOCK command can provide
    this information. If MONITOR indicates that the system has
    100,000 locks and 50,000 resources, then setting LOCKIDTBL to
    the sum of these two values ensures that enough memory is
    initially allocated. Adding some additional overhead may also
    be beneficial; thus, setting LOCKIDTBL to 200,000 might be
    appropriate.

    If necessary, use the LCKMGR_CPUID system parameter to ensure
    that the LCKMGR_SERVER runs on a CPU in the low QBB.
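
    For example, MODPARAMS.DAT entries based on the numbers above
    (both values are illustrative) might be:

     LOCKIDTBL = 200000     ! expected locks + resources, plus headroom
     LCKMGR_CPUID = 1       ! a CPU ID located in the low QBB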

  6 - Enterprise Directory for e-Business (Alpha)

    OpenVMS Enterprise Directory for e-Business is a massively
    scalable directory service, providing both X.500 and LDAPv3
    services on OpenVMS Alpha with no separate license fee. OpenVMS
    Enterprise Directory for e-Business provides the following:

    o  A large percentage of the Fortune 500 already deploy the
       Compaq X.500 Directory Service (the forerunner of OpenVMS
       Enterprise Directory for e-Business)

    o  World's first 64-bit directory service

    o  Seamlessly combines the scalability and distribution features
       of X.500 with the popularity and interoperability offered by
       LDAPv3

    o  Inherent replication/shadowing features may be exploited to
       guarantee 100% up-time

    o  Systems distributed around the world can be managed from a
       single point

    o  Ability to store all types of authentication and security
       certificates across the enterprise accessible from any
       location

    o  Highly configurable schema

    o  In combination with AlphaServer technology and an in-memory
       database, delivers market-leading performance and low
       initiation time

    For more detailed information, refer to the Compaq OpenVMS e-
    Business Infrastructure CD-ROM package which is included in the
    OpenVMS Version 7.3 CD-ROM kit.

  7 - Extended File Cache (Alpha)

    The Extended File Cache (XFC) is a new virtual block data cache
    provided with OpenVMS Alpha Version 7.3 as a replacement for the
    Virtual I/O Cache.

    Similar to the Virtual I/O Cache, the XFC is a clusterwide, file
    system data cache. Both file system data caches are compatible
    and coexist in an OpenVMS Cluster.

    The XFC improves I/O performance with the following features that
    are not available with the Virtual I/O Cache:

    o  Read-ahead caching

    o  Automatic resizing of the cache

    o  Larger maximum cache size

    o  No limit on the number of closed files that can be cached

    o  Control over the maximum size of I/O that can be cached

    o  Control over whether cache memory is static or dynamic

    For more information, refer to the chapter on Managing Data
    Caches in the OpenVMS System Manager's Manual, Volume 2: Tuning,
    Monitoring, and Complex Systems.

  8 - ARB SUPPORT Qualifier Added to INSTALL Utility

    Beginning with OpenVMS Alpha Version 7.3, you can use the
    /ARB_SUPPORT qualifier with the ADD, CREATE, and REPLACE
    commands in the INSTALL utility. The /ARB_SUPPORT qualifier
    provides Access Rights Block (ARB) support to products that
    have not yet been updated to use the per-thread security
    Persona Security Block (PSB) data structure.
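
    For example, the following sketch installs an image with ARB
    support; the image name and the FULL keyword are illustrative,
    and the valid keyword values are listed in the reference manual
    cited below:

     $ INSTALL ADD SYS$SYSTEM:MYAPP.EXE /ARB_SUPPORT=FULL
     $ INSTALL LIST SYS$SYSTEM:MYAPP.EXE /FULL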

    This new qualifier is included in the INSTALL utility
    documentation in the OpenVMS System Management Utilities
    Reference Manual.

  9 - MONITOR Utility

    The MONITOR utility has two new class names, RLOCK and TIMER,
    which you can use as follows:

    o  MONITOR RLOCK: displays the dynamic lock remastering
       statistics of a node

    o  MONITOR TIMER: displays Timer Queue Entry (TQE) statistics
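
    For example, both new classes can be displayed together (the
    interval shown is illustrative):

     $ MONITOR RLOCK, TIMER /INTERVAL=10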

    These enhancements are discussed in more detail in the MONITOR
    section of the OpenVMS System Management Utilities Reference
    Manual and in the appendix that discusses MONITOR record formats
    in that manual.

    Also in the MONITOR utility, the display screens of MONITOR
    CLUSTER, PROCESSES/TOPCPU, and SYSTEM now have new and higher
    scale values. Refer to the OpenVMS System Management Utilities
    Reference Manual: M-Z for more information.

  10 - OpenVMS Cluster Systems

    The following OpenVMS Cluster features are discussed in this
    section:

    o  Clusterwide intrusion detection

    o  Fast Path for SCSI and Fibre Channel (Alpha)

    o  Floppy disks served in an OpenVMS Cluster system (Alpha)

    o  New Fibre Channel support (Alpha)

    o  Switched LAN as a cluster interconnect

    o  Warranted and migration support

 10.1 - Clusterwide Intrusion Detection

    OpenVMS Version 7.3 includes clusterwide intrusion detection,
    which extends protection against attacks of all types throughout
    the cluster. Intrusion data and information from each system are
    integrated to protect the cluster as a whole. Member systems
    running versions of OpenVMS prior to Version 7.3 and member
    systems that disable this feature are protected individually
    and do not participate in the clusterwide sharing of intrusion
    information.

    You can modify the SECURITY_POLICY system parameter on the
    member systems in your cluster to maintain either a local or a
    clusterwide intrusion database of unauthorized attempts and the
    state of any intrusion events.

    If bit 7 in SECURITY_POLICY is cleared, all cluster members
    are made aware when a system is under attack or has any
    intrusion events recorded. Events recorded on one system can
    cause another system in the cluster to take restrictive action.
    (For example, the person attempting to log in is monitored more
    closely and limited to a certain number of login retries within
    a limited period of time. Once a person exceeds either the
    retry or time limitation, he or she cannot log in.) The default
    for bit 7 in SECURITY_POLICY is clear.
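
    For example, to keep a node's intrusion database local rather
    than clusterwide, bit 7 (decimal 128) can be set in
    SECURITY_POLICY. The following sketch uses SYSGEN; nnn is a
    placeholder for the node's current value with bit 7 set, and
    WRITE CURRENT makes the change take effect at the next reboot:

     $ RUN SYS$SYSTEM:SYSGEN
     SYSGEN> USE CURRENT
     SYSGEN> SHOW SECURITY_POLICY
     SYSGEN> SET SECURITY_POLICY nnn
     SYSGEN> WRITE CURRENT
     SYSGEN> EXIT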

    For more information on the system services $DELETE_INTRUSION,
    $SCAN_INTRUSION, and $SHOW_INTRUSION, refer to the OpenVMS System
    Services Reference Manual.

    For more information on the DCL commands DELETE/INTRUSION_RECORD
    and SHOW INTRUSION, refer to the OpenVMS DCL Dictionary.

    For more information on clusterwide intrusion detection, refer to
    the OpenVMS Guide to System Security.

 10.2 - Fast Path for SCSI and Fibre Channel (Alpha)

    Fast Path for SCSI and Fibre Channel (FC) is a new feature in
    OpenVMS Version 7.3. This feature improves the performance of
    Symmetric Multi-Processing (SMP) machines that use certain SCSI
    ports or Fibre Channel.

    In previous versions of OpenVMS, SCSI and FC I/O completion was
    processed solely by the primary CPU. When Fast Path is enabled,
    the I/O completion processing can occur on all the processors in
    the SMP system. This substantially increases the potential I/O
    throughput on an SMP system, and helps to prevent the primary CPU
    from becoming saturated.

    See the FAST_PATH_PORTS topic for information about the new
    SYSGEN parameter that controls Fast Path for SCSI and FC.

 10.3 - Floppy Disks Served

    Until this release, the MSCP server could not serve floppy
    disks. Beginning with OpenVMS Version 7.3, floppy disks can be
    served in an OpenVMS Cluster system through MSCP.

    For floppy disks to be served in an OpenVMS Cluster system,
    floppy disk names must conform to the naming conventions for
    port allocation class names. For more information about device
    naming with port allocation classes, refer to the OpenVMS Cluster
    Systems manual.

    OpenVMS VAX clients can access floppy disks served from OpenVMS
    Alpha Version 7.3 MSCP servers, but OpenVMS VAX systems cannot
    serve floppy disks. Client systems can be any version that
    supports port allocation classes.

 10.4 - New Fibre Channel Support (Alpha)

    Support for new Fibre Channel hardware, larger configurations,
    Fibre Channel Fast Path, and larger I/O operations is included in
    OpenVMS Version 7.3. The benefits include:

    o  Support for a broader range of configurations: the lower cost
       HSG60 controller supports two SCSI buses instead of six SCSI
       buses supported by the HSG80; multiple DSGGB 16-port Fibre
       Channel switches enable very large configurations.

    o  Backup operations to tape, enabled by the new Modular Data
       Router (MDR), using existing SCSI tape subsystems

    o  Distances up to 100 kilometers between systems, enabling
       more configuration choices for multiple-site OpenVMS Cluster
       systems

    o  Better performance for certain types of I/O due to Fibre
       Channel Fast Path and support for larger I/O requests

    The following new Fibre Channel hardware has been qualified on
    OpenVMS Version 7.2-1 and on OpenVMS Version 7.3:

    o  KGPSA-CA host adapter

    o  DSGGB-AA switch (8 ports) and DSGGB-AB switch (16 ports)

    o  HSG60 storage controller (MA6000 storage subsystem)

    o  Compaq Modular Data Router (MDR)

    OpenVMS now supports Fibre Channel fabrics. A Fibre Channel
    fabric is multiple Fibre Channel switches connected together.
    (A Fibre Channel fabric is also known as cascaded switches.)

    Configurations that use Fibre Channel fabrics can be extremely
    large. Distances up to 100 kilometers are supported in a
    multisite OpenVMS Cluster system. OpenVMS supports the Fibre
    Channel SAN configurations described in the Compaq StorageWorks
    Heterogeneous Open SAN Design Reference Guide, available at the
    following Compaq web site:

    http://www.compaq.com/storage

    Enabling Fast Path for Fibre Channel can substantially increase
    the I/O throughput on an SMP system. For more information about
    this new feature, see Fast Path for SCSI and Fibre Channel
    (Alpha).

    Prior to OpenVMS Alpha Version 7.3, I/O requests larger than
    127 blocks were segmented by the Fibre Channel driver into
    multiple I/O requests. Segmented I/O operations generally have
    lower performance than one large I/O. In OpenVMS Version 7.3,
    I/O requests up to and including 256 blocks are done without
    segmenting.

    For more information about Fibre Channel usage in OpenVMS Cluster
    configurations, refer to the Guidelines for OpenVMS Cluster
    Configurations.

 10.4.1 - Tape Support

    Fibre Channel tape functionality refers to the support of SCSI
    tapes and SCSI tape libraries in an OpenVMS Cluster system with
    shared Fibre Channel storage. The SCSI tapes and libraries are
    connected to the Fibre Channel by a Fibre-to-SCSI bridge known as
    the Modular Data Router (MDR).

    For configuration information, refer to the Guidelines for
    OpenVMS Cluster Configurations.

 10.5 - LANs as Cluster Interconnects

    An OpenVMS Cluster system can use several LAN interconnects for
    node-to-node communication, including Ethernet, Fast Ethernet,
    Gigabit Ethernet, ATM, and FDDI.

    PEDRIVER, the cluster port driver, provides cluster
    communications over LANs using the NISCA protocol. Originally
    designed for broadcast media, PEDRIVER has been redesigned to
    exploit all the advantages offered by switched LANs, including
    full duplex transmission and more complex network topologies.

    Users of LANs for their node-to-node cluster communication will
    derive the following benefits from the redesigned PEDRIVER:

    o  Removal of restrictions for using Fast Ethernet, Gigabit
       Ethernet, and ATM as cluster interconnects

    o  Improved performance due to better path selection, multipath
       load distribution, and support of full duplex communication

    o  Greater scalability

    o  Ability to monitor, manage, and display information needed to
       diagnose problems with cluster use of LAN adapters and paths

 10.5.1 - SCA Control Program

    The SCA Control Program (SCACP) utility is designed to monitor
    and manage cluster communications. (SCA is the abbreviation
    of Systems Communications Architecture, which defines the
    communications mechanisms that enable nodes in an OpenVMS Cluster
    system to communicate.)

    In OpenVMS Version 7.3, you can use SCACP to manage SCA use
    of LAN paths. In the future, SCACP might be used to monitor
    and manage SCA communications over other OpenVMS Cluster
    interconnects.

    This utility is described in more detail in a new chapter in the
    OpenVMS System Management Utilities Reference Manual: M-Z.

 10.5.2 - Packet Loss Error

    Prior to OpenVMS Version 7.3, an SCS virtual circuit closure
    was the first indication that a LAN path had become unusable. In
    OpenVMS Version 7.3, whenever the last usable LAN path is losing
    packets at an excessive rate, PEDRIVER displays the following
    console message:

    %PEA0, Excessive packet losses on LAN Path from local-device-name -
     _  to device-name on REMOTE NODE node-name

    This message is displayed after PEDRIVER performs an excessively
    high rate of packet retransmissions on the LAN path consisting of
    the local device, the intervening network, and the device on the
    remote node. The message indicates that the LAN path has degraded
    and is approaching, or has reached, the point where reliable
    communications with the remote node are no longer possible. It is
    likely that the virtual circuit to the remote node will close if
    the losses continue. Furthermore, continued operation with high
    LAN packet losses can result in a significant loss in performance
    because of the communication delays resulting from the packet
    loss detection timeouts and packet retransmission.

    The corrective steps to take are:

    1. Check the local and remote LAN device error counts to see if a
       problem exists on the devices. Issue the following commands on
       each node:

       $ SHOW DEVICE local-device-name
       $ MC SCACP
       SCACP> SHOW LAN device-name
       $ MC LANCP
       LANCP> SHOW DEVICE device-name/COUNT

    2. If device error counts on the local devices are within normal
       bounds, contact your network administrators to request that
       they diagnose the LAN path between the devices.

       If necessary, contact your Compaq support representative for
       assistance in diagnosing your LAN path problems.

    For additional PEDRIVER troubleshooting information, see Appendix
    F of the OpenVMS Cluster Systems manual.

 10.6 - Warranted and Migration Support

    Compaq provides two levels of support, warranted and migration,
    for mixed-version and mixed-architecture OpenVMS Cluster systems.

    Warranted support means that Compaq has fully qualified the two
    versions coexisting in an OpenVMS Cluster and will answer all
    problems identified by customers using these configurations.

    Migration support is a superset of the Rolling Upgrade support
    provided in earlier releases of OpenVMS and is available for
    mixes that are not warranted. Migration support means that Compaq
    has qualified the versions for use together in configurations
    that are migrating in a staged fashion to a newer version of
    OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted
    against these configurations will be answered by Compaq. However,
    in exceptional cases, Compaq may request that you move to a
    warranted configuration as part of answering the problem.

    Compaq supports only two versions of OpenVMS running in a cluster
    at the same time, regardless of architecture. Migration support
    helps customers move to warranted OpenVMS Cluster version mixes
    with minimal impact on their cluster environments.

    The following table shows the level of support provided for all
    possible version pairings.

    Table 4-2 Warranted and Migration Support

                        Alpha/VAX    Alpha V7.2-xxx/
                        V7.3         VAX V7.2         Alpha/VAX V7.1

    Alpha/VAX V7.3      WARRANTED    Migration        Migration

    Alpha V7.2-xxx/
    VAX V7.2            Migration    WARRANTED        Migration

    Alpha/VAX V7.1      Migration    Migration        WARRANTED

    In a mixed-version cluster with OpenVMS Version 7.3, you must
    install remedial kits on earlier versions of OpenVMS. For
    OpenVMS Version 7.3, two new features, XFC and Volume Shadowing
    minicopy, cannot be run on any node in a mixed-version cluster
    unless all nodes running earlier versions of OpenVMS have
    installed the required remedial kit or upgrade. Remedial kits
    are available now for XFC. An upgrade that supports minicopy
    for systems running OpenVMS Version 7.2-xx will be made
    available soon after the release of OpenVMS Version 7.3.

    For a complete list of required remedial kits, refer to the
    OpenVMS Version 7.3 Release Notes.

  11 - SMP Performance Improvements (Alpha)

    OpenVMS Alpha Version 7.3 contains software changes that
    improve SMP scaling. Although these changes were designed for
    applications running on the new AlphaServer GS-series systems,
    many of them benefit all customer applications. The OpenVMS SMP
    performance improvements in Version 7.3 include the following:

    o  Improved MUTEX Acquisition

       Mutexes are used for synchronization of numerous events on
       OpenVMS. The most common use of a mutex is for
       synchronization of the logical name database and the I/O
       database. In releases prior to OpenVMS Alpha Version 7.3, a
       mutex was manipulated with the SCHED spinlock held. Because
       the SCHED spinlock is a heavily used spinlock with high
       contention on large SMP systems, and because only a single
       CPU could manipulate a mutex at a time, bottlenecks often
       occurred.

       OpenVMS Alpha Version 7.3 changes the way mutexes are
       manipulated. The mutex itself is now manipulated with atomic
       instructions, so multiple CPUs can manipulate different
       mutexes in parallel. In most cases, the need to acquire the
       SCHED spinlock has been avoided. SCHED still needs to be
       acquired when a process must be placed into a mutex wait
       state or when mutex waiters must be woken.

    o  Improved Process Scheduling

       Changes made to the OpenVMS process scheduler reduce
       contention on the SCHED spinlock. Prior to OpenVMS Version
       7.3, when a process became computable, the scheduler
       released all idle CPUs to attempt to execute the process. On
       NUMA systems, all idle CPUs in the RAD were released. These
       idle CPUs competed for the SCHED spinlock, which added to
       the contention on it. As of OpenVMS Version 7.3, the
       scheduler releases only a single CPU. In addition, the
       scheduler releases high-numbered CPUs first, which has the
       effect of avoiding scheduling processes on the primary CPU
       when possible.

       To use the modified scheduler, you must set the system
       parameter SCH_CTLFLAGS to 1. This parameter is dynamic. (A
       sketch of setting it appears after this list.)

    o  Improved SYS$RESCHED

       A number of applications and libraries use the SYS$RESCHED
       system service, which requests that the CPU reschedule
       another process. In releases prior to OpenVMS Version 7.3,
       this system service locked the SCHED spinlock and attempted
       to reschedule another computable process on the CPU. When
       heavy contention existed on the SCHED spinlock, using the
       SYS$RESCHED system service therefore increased resource
       contention. As of OpenVMS Version 7.3, the SYS$RESCHED
       system service attempts to acquire the SCHED spinlock with a
       NOSPIN routine. Thus, if the SCHED spinlock is currently
       locked, the thread does not spin; it returns to the caller
       immediately.

    o  Lock manager improvements

       There are several changes to the lock manager. For OpenVMS
       Clusters, the lock manager no longer uses IOLOCK8 for
       synchronization. It now uses the LCKMGR spinlock, which allows
       locking and I/O operations to occur in parallel.

       Remaster operations can be performed much faster now. The
       remaster code sends large messages with data from many locks
       when remastering as opposed to sending a single lock per
       message.

       The lock manager supports a Dedicated CPU mode. In cases
       where there is very heavy contention on the LCKMGR spinlock,
       dedicating a single CPU to performing locking operations
       provides a much more efficient mechanism.

    o  Enhanced Spinlock Tracing capability

       The spinlock trace capability, which first shipped in V7.2-
       1H1, can now trace forklocks. In systems with heavy contention
       on the IOLOCK8 spinlock, much of the contention occurs in fork
       threads. Collecting traditional spinlock data only indicates
       that the fork dispatcher locked IOLOCK8.

       As of OpenVMS Version 7.3, the spinlock trace has a hook in
       the fork dispatcher code. This allows the trace to report
       the routine that is called by the fork dispatcher, which
       indicates the specific devices that contribute to heavy
       IOLOCK8 contention.

    o  Mailbox driver change

       Prior to OpenVMS Version 7.3, the mailbox driver FDT routines
       called a routine that locked the MAILBOX spinlock and
       delivered any required attention ASTs. In most cases, this
       routine did not require any attention ASTs to be delivered.
       Because the OpenVMS code that makes these calls already
       holds the MAILBOX spinlock, the spinlock acquisition was
       also an unneeded second acquisition of the spinlock.

       As of OpenVMS Version 7.3, OpenVMS first checks whether any
       ASTs need to be delivered before calling the routine. This
       avoids both the call overhead and the overhead of relocking
       the MAILBOX spinlock that was already owned.
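
    As noted in the process scheduling item above, the modified
    scheduler is enabled by setting the dynamic system parameter
    SCH_CTLFLAGS to 1; for example:

     $ RUN SYS$SYSTEM:SYSGEN
     SYSGEN> USE ACTIVE
     SYSGEN> SET SCH_CTLFLAGS 1
     SYSGEN> WRITE ACTIVE
     SYSGEN> EXIT

    To make the setting permanent, also add SCH_CTLFLAGS = 1 to
    MODPARAMS.DAT.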

  12 - SYSMAN Commands and Qualifiers

    The SYSMAN utility has the following new commands:

    o  CLASS_SCHEDULE commands

       The class scheduler provides the ability to limit the amount
       of CPU time that a system's users receive by placing users in
       scheduling classes.

       Command                 Description

       CLASS_SCHEDULE ADD      Creates a new scheduling class
       CLASS_SCHEDULE DELETE   Deletes a scheduling class
       CLASS_SCHEDULE MODIFY   Modifies the characteristics of a
                               scheduling class
       CLASS_SCHEDULE RESUME   Resumes a scheduling class that has
                               been suspended
       CLASS_SCHEDULE SHOW     Displays the characteristics of a
                               scheduling class
       CLASS_SCHEDULE SUSPEND  Temporarily suspends a scheduling
                               class

    o  IO FIND_WWID and IO REPLACE_WWID (Alpha only)

       These commands support Fibre Channel tapes, which are
       discussed in Tape Support.

       Command                 Description

       IO FIND_WWID            Detects all previously undiscovered
                               tapes and medium changers
       IO REPLACE_WWID         Replaces one worldwide identifier
                               (WWID) with another

    o  POWER_OFF qualifier for SYSMAN command SHUTDOWN NODE

       The /POWER_OFF qualifier specifies that the system is to power
       off after shutdown is complete.
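
       For example, to shut down and power off a single node (the
       node name is illustrative):

        $ RUN SYS$SYSTEM:SYSMAN
        SYSMAN> SET ENVIRONMENT/NODE=NODE01
        SYSMAN> SHUTDOWN NODE/POWER_OFF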

    For more information, refer to the SYSMAN section of the OpenVMS
    System Management Utilities Reference Manual: M-Z.

  13 - New System Parameters

    This section contains definitions of system parameters that are
    new in OpenVMS Version 7.3.

 13.1 - AUTO_DLIGHT_SAV

    AUTO_DLIGHT_SAV is set to either 1 or 0. The default is 0.

    If AUTO_DLIGHT_SAV is set to 1, OpenVMS automatically makes the
    change to and from daylight saving time.

 13.2 - FAST_PATH_PORTS

    FAST_PATH_PORTS is a static parameter that deactivates Fast Path
    for specific drivers.

    FAST_PATH_PORTS is a 32-bit mask. If the value of a bit in the
    mask is 1, Fast Path is disabled for the driver corresponding to
    that bit. A value of -1 specifies that Fast Path is disabled for
    all drivers that the FAST_PATH_PORTS parameter controls.

    Bit position zero controls Fast Path for PKQDRIVER (for parallel
    SCSI), and bit position one controls Fast Path for FGEDRIVER
    (for Fibre Channel). Currently, the default setting for FAST_
    PATH_PORTS is 0, which means that Fast Path is enabled for both
    PKQDRIVER and FGEDRIVER.
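
    For example, a MODPARAMS.DAT entry such as the following
    (illustrative) disables Fast Path for FGEDRIVER while leaving
    it enabled for PKQDRIVER; run AUTOGEN to apply it:

     FAST_PATH_PORTS = 2     ! bit 1 set: disable Fast Path for FGEDRIVER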

    In addition, note the following:

    o  CI drivers are not controlled by FAST_PATH_PORTS. Fast Path
       for CI is enabled and disabled exclusively by the FAST_PATH
       system parameter.

    o  FAST_PATH_PORTS is relevant only if the FAST_PATH system
       parameter is enabled (equal to 1). Setting FAST_PATH to zero
       has the same effect as setting FAST_PATH_PORTS to -1.

    For additional information, see FAST_PATH and IO_PREFER_CPUS.

 13.3 - GLX_SHM_REG

    On Galaxy systems, GLX_SHM_REG is the number of shared memory
    region structures configured into the Galaxy Management Database
    (GMDB). If you set GLX_SHM_REG to 0, the default number of
    shared memory regions is configured.

 13.4 - LCKMGR_CPUID (Alpha)

    The LCKMGR_CPUID parameter controls the CPU that the Dedicated
    CPU Lock Manager runs on. This is the CPU that the LCKMGR_SERVER
    process will utilize if you turn this feature on with the LCKMGR_
    MODE system parameter.

    If the specified CPU ID is either the primary CPU or a
    nonexistent CPU, the LCKMGR_SERVER process will utilize the
    lowest nonprimary CPU.

    LCKMGR_CPUID is a DYNAMIC parameter.

    For more information, see the LCKMGR_MODE system parameter.

 13.5 - LCKMGR_MODE (Alpha)

    The LCKMGR_MODE parameter controls usage of the Dedicated CPU
    Lock Manager. Setting LCKMGR_MODE to a number greater than zero
    (0) indicates the number of CPUs that must be active before the
    Dedicated CPU Lock Manager is turned on.

    The Dedicated CPU Lock Manager performs all locking operations
    on a single dedicated CPU. This can improve system performance
    on large SMP systems with high MP_SYNCH time associated with
    the lock manager.

    For more information about usage of the Dedicated CPU Lock
    Manager, see the OpenVMS Performance Management manual.

    Specify one of the following:

    Value    Description

    0        Indicates the Dedicated CPU Lock Manager is off. (The
             default.)
    >0       Indicates the number of CPUs that must be active before
             the Dedicated CPU Lock Manager is turned on.

    LCKMGR_MODE is a DYNAMIC parameter.

 13.6 - NPAGECALC

    NPAGECALC controls whether the system automatically calculates
    the initial size for nonpaged dynamic memory.

    Compaq sets the default value of NPAGECALC to 1 only during the
    initial boot after an installation or upgrade. When the value of
    NPAGECALC is 1, the system calculates an initial value for the
    NPAGEVIR and NPAGEDYN system parameters. This calculated value is
    based on the amount of physical memory in the system.

    NPAGECALC's calculations do not reduce the values of NPAGEVIR and
    NPAGEDYN from the values you see or set at the SYSBOOT prompt.
    However, NPAGECALC's calculation might increase these values.

    AUTOGEN sets NPAGECALC to 0. NPAGECALC should always remain 0
    after AUTOGEN has determined more refined values for the NPAGEDYN
    and NPAGEVIR system parameters.

 13.7 - NPAGERAD (Alpha)

    NPAGERAD specifies the total number of bytes of nonpaged pool
    that will be allocated for Resource Affinity Domains (RADs) other
    than the base RAD. For platforms that have no RADs, NPAGERAD
    is ignored. Notice that NPAGEDYN specifies the total amount of
    nonpaged pool for all RADs.

    Also notice that the OpenVMS system might round the specified
    values higher to an even number of pages for each RAD, which
    prevents the base RAD from having too little nonpaged pool. For
    example, if the hardware is an AlphaServer GS160 with 4 RADs:

    NPAGEDYN = 6291456 bytes
    NPAGERAD = 2097152 bytes

    In this case, the OpenVMS system allocates a total of
    approximately 6,291,456 bytes of nonpaged pool. Of this amount,
    the system divides 2,097,152 bytes among the RADs that are not
    the base RAD. The system then assigns the remaining 4,194,304
    bytes to the base RAD.

 13.8 - RAD_SUPPORT (Alpha)

    RAD_SUPPORT enables RAD-aware code to be executed on systems
    that support Resource Affinity Domains (RADs); for example,
    AlphaServer GS160 systems.

    A RAD is a set of hardware components (CPUs, memory, and I/O)
    with common access characteristics. For more information
    about using OpenVMS RAD features, refer to the OpenVMS Alpha
    Partitioning and Galaxy Guide.

 13.9 - SHADOW_MAX_UNIT

    SHADOW_MAX_UNIT specifies the maximum number of shadow sets that
    can exist on a node. The setting must be equal to or greater
    than the number of shadow sets you plan to have on a system.
    Dismounted shadow sets, unused shadow sets, and shadow sets with
    no write bitmaps allocated to them are included in the total.

    This system parameter is not dynamic; that is, a reboot is
    required when you change the setting.

    The default setting on OpenVMS Alpha systems is 500; on OpenVMS
    VAX systems, the default is 100. The minimum value is 10, and the
    maximum value is 10,000.

    Note that this parameter does not affect the naming of shadow
    sets. For example, with the default value of 100, a device name
    such as DSA999 is still valid.

 13.10 - VCC_MAX_IO_SIZE (Alpha)

    The dynamic system parameter VCC_MAX_IO_SIZE controls the maximum
    size of I/O that can be cached by the Extended File Cache. It
    specifies the size in blocks. By default, the size is 127 blocks.

    Changing the value of VCC_MAX_IO_SIZE affects reads and writes to
    volumes currently mounted on the local node, as well as reads and
    writes to volumes mounted in the future.

    If VCC_MAX_IO_SIZE is 0, the Extended File Cache on the local
    node cannot cache any reads or writes. However, the system is
    not prevented from reserving memory for the Extended File Cache
    during startup if a VCC$MIN_CACHE_SIZE entry is in the reserved
    memory registry.

    VCC_MAX_IO_SIZE is a DYNAMIC parameter.
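
    Because the parameter is dynamic, it can be changed on a
    running system; for example (the value shown is illustrative):

     $ RUN SYS$SYSTEM:SYSGEN
     SYSGEN> USE ACTIVE
     SYSGEN> SET VCC_MAX_IO_SIZE 256
     SYSGEN> WRITE ACTIVE
     SYSGEN> EXIT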

 13.11 - VCC_READAHEAD (Alpha)

    The dynamic system parameter VCC_READAHEAD controls whether
    the Extended File Cache can use read-ahead caching. Read-
    ahead caching is a technique that improves the performance of
    applications that read data sequentially.

    By default VCC_READAHEAD is 1, which means that the Extended File
    Cache can use read-ahead caching. The Extended File Cache detects
    when a file is being read sequentially in equal-sized I/Os, and
    fetches data ahead of the current read, so that the next read
    instruction can be satisfied from cache.

    To stop the Extended File Cache from using read-ahead caching,
    set VCC_READAHEAD to 0.

    Changing the value of VCC_READAHEAD affects volumes currently
    mounted on the local node, as well as volumes mounted in the
    future.

    Readahead I/Os are totally asynchronous from user I/Os and only
    take place if sufficient system resources are available.

    VCC_READAHEAD is a DYNAMIC parameter.

 13.12 - WBM_MSG_INT

    WBM_MSG_INT is one of three system parameters that are available
    for managing the update traffic between a master write bitmap
    and its corresponding local write bitmaps in an OpenVMS Cluster
    system. (Write bitmaps are used by the volume shadowing software
    for minicopy operations.) The others are WBM_MSG_UPPER and
    WBM_MSG_LOWER. These parameters set the interval at which the
    frequency of sending messages is tested and also set an upper and
    lower threshold that determine whether the messages are grouped
    into one SCS message or are sent one by one.

    In single-message mode, WBM_MSG_INT is the time interval in
    milliseconds between assessments of the most suitable write
    bitmap message mode. In single-message mode, the writes issued by
    each remote node are, by default, sent one by one in individual
    SCS messages to the node with the master write bitmap. If the
    writes sent by a remote node reach an upper threshold of messages
    during a specified interval, single-message mode switches to
    buffered-message mode.

    In buffered-message mode, WBM_MSG_INT is the maximum time a
    message waits before it is sent. In buffered-message mode, the
    messages are collected for a specified interval and then sent
    in one SCS message. During periods of increased message traffic,
    grouping multiple messages to send in one SCS message to the
    master write bitmap is generally more efficient than sending each
    message separately.

    The minimum value of WBM_MSG_INT is 10 milliseconds. The maximum
    value is -1, which corresponds to the maximum positive value that
    a longword can represent. The default is 10 milliseconds.

    WBM_MSG_INT is a DYNAMIC parameter.

 13.13 - WBM_MSG_LOWER

    WBM_MSG_LOWER is one of three system parameters that are
    available for managing the update traffic between a master write
    bitmap and its corresponding local write bitmaps in an OpenVMS
    Cluster system. (Write bitmaps are used by the volume shadowing
    software for minicopy operations.) The others are WBM_MSG_INT
    and WBM_MSG_UPPER. These parameters set the interval at which the
    frequency of sending messages is tested and also set an upper and
    lower threshold that determine whether the messages are grouped
    into one SCS message or are sent one by one.

    WBM_MSG_LOWER is the lower threshold for the number of messages
    sent during the test interval that initiates single-message mode.
    In single-message mode, the writes issued by each remote node
    are, by default, sent one by one in individual SCS messages
    to the node with the master write bitmap. If the writes sent
    by a remote node reach an upper threshold of messages during a
    specified interval, single-message mode switches to buffered-
    message mode.

    The minimum value of WBM_MSG_LOWER is 0 messages per interval.
    The maximum value is -1, which corresponds to the maximum
    positive value that a longword can represent. The default is
    10.

    WBM_MSG_LOWER is a DYNAMIC parameter.

 13.14 - WBM_MSG_UPPER

    WBM_MSG_UPPER is one of three system parameters that are
    available for managing the update traffic between a master write
    bitmap and its corresponding local write bitmaps in an OpenVMS
    Cluster system. (Write bitmaps are used by the volume shadowing
    software for minicopy operations.) The others are WBM_MSG_INT
    and WBM_MSG_LOWER. These parameters set the interval at which the
    frequency of sending messages is tested and also set an upper and
    lower threshold that determine whether the messages are grouped
    into one SCS message or are sent one by one.

    WBM_MSG_UPPER is the upper threshold for the number of messages
    sent during the test interval that initiates buffered-message
    mode. In buffered-message mode, the messages are collected for a
    specified interval and then sent in one SCS message.

    The minimum value of WBM_MSG_UPPER is 0 messages per interval.
    The maximum value is -1, which corresponds to the maximum
    positive value that a longword can represent. The default is
    100.

    WBM_MSG_UPPER is a DYNAMIC parameter.

 13.15 - WBM_OPCOM_LVL

    WBM_OPCOM_LVL controls whether write bitmap system messages are
    sent to the operator console. (Write bitmaps are used by the
    volume shadowing software for minicopy operations.) Possible
    values are shown in the following table:

    Value Description

    0     Messages are turned off.
    1     The default; messages are provided when write bitmaps are
          started, deleted, and renamed, and when the SCS message
          mode (buffered or single) changes.
    2     All messages for a setting of 1 are provided plus many
          more.

    WBM_OPCOM_LVL is a DYNAMIC parameter.

  14 - Volume Shadowing for OpenVMS

    Volume Shadowing for OpenVMS introduces three new features: the
    minicopy operation enabled by write bitmaps, new qualifiers that
    provide disaster-tolerant support for OpenVMS Cluster systems,
    and a new /SHADOW qualifier for the INITIALIZE command. These
    features are described in this section.

 14.1 - Minicopy in Compaq Volume Shadowing (Alpha)

    This new minicopy feature of Compaq Volume Shadowing for
    OpenVMS and its enabling technology, write bitmaps, are fully
    implemented on OpenVMS Alpha systems. OpenVMS VAX nodes can
    write to shadow sets that use this feature, but they can
    neither create master write bitmaps nor manage them with DCL
    commands. The minicopy operation is a streamlined copy
    operation. Minicopy is designed to be used in place of a full
    copy operation when you return a shadow set member to the
    shadow set. When a member has been removed from a shadow set, a
    write bitmap tracks the changes that are made to the shadow set
    in its absence.

    When the member is returned to the shadow set, the write bitmap
    is used to direct the minicopy operation. While the minicopy
    operation is taking place, the application continues to read
    and write to the shadow set.

    Thus, minicopy can significantly decrease the time it takes
    to return the member to membership in the shadow set and can
    significantly increase the availability of the shadow sets that
    use this feature.

    Typically, a shadow set member is removed from a shadow set to
    back up the data on the disk. Before the introduction of the
    minicopy feature, Compaq required that the virtual unit (the
    shadow set) be dismounted to back up the data from one of the
    members. This requirement has been removed, provided that the
    guidelines for removing a shadow set member for backup purposes,
    as documented in Volume Shadowing for OpenVMS, are followed.

    For more information about this new feature, including additional
    memory requirements for this version of Compaq Volume Shadowing
    for OpenVMS, refer to Volume Shadowing for OpenVMS.
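
    For example, a backup cycle using minicopy might look like the
    following sketch; the member device, virtual unit, and volume
    label are illustrative, and the /POLICY=MINICOPY qualifier is
    described in Volume Shadowing for OpenVMS:

     $ DISMOUNT/POLICY=MINICOPY $1$DGA23:   ! a write bitmap is created
     $ ! ... back up the removed member ...
     $ MOUNT/SYSTEM/POLICY=MINICOPY DSA42: /SHADOW=($1$DGA23:) DATA1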

 14.2 - Multiple-Site OpenVMS Cluster Systems

    OpenVMS Version 7.3 introduces new command qualifiers for the
    DCL commands DISMOUNT and SET for use with Volume Shadowing for
    OpenVMS. These new command qualifiers provide disaster tolerant
    support for multiple-site OpenVMS Cluster systems. Designed
    primarily for multiple-site clusters that use Fibre Channel for
    a site-to-site storage interconnect, they can be used in other
    configurations as well. For more information about using these
    new qualifiers in a multiple-site OpenVMS Cluster system, see the
    white paper Using Fibre Channel in a Disaster-Tolerant OpenVMS
    Cluster System, which is posted on the OpenVMS Fibre Channel web
    site at:

    http://www.openvms.compaq.com/openvms/fibre/

    The new command qualifiers are described in this section. Using
    DISMOUNT and SET Qualifiers describes how to use these new
    qualifiers.

    DISMOUNT/FORCE_REMOVAL ddcu:

    One new qualifier to the DISMOUNT command,
    DISMOUNT/FORCE_REMOVAL ddcu:, is provided. If connectivity to a
    device has been lost and the shadow set is in mount
    verification, /FORCE_REMOVAL ddcu: can be used to immediately
    expel a named shadow set member (ddcu:) from the shadow set. If
    you omit this qualifier, the device is not dismounted until
    mount verification completes. Note that this qualifier cannot
    be used in conjunction with the /POLICY=MINICOPY (=OPTIONAL)
    qualifier.

    The device specified must be a member of a shadow set that is
    mounted on the node where the command is issued.

    SET DEVICE

    The following new qualifiers to the SET DEVICE command have
    been created for managing shadow set members located at multiple
    sites:

    o  /FORCE_REMOVAL ddcu:

       If connectivity to a device has been lost and the shadow set
       is in mount verification, this qualifier causes the member to
       be expelled from the shadow set immediately.

       If the shadow set is not currently in mount verification, no
       immediate action is taken. If connectivity to a device has
       been lost but the shadow set is not in mount verification,
       this qualifier lets you flag the member to be expelled from
       the shadow set, as soon as it does enter mount verification.

       The device specified must be a member of a shadow set that is
       mounted on the node where the command is issued.

    o  /MEMBER_TIMEOUT=xxxxxx ddcu:

       Specifies the timeout value to be used for a member of a
       shadow set.

       The value supplied by this qualifier overrides the SYSGEN
       parameter SHADOW_MBR_TMO for this specific device. Each member
       of a shadow set can be assigned a different MEMBER_TIMEOUT
       value.

       The valid range for xxxxxx is 1 to 16,777,215 seconds.

       The device specified must be a member of a shadow set that is
       mounted on the node where the command is issued.

    o  /MVTIMEOUT=yyyyyy DSAnnnn:

       Specifies the mount verification timeout value to be used for
       this shadow set, specified by its virtual unit name, DSAnnnn.

       The value supplied by this qualifier overrides the SYSGEN
       parameter MVTIMEOUT for this specific shadow set.

       The valid range for yyyyyy is 1 to 16,777,215 seconds.

       The device specified must be a shadow set that is mounted on
       the node where the command is issued.
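
       For example, the following command (using an illustrative
       timeout of 3600 seconds and a hypothetical virtual unit
       DSA42:) would override MVTIMEOUT for that shadow set only:

       $ SET DEVICE/MVTIMEOUT=3600 DSA42: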

    o  /READ_COST=zzz ddcu:

       The valid range for zzz is 1 to 4,294,967,295 units.

       The device specified must be a member of a shadow set that is
       mounted on the node where the command is issued.

       This qualifier allows you to modify the default "cost"
       assigned to each member of a shadow set, so that reads are
       biased or prioritized toward one member versus another.

       The shadowing driver assigns default READ_COST values to
       shadow set members when each member is initially mounted.
       The default value depends on the device type, and its
       configuration relative to the system mounting it. There are
       default values for a DECram device; a directly connected
       device in the same physical location; a directly connected
       device in a remote location; a DECram served device; and a
       default value for other served devices.

       The value supplied by this qualifier overrides the default
       assignment. The shadowing driver adds the value of the current
       queue depth of the shadow set member to the READ_COST value
       and then reads from the member with the lowest value.

       Different systems in the cluster can assign different costs to
       each shadow set member.

       If the /SITE command qualifier has been specified, the
       shadowing driver will take site values into account when it
       assigns default READ_COST values. Note that in order for the
       shadowing software to determine if a device is in the category
       of "directly connected device in a remote location," the /SITE
       command qualifier must have been applied to both the shadow
       set and to the individual device.

       Reads requested for a shadow set from a system at Site 1 are
       performed from a shadow set member that is also at Site 1.
       Reads requested for the same shadow set from Site 2 can read
       from the member located at Site 2.
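
       For example, to bias reads away from a hypothetical remote
       member $1$DGA2000:, a command such as the following could be
       used to assign it a higher read cost than its local
       counterpart (the value 500 is illustrative only):

       $ SET DEVICE/READ_COST=500 $1$DGA2000: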

    o  /READ_COST=y DSAnnnn

       The valid range for y is any non-zero number. The value
       supplied has no meaning in itself. The purpose of this
       qualifier is to switch the read cost setting for all shadow
       set members back to the default read cost settings established
       automatically by the shadowing software. DSAnnnn must be a
       shadow set that is mounted on the node from which this command
       is issued.
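
       For example, the following command (using a hypothetical
       virtual unit DSA42: and an arbitrary value of 1) would restore
       the default read cost settings for all members of that shadow
       set:

       $ SET DEVICE/READ_COST=1 DSA42: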

    o  /SITE=(nnn, logical_name) (ddcu: DSAnnnn:)

       This qualifier indicates to the shadowing driver the site
       location of the shadow set member or of the shadow set
       (represented by its virtual unit name). Prior to using
       this qualifier, you can define the site location in the
       SYLOGICALS.COM command procedure to simplify its use.

       The valid range for nnn is 1 through 255.

       The following example shows the site locations defined,
       followed by the use of the /SITE qualifier:

       $ DEFINE/SYSTEM/EXEC ZKO 1
       $ DEFINE/SYSTEM/EXEC LKG 2
       $!
       $! At the ZKO site ...
       $ MOUNT/SYSTEM DSA0/SHAD=($1$DGA0:,$1$DGA1:) TEST
       $ SET DEVICE/SITE=ZKO  DSA0:
       $!
       $! At the LKG site ...
       $ MOUNT/SYSTEM DSA0/SHAD=($1$DGA0:,$1$DGA1:) TEST
       $ SET DEVICE/SITE=LKG  DSA0:
       $!
       $! At both sites, the following would be used:
       $ SET DEVICE/SITE=ZKO  $1$DGA0:
       $ SET DEVICE/SITE=LKG  $1$DGA1:

    o  /COPY_SOURCE (ddcu:,DSAnnnn:)

       Controls whether one or both source members of a shadow
       set are used as the source for read data during full copy
       operations, when a third member is added to the shadow
       set. This only affects copy operations that do not use DCD
       operations.

       HSG80 controllers have a read-ahead cache, which significantly
       improves single-disk read performance. Copy operations
       normally alternate reads between the two source members, which
       effectively nullifies the benefits of the read-ahead cache.
       This qualifier lets you force all reads from a single source
       member for a copy operation.

       If the shadow set is specified, then all reads for full copy
       operations will be performed from whichever disk is the
       current "master" member, regardless of physical location of
       the disk.

       If a member of the shadow set is specified, then that member
       will be used as the source of all copy operations. This allows
       you to choose a local source member, rather than a remote
       master member.
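
       For example, to force all full copy reads to come from a
       hypothetical local member $1$DGA1000: rather than from a
       remote master member, a command such as the following could
       be used:

       $ SET DEVICE/COPY_SOURCE $1$DGA1000: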

    o  /ABORT_VIRTUAL_UNIT DSAnnnn:

       To use this qualifier, the shadow set must be in mount
       verification. When you specify this qualifier, the shadow
       set aborts mount verification immediately on the node from
       which the qualifier is issued. This qualifier is intended to
       be used when it is known that the unit cannot be recovered.
       Note that after this command completes, the shadow set must
       still be dismounted. Use the following command to dismount the
       shadow set:

       $ DISMOUNT/ABORT DSAnnnn

 14.2.1 - Using DISMOUNT and SET Qualifiers

    The diagram in this section depicts a typical multiple-site
    cluster using Fibre Channel. It is used to illustrate the steps
    which must be taken to manually recover one site when the site-
    to-site storage interconnect fails. Note that with current Fibre
    Channel support, neither site can use the MSCP server to regain a
    path to the DGA devices.

    To prevent the shadowing driver from automatically recovering
    shadow sets from connection-related failures, three steps must be
    taken prior to any failure:

    1. Every device that is a member of a multiple-site shadow set
       must have its MEMBER_TIMEOUT setting raised to a high value,
       using the following command:

       $ SET DEVICE /MEMBER_TIMEOUT= x  ddcu:

       This command will override the SHADOW_MBR_TMO value, which
       would normally be used for a shadow set member. A value for x
       of 259200 would be a seventy-two hour wait time.

    2. Every shadow set that spans multiple sites must have its mount
       verification timeout setting raised to a very high value,
       higher than the MEMBER_TIMEOUT settings for each member of the
       shadow set.

       Use the following command to increase the mount verification
       timeout setting for the shadow set:

       $ SET DEVICE /MVTIMEOUT = y  DSAnnnn

       The y value of this command should always be greater than the
       x value used with the $ SET DEVICE/MEMBER_TIMEOUT command.

       The $ SET DEVICE /MVTIMEOUT = y command will override the
       MVTIMEOUT value, which would normally be used for the shadow
       set. A value for y of 262800 would be a seventy-three hour
       wait.

    3. Every shadow set and every shadow set member must have a site
       qualifier. As already noted, a site qualifier will ensure that
       the read cost is correctly set. The site qualifier is also
       critical for three-member shadow sets, because it ensures that
       the master member of the shadow set is properly maintained.

    In the following diagram, shadow set DSA42 is made up of
    $1$DGA1000 and $1$DGA2000.

             <><><><><><><><><><><>  LAN   <><><><><><><><><><><>
             Site A                                Site B
                |                                     |
             F.C. SWITCH  <><><><> XYZZY <><><><>  F.C. SWITCH
                |                                     |
             HSG80 <><> HSG80                      HSG80 <><> HSG80
                |                                     |
             $1$DGA1000  --------- DSA42 --------- $1$DGA2000

    This diagram illustrates that systems at Site A or Site B have
    direct access to all devices at both sites via Fibre Channel
    connections. XYZZY is a theoretical point between the two sites.
    If the Fibre Channel connection were to break at this point,
    each site could access different "local" members of DSA42 without
    error. For the purpose of this example, Site A will be the sole
    site chosen to retain access to the shadow set.
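
    Applied to this configuration, the three preparation steps
    described above might look as follows on a Site A system. The
    timeout values (seventy-two and seventy-three hours) and the
    site values are illustrative only; on a Site B system, the
    virtual unit DSA42: would instead be assigned the Site B value:

    $ SET DEVICE/MEMBER_TIMEOUT=259200 $1$DGA1000:
    $ SET DEVICE/MEMBER_TIMEOUT=259200 $1$DGA2000:
    $ SET DEVICE/MVTIMEOUT=262800 DSA42:
    $ SET DEVICE/SITE=1 $1$DGA1000:
    $ SET DEVICE/SITE=2 $1$DGA2000:
    $ SET DEVICE/SITE=1 DSA42: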

    The following actions must be taken to recover the shadow set at
    Site A.

    On Site A:

    $ DISMOUNT/FORCE_REMOVAL $1$DGA2000:

    Once the command has completed, the shadow set will be available
    for use only at Site A.

    On Site B:

    $ SET DEVICE /ABORT_VIRTUAL_UNIT DSA42:

    Once the command completes, the shadow set status will be
    MntVerifyTimeout.

    Next, issue the following command to free up the shadow set:

    $ DISMOUNT/ABORT DSA42:

    These steps must be taken for all affected multiple-site shadow
    sets.

 14.3 - Using INITIALIZE With SHADOW and ERASE Qualifiers

    A new /SHADOW qualifier is available for the DCL INITIALIZE
    command. Using INITIALIZE/SHADOW to initialize multiple members
    of a future shadow set eliminates the requirement for a full copy
    operation when you later create the shadow set.

    Compaq strongly recommends that you also specify the /ERASE
    qualifier with the INITIALIZE/SHADOW command when initializing
    multiple members of a future shadow set. Whereas the /SHADOW
    qualifier eliminates the need for a full copy operation when
    you later create a shadow set, the /ERASE qualifier reduces the
    amount of time a full merge will take.

    If you omit the /ERASE qualifier, and a merge operation of the
    shadow set is subsequently required (because a system on which
    the shadow set is mounted fails), the resulting merge operation
    will take much longer to complete.

    The INITIALIZE command with the /SHADOW and /ERASE qualifiers
    performs the following operations:

    o  Formats up to six devices with one command, so that any three
       can be subsequently mounted together as members of a new host-
       based shadow set.

    o  Writes a label on each volume.

    o  Deletes all information from the devices except for the system
       files containing identical file structure information. All
       former contents of the disks are lost.

    You can then mount up to three of the devices that you have
    initialized in this way as members of a new host-based shadow
    set.
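
    For example, the following commands (using hypothetical device
    names, virtual unit, and volume label) could be used to
    initialize two future members and then mount them as a new
    shadow set:

    $ INITIALIZE/ERASE/SHADOW=($1$DGA100:,$1$DGA101:) DATA_DISK
    $ MOUNT/SYSTEM DSA100:/SHADOW=($1$DGA100:,$1$DGA101:) DATA_DISK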

    For more information, refer to Volume Shadowing for OpenVMS.